A computer system utilizes subsystem supplemental memory resources to implement operating system supplemental disk caching. A main system processor (e.g., a central processing unit) processes information associated with main system functions. A bulk memory (e.g., a hard disk) stores the information. A main system memory (e.g., a main RAM) caches portions of the bulk information. A subsystem supplemental memory (e.g., a graphics subsystem RAM) provides storage capacity for subsystem operations (e.g., graphics operations) and supplemental storage for portions of said bulk information associated with main system functions (e.g., functions performed by the main system processor). Information (e.g., main system information) cached in the subsystem supplemental memory can be accessed by the main system processor directly.
1. A computer system comprising: a bus for communicating information; a main system processor for processing said information; a bulk storage component for storing said information; and a subsystem supplemental memory for caching a first portion of said bulk information for a main system processor.

2. The computer system of Claim 1 wherein said subsystem supplemental memory is a random access memory.

3. The computer system of Claim 1 wherein a main system memory and said subsystem supplemental memory swap said portions of said bulk information between one another.

4. The computer system of Claim 1 wherein information cached in said subsystem supplemental memory is written to said bulk memory before subsystem specific information is written to said subsystem supplemental memory.

5. A supplemental caching method comprising: storing information in a bulk storage component; caching a portion of said information in a subsystem supplemental memory; and accessing said subsystem supplemental memory to perform storage operations for a main processing component.

6. The supplemental caching method of Claim 5 further comprising performing storage operations including writing and reading portions of said information directly between said subsystem supplemental memory and said main processing component.

7. The supplemental caching method of Claim 5 wherein a subsystem supplemental coordination process comprises writing information from said subsystem supplemental memory to said bulk storage component if a subsystem operation is initiated.

8. A graphics subsystem comprising: a graphics bus for communicating information; a graphics processor for processing graphics information; and a graphics memory for storing graphics information and portions of bulk information associated with non-graphics applications.

9. The graphics subsystem of Claim 8 wherein said graphics processor has priority to storage capacity of said graphics memory.

10. The graphics subsystem of Claim 8 wherein a central processing unit can access said information associated with non-graphics applications from said graphics memory directly.
This Application claims the benefit of a commonly owned and copending U.S. Provisional Patent Application entitled "AN OPERATING SYSTEM SUPPLEMENTAL DISK CACHING SYSTEM AND METHOD", serial number 60/693,581, Client Docket # NVID-P001784.PRO, filed on June 24, 2005, which is incorporated herein by this reference.

FIELD

The present description relates to the field of information storage systems. In particular, the present description relates to an operating system supplemental disk caching system and method.

BACKGROUND

Electronic systems and circuits have made a significant contribution towards the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Numerous electronic technologies such as digital computers, calculators, audio devices, video equipment, and telephone systems facilitate increased productivity and cost reduction in analyzing and communicating data, ideas and trends in most areas of business, science, education and entertainment. Realization of these results often involves processing and storage of significant amounts of information. It is usually important for the information to be communicated from storage mediums to processing units quickly in order to perform a variety of operations properly. However, storage mediums or memories typically have an inverse relationship between storage capacity and access speed.

Information processing systems often include a hierarchy of different memory components varying from relatively large storage capacity with slow access capability to smaller storage capacity with relatively rapid access capability. Conventional computer systems typically include a bulk storage component (e.g., a hard disk memory system) and a main system memory (e.g., a random access memory).
Bulk storage components such as a hard disk can typically store a relatively large amount of information, but reading information from the hard disk or writing information to the hard disk takes a relatively long time. Attempts by a central processing unit to retrieve information directly from a hard disk would significantly slow the overall performance of operations and probably detrimentally impact the end use application results. While a main system memory such as a random access memory (RAM) typically supports faster read and write operations, RAM usually costs significantly more per storage unit (e.g., byte) and typically has relatively limited storage capacity. The limited storage capacity of a conventional main system memory RAM would significantly limit the applications that a computer system could run without a bulk storage component.

Computer systems typically attempt to address the memory size versus speed dilemma by dividing up storage activities between different types of memories in a hierarchical configuration and communicating information between different memory hierarchy components. Processors typically access information from a main system memory in relatively fast accesses of small pieces of information. The main system memory in turn exchanges relatively large pieces of information with a relatively slow bulk storage component such as a hard disk. Input and output memory access operations can be a key bottleneck in operating system performance.

The exchange of information within the hierarchy of memories is often referred to as disk caching. A cache is usually a memory that holds recently accessed data in a manner designed to speed up subsequent access to the same data. When data is read from or written to a hard disk, a copy is also saved in the cache. The cache monitors disk reads to see if the required data is already in the cache.
If the information is already in the cache, then the information is returned immediately without attempting a disk read. The disk cache uses the system memory, so a "cache hit" takes much less time to complete. However, because system memory is used, operating systems and applications have less memory available for other information.

A common feature of operating systems is a swap file. A swap file uses the hard disk as virtual memory. When more memory is requested than actually physically exists, sections of memory are written to the hard disk to simulate more memory. While swap files do permit simulation of additional memory, performance is still degraded in the sense that accessing the information takes longer because the program uses the much slower swap file to retrieve the information from the hard disk.

SUMMARY

Embodiments of the present invention's operating system supplemental disk caching system and method provide convenient and efficient information storage and access. Information can be stored and accessed in an automated manner that conserves memory resources and expedites access. The present invention can facilitate flexible access to information by leveraged utilization of subsystem storage components (e.g., a graphics subsystem memory) to store information for a main system processor.

In one embodiment, a computer system utilizes subsystem memory resources to implement operating system supplemental disk caching. A main system processor (e.g., a central processing unit) processes information associated with main system functions. A bulk storage component (e.g., a hard disk) stores the bulk information (e.g., application program instructions and data). A main system memory (e.g., a main system RAM) caches portions of the bulk information.
A subsystem supplemental memory (e.g., a graphics subsystem RAM) provides storage capacity for subsystem operations (e.g., graphics operations) and supplemental storage for information associated with main system functions (e.g., functions performed by the main system processor). A subsystem supplemental coordination process is performed in which information is written from the subsystem supplemental memory to the bulk storage component if a subsystem operation is initiated.

DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention by way of example and not by way of limitation. The drawings referred to in this specification should be understood as not being drawn to scale except if specifically noted.

Figure 1 is a flow chart of an exemplary supplemental caching method in accordance with one embodiment of the present invention.

Figure 2 is a block diagram of an exemplary computer system in accordance with one embodiment of the present invention.

Figure 3 is a block diagram of an exemplary computer system that includes a graphics subsystem in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION

Reference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention.
However, it will be obvious to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present invention.

Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means generally used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art. A procedure, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic, optical, or quantum signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present application, discussions utilizing terms such as "processing", "computing", "calculating", "determining", "displaying" or the like refer to the action and processes of a computer system, or similar processing device (e.g., an electrical, optical, or quantum computing device), that manipulates and transforms data represented as physical (e.g., electronic) quantities. The terms refer to actions and processes of the processing devices that manipulate or transform physical quantities within a computer system's components (e.g., registers, memories, logic, other such information storage, transmission or display devices, etc.) into other data similarly represented as physical quantities within other components.

The present invention facilitates efficient and convenient storage of information. In one embodiment of the present invention, flexible hierarchical memory enables leveraged utilization of hardware components for information storage and communication activities as well as a variety of other activities. For example, embodiments of a present invention processing device can utilize subsystem supplemental memories (e.g., a graphics subsystem memory) to provide operating system supplemental disk caching. Information utilized by a variety of main system applications can be stored in secondary subsystem supplemental memories. Leveraged utilization of the storage capabilities of subsystem supplemental memories (e.g., a graphics subsystem memory, etc.) can facilitate rapid and convenient access to the information.

Figure 1 is a flow chart of exemplary supplemental caching method 100 in accordance with one embodiment of the present invention. In one embodiment, supplemental caching method 100 facilitates efficient and convenient storage of and access to information in an information processing system.
For example, supplemental caching method 100 can utilize otherwise idle subsystem memory to cache main system function information for a main system processor (e.g., a central processing unit).

In step 110, information is stored in a bulk storage component. In one embodiment of the present invention, the bulk information is stored on a hard disk. It is appreciated that bulk information can be stored on a variety of bulk storage components including CD-ROMs, DVDs, and/or network files.

In step 120, a portion of the information is cached in a subsystem supplemental memory. In one embodiment, a portion of the information is communicated from the bulk storage component to the subsystem supplemental memory. In one exemplary implementation, the subsystem is a graphics subsystem and the information is cached in a graphics subsystem memory. For example, the information is communicated directly between a hard disk and the graphics subsystem memory.

In step 130, the subsystem supplemental memory is accessed to perform storage operations for a main processing component. In one embodiment, information is communicated directly between the subsystem supplemental memory and the main system processing unit (e.g., a central processing unit). In one embodiment of the present invention, performing storage operations for the main processing component includes writing and reading portions of the information directly between the subsystem supplemental memory and the main processing component.

In step 140, a subsystem supplemental coordination process is performed. In one embodiment, the subsystem supplemental coordination process comprises writing information from the subsystem supplemental memory to the bulk storage component if a subsystem operation is initiated. For example, information is written from the subsystem supplemental memory to the bulk storage component if a subsystem attempts to store primary subsystem function related information in the subsystem supplemental memory.
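Steps 110 through 140 can be summarized in a minimal sketch. This is an illustrative model only, not an implementation from the description: the class and method names, the use of dictionaries for the memories, and the whole-cache write-back in the coordination step are all assumptions made for clarity.

```python
class SupplementalCache:
    """Hypothetical model of supplemental caching method 100.

    bulk_store models the bulk storage component (e.g., a hard disk);
    sub_memory models the subsystem supplemental memory (e.g., a
    graphics subsystem RAM). Names and granularity are illustrative.
    """

    def __init__(self):
        self.bulk_store = {}   # holds bulk information (step 110)
        self.sub_memory = {}   # holds cached portions (step 120)

    def store_bulk(self, key, data):
        # Step 110: store information in the bulk storage component.
        self.bulk_store[key] = data

    def cache_in_subsystem(self, key):
        # Step 120: cache a portion of the information in the
        # subsystem supplemental memory.
        self.sub_memory[key] = self.bulk_store[key]

    def read_for_main_processor(self, key):
        # Step 130: the main processing component accesses the subsystem
        # supplemental memory directly on a hit; otherwise it falls back
        # to the (slower) bulk storage component.
        if key in self.sub_memory:
            return self.sub_memory[key]
        return self.bulk_store[key]

    def on_subsystem_operation(self):
        # Step 140: coordination process - when a subsystem operation is
        # initiated, write cached main system information back to the
        # bulk storage component and yield the memory to the subsystem.
        for key, data in self.sub_memory.items():
            self.bulk_store[key] = data
        self.sub_memory.clear()
```

After `on_subsystem_operation()`, reads for the main processing component still succeed from the bulk storage component; only the fast path through the subsystem supplemental memory is given up.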
In one embodiment, the information associated with the primary subsystem function is graphics information.

In one embodiment of the present invention, a supplemental caching method (e.g., supplemental caching method 100) includes caching another portion of the information in a main system memory. In one embodiment, information associated with a first application is cached in the main system memory and information associated with a second application is cached in the subsystem supplemental memory. In one exemplary implementation, information is exchanged between the main system memory and the subsystem supplemental memory. For example, information is written between the subsystem supplemental memory and the main memory.

Figure 2 is a block diagram of exemplary computer system 200 in accordance with one embodiment of the present invention. Computer system 200 includes bulk memory 210, central processing unit (CPU) 220, main memory 230 and secondary subsystem 240. Secondary subsystem 240 includes subsystem processor 241 and subsystem supplemental memory 242. Bulk memory 210, central processing unit (CPU) 220, main memory 230 and secondary subsystem 240 are communicatively coupled to bus 250. Subsystem processor 241 is communicatively coupled to subsystem supplemental memory 242.

The components of computer system 200 cooperatively operate to provide information processing and operating system supplemental disk caching. Bus 250 communicates information between the components of computer system 200. Central processing unit 220 processes the information. Bulk memory 210 provides bulk storage capacity for the information. Main memory 230 caches portions of the bulk information for central processing unit 220. Subsystem 240 provides support for subsystem operations (e.g., graphics operations).
Subsystem processor 241 processes information associated with subsystem functions (e.g., graphics functions) and subsystem supplemental memory 242 stores information (e.g., frame buffer information) for subsystem processor 241. Subsystem 240 also provides operating system supplemental disk caching capabilities for central processing unit 220. In one exemplary implementation, subsystem supplemental memory 242 caches portions of the bulk information for central processing unit 220. In one embodiment of the present invention, subsystem 240 is a graphics subsystem in which subsystem processor 241 is a graphics processor and subsystem supplemental memory 242 is a graphics subsystem memory.

In one embodiment of the present invention, information can be communicated or swapped directly between bulk memory 210 and main memory 230 and/or subsystem supplemental memory 242. In one exemplary implementation, subsystem supplemental memory 242 acts as a main storage component for subsystem processor 241 and as a "supplemental main" memory for central processing unit 220. In one embodiment, storage of information in subsystem supplemental memory 242 is coordinated between main system functions and subsystem functions. In one exemplary implementation, storage of subsystem information (e.g., graphics information) in the secondary subsystem memory is given priority over main system storage. In the present example, subsystem supplemental memory coordination includes writing information associated with main system functions from subsystem supplemental memory to bulk memory before overwriting the main system information with the subsystem information.
For example, if subsystem 240 is a graphics subsystem, main system information stored in subsystem supplemental memory 242 is written to bulk memory 210 before graphics operations cause the main system function information to be overwritten with graphics function information.

Main memory 230 and/or subsystem supplemental memory 242 can operate as a main memory for central processing unit 220. For example, central processing unit 220 can receive a portion of the information directly from subsystem supplemental memory 242 instead of main memory 230. In one embodiment of the present invention, main memory 230 and subsystem supplemental memory 242 are random access memories (RAMs).

It is appreciated that the present invention is readily implemented in a variety of configurations to provide operating system supplemental disk caching. For example, subsystem supplemental memory 242 can cache portions of the bulk information if main memory 230 is full. Main memory 230 and subsystem supplemental memory 242 can swap portions of the bulk information between one another. Main memory 230 can cache a first portion of the bulk information and subsystem supplemental memory 242 can cache a second portion of the bulk information. The present invention can also be applied to accesses of bulk information from a number of components or systems. For example, accesses to hard drives, CD-ROMs, DVDs, and/or network file accesses can be performed by caching information in a subsystem supplemental memory.

Figure 3 is a block diagram of computer system 300, one embodiment of a computer system upon which embodiments of the present invention can be implemented. Computer system 300 includes central processing unit 301, main system memory 302 (e.g., a random access memory), chip set 303 with north bridge 309 and south bridge 305, removable data storage device 304, input device 307, signal communications port 308, and graphics subsystem 310 which is coupled to display 320.
Computer system 300 includes several busses for communicatively coupling the components of computer system 300. Communication bus 391 (e.g., a front side bus) couples north bridge 309 of chipset 303 to central processing unit 301. Communication bus 392 (e.g., a main memory bus) couples north bridge 309 of chipset 303 to main system memory 302. Communication bus 393 (e.g., the Advanced Graphics Port interface) couples north bridge 309 of chipset 303 to graphics subsystem 310. Communication buses 394-397 (e.g., PCI buses) couple south bridge 305 of chip set 303 to removable data storage device 304, input device 307, and signal communications port 308, respectively. Graphics subsystem 310 includes graphics processor 311 and graphics buffer 315.

The components of computer system 300 cooperatively operate to provide presentations of graphics images. Communication buses 391 through 397 communicate information. Central processing unit 301 processes information. Main system memory 302 stores information and instructions for central processing unit 301. Removable data storage device 304 also stores information and instructions (e.g., functioning as a large information reservoir). Removable data storage device 304 can be a variety of different devices including a hard disk, a CD, a DVD, a jump drive, etc. Input device 307 provides a mechanism for inputting information and/or for pointing to or highlighting information on display 320. Signal communication port 308 provides a communication interface to exterior devices (e.g., an interface with a network). Display 320 displays information in accordance with data stored in graphics buffer 315.

Graphics subsystem 310 performs graphics operations and provides supplemental memory support for central processing unit 301. Graphics processor 311 processes graphics commands from central processing unit 301 and provides the resulting data to graphics supplemental memory 315 for storage and retrieval by display monitor 320.
For example, graphics supplemental memory 315 can provide frame buffer storage for graphics processor 311. Graphics supplemental memory 315 can also provide supplemental main system storage for central processing unit 301. For example, bulk information can be communicated to graphics supplemental memory 315 from removable data storage device 304 and/or from a network resource (not shown) communicatively coupled to signal communication port 308. The information can then be accessed by central processing unit 301 directly from graphics supplemental memory 315.

It is appreciated that the present invention can be implemented in a variety of embodiments. In one exemplary implementation, the present invention can be utilized in processing systems to provide a variety of graphics applications and unrelated applications. For example, the present invention can be utilized to perform processing in a personal computer, personal digital assistant, cell phone, handheld device or any number of platforms for implementing processing. It is also appreciated that references to computer system implementations are exemplary and the present invention is not limited to conventional computer system implementations but is also readily implemented in a variety of electronic systems that include a main system memory and a subsystem supplemental memory. In one exemplary implementation, the present invention can be utilized in processing systems that support a variety of graphics applications including video games. For example, the present invention can be utilized in graphics rendering processes of a game console, personal computer, personal digital assistant, cell phone or any number of platforms for implementing a video game.
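The direct access path described above, combined with the fallback behavior described for Figure 2 (where subsystem supplemental memory can cache bulk information if main memory is full), suggests a simple lookup order for a read by the central processing unit. The sketch below is a hypothetical illustration; the function name, the dictionary-based memories, and the fixed capacities are assumptions, not details from the description.

```python
def read_page(key, main_memory, graphics_memory, bulk_store,
              main_capacity=4, graphics_spare=4):
    """Hypothetical read path: check main system memory first, then
    the graphics supplemental memory, then fall back to the bulk
    storage component. Capacities are illustrative placeholders."""
    if key in main_memory:            # fastest path: main system RAM hit
        return main_memory[key]
    if key in graphics_memory:        # supplemental hit: direct access to
        return graphics_memory[key]   # information cached in graphics RAM
    data = bulk_store[key]            # slowest path: bulk storage read
    # Cache in main memory if there is room; otherwise use spare
    # capacity in the otherwise-idle graphics supplemental memory.
    if len(main_memory) < main_capacity:
        main_memory[key] = data
    elif len(graphics_memory) < graphics_spare:
        graphics_memory[key] = data
    return data
```

The point of the ordering is simply that a hit in either RAM avoids a slow bulk storage access, while the caching policy spills into the graphics memory only when main memory is full.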
It is also appreciated that references to video game application implementations are exemplary and the present invention is not limited to these implementations.

Thus, the present invention facilitates efficient and convenient storage of and access to information in an information processing system. Embodiments of the present invention support maximized component utilization and advance resource conservation by optimizing storage capacity of subsystem memories for main system operations. Using otherwise idle subsystem memory resources makes more memory available for main system applications, speeds up overall hard disk access operations, enables overall increased virtual memory swap speed and facilitates longer hard disk life (e.g., by reducing the number of hard disk accesses and associated mechanical wear and tear). Reduced hard drive accesses can also enable power conservation and longer battery life.

Broadly, this description sets forth the following. A computer system utilizes subsystem supplemental memory resources to implement operating system supplemental disk caching. A main system processor (e.g., a central processing unit) processes information associated with main system functions. A bulk memory (e.g., a hard disk) stores the information. A main system memory (e.g., a main RAM) caches portions of the bulk information. A subsystem supplemental memory (e.g., a graphics subsystem RAM) provides storage capacity for subsystem operations (e.g., graphics operations) and supplemental storage for portions of said bulk information associated with main system functions (e.g., functions performed by the main system processor).
Information (e.g., main system information) cached in the subsystem supplemental memory can be accessed by the main system processor directly.

As short summary statements of some of what this description has presented, the following is offered.

Short Summaries

This writing teaches as a first item a computer system comprising: a bus for communicating information; a main system processor for processing said information; a bulk storage component for storing said information; and a subsystem supplemental memory for caching a first portion of said bulk information for a main system processor.

The computer system of the first item wherein said subsystem supplemental memory is a random access memory.

The computer system of the first item further comprising a main system memory for caching a second portion of said bulk information for a main system processor.

The computer system of the first item wherein said subsystem supplemental memory is a graphics subsystem memory.

The computer system of the first item wherein a main system memory and said subsystem supplemental memory swap said portions of said bulk information between one another.

The computer system of the first item further comprising a subsystem processor for processing subsystem information.

The computer system of the first item wherein said main system processor receives said first portion of said bulk information from said subsystem supplemental memory.

The computer system of the first item wherein information cached in said subsystem supplemental memory is written to said bulk memory before subsystem specific information is written to said subsystem supplemental memory.

The computer system of the first item wherein information cached in said subsystem supplemental memory is written to said bulk memory before graphics information is written to said subsystem supplemental memory.

As a second item, this description presents a supplemental caching method comprising: storing information in a bulk storage component; caching a portion of said information in a subsystem supplemental memory; and accessing said subsystem supplemental memory to perform storage operations for a main processing component.

A supplemental caching method of the second item further comprising performing storage operations including writing and reading portions of said information directly between said subsystem supplemental memory and said main processing component.

A supplemental caching method of the second item further comprising performing a subsystem supplemental coordination process.

A supplemental caching method of the second item wherein said subsystem supplemental coordination process comprises writing information from said subsystem supplemental memory to said bulk storage component if a subsystem operation is initiated.

A supplemental caching method of the second item wherein a subsystem attempts to store subsystem related information in said subsystem supplemental memory.

A supplemental caching method of the second item further comprising caching another portion of said information in a main memory.

A supplemental caching method of the second item further comprising writing information between said subsystem supplemental memory and said main memory.

As a third item, this writing discloses a graphics subsystem comprising: a graphics bus for communicating information; a graphics processor for processing graphics information; and a graphics memory for storing graphics information and portions of bulk information associated with non-graphics applications.

A graphics subsystem of the third item wherein said graphics memory includes a frame buffer memory.

A graphics subsystem of the third item wherein said graphics processor has priority to storage capacity of said graphics memory.

A graphics subsystem of the third item wherein a central processing unit can access said information associated with non-graphics applications from said graphics memory directly.

The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents. In the claims, the order of elements does not imply any particular order of operations, steps, or the like, unless a particular element makes specific reference to another element as coming before or after.
A system and method for enabling or disabling clocks to one or more portions of hardware circuitry, for example a display sub-system of a personal computer. A processor generates a command or data to a first circuit configured to perform a function based at least on the command or data. A clock generator selectively supplies clocks to the first circuit and a second circuit configured to perform a second function. A software interface circuit coupled to the processor and the clock generator autonomously determines based at least on the command or data whether the second circuit will perform the second function or be idle in an upcoming period and disables one or more of the clocks to the second circuit if the second circuit will be idle in the upcoming period.
1. A system comprising: a processor for generating commands or data; a first circuit configured to receive the command or data and to perform a first function based at least on the command or data; a second circuit configured to perform a second function; a clock generator configured to selectively supply one or more clocks to the first circuit and the second circuit; and a software interface circuit coupled to the processor and the clock generator, the software interface circuit configured to determine, based at least on the command or data, whether the second circuit will perform the second function or will be idle in an upcoming cycle, and to disable one or more of the clocks for the second circuit if the second circuit will be idle in the upcoming cycle.

2. The system of claim 1, wherein said software interface circuit is further configured to determine that said first circuit will be active in said upcoming cycle but is currently idle, and to enable the one or more clocks for the first circuit prior to said upcoming cycle.

3. The system of claim 1, wherein the processor is further configured to transmit a status command to the software interface circuit, and the software interface circuit is configured to determine, based at least on the status command, whether the second circuit will perform the second function or will be idle during the upcoming cycle.

4. The system of claim 1, wherein said system comprises a plurality of clock domains, and said software interface circuit is further configured to determine, based at least on said command or data, a number of clock domains of said plurality of clock domains that will be idle and can be disabled in said upcoming cycle, and to disable that number of clock domains during the upcoming cycle.

5. The system of claim 1, wherein said system comprises a scan output engine.

6. A method comprising: generating commands or data in a processor; transmitting the command or data from the processor to a first circuit configured to perform a first function based at least on the command or data; selectively supplying one or more clocks to the first circuit and to a second circuit configured to perform a second function; determining, in a software interface circuit coupled to the processor and the clock generator and based at least on the command or data, whether the second circuit will perform the second function or will be idle in an upcoming cycle; and disabling one or more of the clocks for the second circuit if the second circuit will be idle in the upcoming cycle.

7. A system comprising: a processor for generating commands or data; a first circuit configured to receive the command or data and perform a first function based at least on the command or data; a second circuit configured to perform a second function; a clock generator configured to selectively supply one or more clocks to the first circuit and the second circuit; and means for determining, based at least on the command or data, whether the second circuit will perform the second function or will be idle in an upcoming cycle, and for disabling one or more of the clocks for the second circuit if the second circuit will be idle in the upcoming cycle.
Work-Based Clock Management for Display Subsystems

Cross-Reference to Related Applications

The present application claims priority to U.S. Provisional Application Serial No. 60/794,221, filed on Apr. 20, the entire disclosure of which is incorporated herein by reference.

Technical Field

In general, the present invention relates to power savings in electronic circuits, such as power savings implemented through clock management of display subsystems that may be present in personal computers (PCs) or laptop PCs.

Background

FIG. 1 is a generalized block diagram of a prior art computer system 100. In computer system 100, central processing unit (CPU) 105 communicates with system memory 115 via bus interface 110. I/O interface 130 receives user input from one or more user input devices 135 (e.g., a keyboard or mouse) and forwards the input to CPU 105. Visual output is provided on display device 145 (e.g., a CRT or LCD monitor) by means of graphics subsystem 140. System disk 120 (e.g., a hard drive) is coupled to I/O interface 130 or bus interface 110.

Clock generator 150 supplies clocks to various components of computer system 100 at a variety of frequencies. For example, clock generator 150 can provide a number of different clocks (e.g., at different frequencies) to drive various hardware circuits within graphics subsystem 140. Clock generator 150 can supply one or more clocks to a digital-to-analog converter (DAC, not shown) in graphics subsystem 140 so that the DAC can generate an analog signal to display device 145, while clock generator 150 can also supply other clocks to other circuit components (e.g., I/O interface 130). A clock is required for the various hardware circuits in computer system 100 to perform their respective functions.

However, at any point in time, portions of the circuitry in computer system 100 may be idle, not performing their functions. When a circuit is idle, computer system 100 can disable the clock for the idle circuit to conserve power.
For example, to extend battery life when computer system 100 is a laptop PC, software components running on CPU 105 can instruct clock generator 150 to disable one or more of the clocks supplied to an idle circuit.

However, relying on software components in computer system 100 to control the enabling and disabling of clocks supplied by clock generator 150 has limitations. For example, providing clock management functionality in a software component can increase the overall complexity of the software running on CPU 105. Moreover, software intervention can cause delays, because the software may take a relatively long time to determine that a portion of the circuitry in computer system 100 is idle, determine whether the clock for the idle circuit can be safely disabled, and then send a command or signal to clock generator 150 to disable the clock. Additionally, software components running on CPU 105 may not always "know" the exact state of the hardware in computer system 100. There may therefore be some inefficiency in deciding which clocks to disable, or some uncertainty as to when it is safe to disable a clock. In an extreme case, if one software component has disabled a portion of the logic and another software component attempts to write to the disabled logic, the software may "hang" or crash.
In some cases, software may read the state of the hardware before writing to it, but providing such a mechanism may increase the complexity of both the software and the hardware in computer system 100.

Summary of the Invention

In one aspect, a system includes a processor configured to generate a command or data for a first circuit, the first circuit configured to perform a first function based at least on the command or data; a second circuit configured to perform a second function; a clock generator configured to selectively supply one or more clocks to the first circuit and the second circuit; and a software interface circuit coupled to the processor and the clock generator. The software interface circuit is configured to autonomously determine, based at least on the command or data, whether the second circuit will perform the second function or will be idle during an upcoming cycle, and to disable one or more of the clocks for the second circuit if the second circuit will be idle during the upcoming cycle.

In some embodiments, the software interface circuit is further configured to determine that the first circuit will be active in an upcoming cycle but is currently idle, and to enable one or more clocks for the first circuit before the upcoming cycle. In some embodiments, the processor can be further configured to transmit a status command to the software interface circuit, and the software interface circuit is configured to determine, based at least on the status command, whether the second circuit will perform the second function or will be idle during the upcoming period. In some embodiments, the upcoming cycle may include, for example, a period during which data is written to the first circuit, or, where the system includes an isochronous graphics engine, the upcoming cycle may be a refresh cycle.
In some embodiments, the clock generator can be further configured to send an acknowledgment to the software interface circuit after a predetermined time has elapsed for disabling the one or more clocks for the second circuit.

In another aspect, a method includes: generating a command or data in a processor; transmitting the command or data from the processor to a first circuit, the first circuit configured to perform a first function based at least on the command or data; selectively supplying one or more clocks to the first circuit and to a second circuit configured to perform a second function; determining, in a software interface circuit coupled to the processor and the clock generator and based at least on the command or data, whether the second circuit will perform the second function or will be idle during an upcoming cycle; and disabling one or more of the clocks for the second circuit if the second circuit will be idle during the upcoming cycle.

Brief Description of the Drawings

FIG. 1 is a generalized block diagram of a prior art computer system.

FIG. 2 is a block diagram of a computer system in accordance with an embodiment of the present invention.

FIG. 3 illustrates further details of the scan output module of FIG. 2 in accordance with an embodiment of the present invention, the scan output module including a software interface circuit configured to autonomously disable clocks for one or more portions of the scan output module.

FIG. 4 illustrates a simplified method of work-based clock management in the scan output module of FIGS. 2 and 3 in accordance with one embodiment of the present invention.

Detailed Description

In general, the present invention relates to a system and method for disabling or enabling clocks for one or more portions of a hardware circuit based on commands and/or data sent from a processor to the hardware circuit.
In general, the software interface circuitry receives all or substantially all of the communications from the processor to the hardware circuitry, either by being in the communication path or by "sniffing" the communications. The software interface circuitry determines, based on the commands and/or data sent by the processor, which portions of the hardware circuitry will need to be enabled during an upcoming cycle and which portions will be idle during the upcoming cycle. For portions of the hardware circuitry that will be idle during the upcoming cycle, the software interface circuitry disables one or more clocks for the idle circuitry, for example by commanding a clock generator to disable those clocks. In an exemplary embodiment, the hardware circuit is an isochronous display subsystem of a personal computer.

FIG. 2 is a block diagram of a computer system 200 in accordance with an embodiment of the present invention. Computer system 200 includes a central processing unit (CPU) 202 and system memory 204 that communicate via bus 206. User input is received from one or more user input devices 208 (e.g., a keyboard or mouse) coupled to bus 206. Visual output is provided on a pixel-based display device 210 (e.g., a conventional CRT- or LCD-based monitor) operating under the control of a graphics processing subsystem 212 coupled to system bus 206. System disk 240 and other components, such as one or more removable storage devices 229 (e.g., a floppy disk drive, a compact disk (CD) drive, and/or a DVD drive), can also be coupled to system bus 206. System bus 206 can be constructed using one or more of a variety of bus protocols, including PCI (Peripheral Component Interconnect), AGP (Accelerated Graphics Port), and/or PCI-Express (PCI-E).
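The work-based decision just described can be summarized in a minimal sketch. This is an illustrative model only, not the circuit itself: it assumes each command carries a tag identifying the circuit block it targets, and all names (`plan_clocks`, the domain labels) are invented for the example.

```python
def plan_clocks(upcoming_commands, currently_enabled):
    """Work-based clock planning sketch: a domain is needed in the upcoming
    cycle only if some command or data targets it; all other domains idle."""
    needed = {cmd["target"] for cmd in upcoming_commands}
    to_enable = needed - currently_enabled          # idle now, active soon
    to_disable = currently_enabled - needed         # active now, idle soon
    # An always-on domain (e.g., the clock master itself) is never gated.
    to_disable -= {"always_on"}
    return to_enable, to_disable


enabled = {"always_on", "head_b", "dac"}
commands = [{"target": "head_a", "op": "register_write"}]
enable, disable = plan_clocks(commands, enabled)
```

Here a single register write aimed at `head_a` causes that domain's clock to be enabled before the cycle, while the clocks for the untargeted `head_b` and `dac` domains are candidates for disabling.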
Suitable "bridge" chips, such as a Northbridge and a Southbridge (not shown), may be provided to interconnect the various components and/or buses.

Graphics processing subsystem 212 is an isochronous pipeline processor having deterministic control for generating images on display device 210. As used herein, an isochronous processor includes any data processing device configured to receive input data and/or deliver output data on a prescribed schedule. For example, isochronous graphics processing subsystem 212 can be configured to deliver an output signal to display device 210 at a specified frame rate, which can be a programmable rate. An isochronous pipeline graphics processor is further described in U.S. Patent Application Serial No. 10/901,887, the entire disclosure of which is incorporated herein by reference. To maintain focus on the present invention, the isochronous pipeline graphics processor is described only generally below, and specific details deemed unnecessary for an understanding of the present invention are omitted from the present disclosure.

Graphics processing subsystem 212 includes a graphics processing unit (GPU) 214 and graphics memory 216, which may be constructed, for example, using one or more integrated circuit devices such as programmable processors, application specific integrated circuits (ASICs), and memory devices. GPU 214 includes graphics pipeline 220, memory interface module 222, and scan output module 224. Graphics pipeline 220 may be configured to perform various tasks related to generating pixel data from graphics data supplied via system bus 206 (e.g., implementing various 2D and/or 3D rendering algorithms), interacting with graphics memory 216 to store and update pixel data, and the like. Memory interface module 222, which communicates with graphics pipeline 220 and scan output module 224, manages all interactions with graphics memory 216.
Memory interface module 222 may also include a path for writing pixel data received from system bus 206 to graphics memory 216 without the data being processed by graphics pipeline 220.

Graphics memory 216, which may be constructed using one or more integrated circuit memory devices of generally conventional design, may include various physical or logical sub-portions, such as pixel buffer 226 and command buffer 228. Pixel buffer 226 stores pixel data for an image (or a portion of an image) that is read and processed by scan output module 224 and transmitted to display device 210 for display. This pixel data can be generated, for example, from 2-D or 3-D scene data provided to graphics pipeline 220 of GPU 214 via system bus 206, or by various processes executing on CPU 202 and provided to pixel buffer 226 via system bus 206. In some embodiments, pixel buffer 226 can be double buffered, so that while data for a first image is being read from the "front" buffer for display, data for a second image can be written to the "back" buffer without affecting the currently displayed image. Command buffer 228 queues commands received via system bus 206 for execution by graphics pipeline 220 and/or scan output module 224. Other portions of graphics memory 216 may be used to store data required by GPU 214 (e.g., texture data, color lookup tables, etc.), executable program code for GPU 214, and the like.

Scan output module 224, which may be integrated with GPU 214 in a single chip or implemented in a separate chip, reads pixel color data from pixel buffer 226, processes the pixel color data, and transfers the processed pixel data to display device 210 for display. In one embodiment, scan output module 224 operates isochronously, scanning out frames of pixel data at a specified refresh rate (e.g., 80 Hz) regardless of any other activity that may be occurring in GPU 214 or elsewhere in system 200.
In some embodiments, the specified refresh rate can be a user-selectable parameter, and the scan output order can be adapted to the display format (e.g., interlaced or progressive scan). Scan output module 224 may also perform other operations, such as adjusting color values for particular display hardware and/or generating a composite screen image by combining the pixel data from pixel buffer 226 with data such as video or cursor overlay images. Such data may be obtained, for example, from graphics memory 216, system memory 204, or another data source (not shown). These operations are performed in the display pipeline of scan output module 224.

During operation of system 200, CPU 202 executes various programs that are (temporarily) resident in system memory 204. In one embodiment, these programs include one or more operating system (OS) programs 232, one or more applications 234, and one or more drivers 236 for graphics processing subsystem 212. It should be appreciated that, although these programs are shown as resident in system memory 204, the invention is not limited to any particular mechanism for supplying program instructions for execution by CPU 202. For instance, at any given time some or all of the program instructions for any of these programs may be present within CPU 202 (e.g., in an on-chip instruction cache and/or various buffers and registers), in a page file or memory-mapped file on system disk 240, and/or in other storage space.

Operating system programs 232 and/or applications 234 can be of conventional design. An application 234 can be, for instance, a video game program that generates graphics data and invokes appropriate rendering functions of GPU 214 (e.g., graphics pipeline 220) to transform the graphics data into pixel data. Another application 234 can generate pixel data and provide the pixel data to graphics processing subsystem 212 for display.
It should be appreciated that any number of applications that generate pixel and/or graphics data can be executing concurrently on CPU 202. Operating system programs 232 (e.g., the graphics device interface (GDI) component of the Microsoft Windows operating system) may also generate pixel and/or graphics data to be processed by graphics processing subsystem 212.

Driver 236 enables communication with graphics processing subsystem 212, including both graphics pipeline 220 and scan output module 224. Driver 236 advantageously implements one or more standard application programming interfaces (APIs), such as OpenGL, Microsoft DirectX, or D3D, for communication with graphics processing subsystem 212; any number of APIs or combinations of APIs can be supported, and in some embodiments a separate driver 236 is provided to implement each different API. By invoking appropriate API function calls, operating system programs 232 and/or applications 234 can instruct driver 236 to transfer graphics data or pixel data to graphics processing subsystem 212 via system bus 206, to control operations of graphics pipeline 220, to modify state parameters of scan output module 224, and so on. The specific commands and/or data transmitted to graphics processing subsystem 212 by driver 236 in response to an API function call may vary depending on the implementation of GPU 214, and driver 236 may also transmit additional commands (e.g., special visual effects commands) and/or data that are not controlled by operating system programs 232 or applications 234.

In some embodiments, command buffer 228 queues the commands received via system bus 206 for execution by GPU 214. More specifically, driver 236 can write a command stream to command buffer 228; the stream can include rendering commands and data for graphics pipeline 220, as well as state commands for scan output module 224.
In some embodiments, command buffer 228 can include logically or physically separate sections for commands directed to graphics pipeline 220 and commands directed to scan output module 224; in other embodiments, the commands can be intermixed in command buffer 228 and directed to the appropriate pipeline by suitable control circuitry within GPU 214.

Command buffer 228 (or each section thereof) is a first-in, first-out buffer (FIFO) that is written by CPU 202 and read by GPU 214. Reads and writes can occur asynchronously. In one embodiment, CPU 202 periodically writes new commands and data to command buffer 228 at a location determined by a "put" pointer, which CPU 202 increments after each write. Asynchronously, GPU 214 can continuously read and process the sets of commands and data previously stored in command buffer 228. GPU 214 maintains a "get" pointer to identify the read location in command buffer 228 and increments the "get" pointer after each read. Provided that CPU 202 stays sufficiently far ahead of GPU 214, GPU 214 is able to render images without incurring idle time waiting for CPU 202. In some embodiments, depending on the size of the command buffer and the complexity of a scene, CPU 202 can write commands and data sets for frames that are several frames ahead of the frame being rendered by GPU 214. Command buffer 228 can be of fixed size (e.g., 5 megabytes) and can be written and read in a wraparound fashion (e.g., after writing to the last location, CPU 202 can reset the "put" pointer to the first location).

In some embodiments, execution of rendering commands by graphics pipeline 220 need not be synchronized with operation of scan output module 224.
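The wraparound behavior of command buffer 228, with its write ("put") and read ("get") pointers, can be modeled as a ring buffer. The sketch below is illustrative only, assuming a fixed slot count and a Python list as the backing store; the class and method names are invented.

```python
class CommandFifo:
    """Sketch of a wraparound FIFO written by a CPU (put) and read by a GPU (get)."""

    def __init__(self, size):
        self.slots = [None] * size
        self.put = 0    # next write position, advanced by the CPU
        self.get = 0    # next read position, advanced by the GPU
        self.count = 0

    def write(self, command):          # CPU side
        if self.count == len(self.slots):
            raise BufferError("FIFO full: CPU must wait for the GPU")
        self.slots[self.put] = command
        self.put = (self.put + 1) % len(self.slots)   # wraparound
        self.count += 1

    def read(self):                    # GPU side, asynchronous to writes
        if self.count == 0:
            return None                # GPU would idle, waiting on the CPU
        command = self.slots[self.get]
        self.get = (self.get + 1) % len(self.slots)
        self.count -= 1
        return command


fifo = CommandFifo(size=3)
for cmd in ("render", "state", "render"):
    fifo.write(cmd)
first = fifo.read()                    # oldest command comes out first
fifo.write("flip")                     # put pointer wraps back to slot 0
order = [fifo.read(), fifo.read(), fifo.read()]
```

As in the text, the writer and reader proceed independently; the modulo arithmetic is what lets the fixed-size buffer be reused indefinitely.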
For example, where pixel buffer 226 is double buffered as mentioned above, graphics pipeline 220 is free to overwrite the back buffer while scan output module 224 reads from the front buffer. Thus, graphics pipeline 220 can read and process commands as they are received. Flipping of the back and front buffers can be synchronized with the end of a scan output frame. For example, when graphics pipeline 220 has completed a new image in the back buffer, operation of graphics pipeline 220 may be paused until the end of scan output for the current frame, at which point the buffers may be flipped. Various techniques for implementing these synchronization features are omitted, as they are not critical to an understanding of the invention.

FIG. 3 illustrates further details of the scan output module 224 of FIG. 2 in accordance with an embodiment of the present invention, the scan output module 224 including a software interface circuit 390 configured to autonomously disable clocks for one or more portions of scan output module 224. In this embodiment, scan output module 224 is divided between two physical integrated circuits (ICs): a local IC 301 and a remote IC 302. However, scan output module 224 can comprise one IC or any number of ICs.

To process the pixel data for display 210, memory interface 310 receives data for processing by one or both of two parallel pixel processing heads (first head 315 and second head 320). In some embodiments, for example, heads 315 and 320 can drive up to two displays 210 simultaneously. Head 315 includes synthesizer 316, data pipeline 317, and raster generator 318. Similarly, head 320 includes, in parallel, synthesizer 321, data pipeline 322, and raster generator 333.
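The double-buffer flip synchronization described above can be illustrated with a small behavioral model. This is a hypothetical sketch, not the hardware mechanism: it assumes strings stand in for frame contents, and the class name is invented.

```python
class DoubleBufferedPixelBuffer:
    """Sketch: render into the back buffer while scanout reads the front buffer;
    flip only at the end of a scanned-out frame."""

    def __init__(self):
        self.front = "frame_0"      # being scanned out to the display
        self.back = None            # being rendered by the graphics pipeline
        self.back_complete = False

    def render(self, frame):        # graphics pipeline side
        self.back = frame
        self.back_complete = True

    def end_of_scanout_frame(self):
        """Scanout side, once per refresh: flip is deferred to the frame boundary."""
        if self.back_complete:
            self.front, self.back = self.back, self.front
            self.back_complete = False
        return self.front


pixels = DoubleBufferedPixelBuffer()
pixels.render("frame_1")
shown_mid_frame = pixels.front                 # still the old image
shown_next = pixels.end_of_scanout_frame()     # flip happens at frame end
```

Rendering never disturbs the image currently being displayed; the new frame becomes visible only at the scan output frame boundary, as the text describes.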
To drive one or more displays 210, the processed video data is output from heads 315 and/or 320, via a virtual crossbar 330, to one or more output resources 340, which include DACs 350, serial output resources 355, and parallel output resources 360.

Local clock control module 380, in conjunction with local clock generator 381, selectively supplies clocks for distribution within local IC 301. Similarly, remote clock control module 385, in conjunction with remote clock generator 386, selectively supplies clocks for distribution within remote IC 302. Although illustrated and described as separate modules, in some embodiments all or part of the functionality of local clock control module 380 is integrated into local clock generator 381, and all or part of the functionality of remote clock control module 385 is integrated into remote clock generator 386. The selective clock distribution is controlled by software interface circuit 390, as further described herein.

Software interface circuit 390 is configured to determine, based on the commands and/or data received by scan output module 224, which portions of scan output module 224 need to be enabled in an upcoming cycle and which portions will be idle in the upcoming cycle. Depending on the functions to be performed by the various components within scan output module 224 in the upcoming cycle, software interface circuit 390 dynamically determines and controls the functional configuration of scan output module 224 (which portions are active and which portions are idle). For portions of scan output module 224 that will be active in the upcoming cycle but are currently idle, software interface circuit 390 enables the clocks for those circuits in advance of, or at the beginning of, the upcoming cycle.
For portions of scan output module 224 that will be idle during the upcoming cycle, software interface circuit 390 may disable one or more clocks for the idle circuits.

In one embodiment, software interface circuit 390 commands local clock control module 380 of local IC 301 and remote clock control module 385 of remote IC 302 in accordance with the functional configuration. In this way, software interface circuit 390 enables or disables, as appropriate, one or more of the clocks supplied to components of scan output module 224 by local clock generator 381 and remote clock generator 386, respectively.

To determine the functional configuration of scan output module 224 in an upcoming cycle, software interface circuit 390 receives and interprets substantially all of the communications entering scan output module 224. In some embodiments, software interface circuit 390 receives a status command from CPU 202 indicating the functional configuration in an upcoming cycle. For example, software running on CPU 202 or graphics processing subsystem 212 can send commands and/or data to software interface circuit 390 via memory-mapped register writes. Software interface circuit 390 determines status information for the upcoming cycle based on the commands and/or data. The status information is automatically updated and becomes the functional configuration of scan output module 224. Based on the functional configuration, software interface circuit 390 determines those portions of scan output module 224 that need to be enabled, or that may be disabled, during the upcoming cycle.

In other embodiments, software interface circuit 390 receives commands and/or data indirectly by "sniffing" bus traffic and interpreting the functional configuration from it.
There are a number of bus interfaces (not shown) between software interface circuit 390 and other components within scan output module 224, and the upcoming cycle may be the duration of receiving data on one of the buses and performing the corresponding write to the hardware. For example, if a software command is interpreted by software interface circuit 390 as indicating that the software intends to write to a register in head 315 in an upcoming register write cycle, and the clock for head 315 is currently disabled, then software interface circuit 390 enables the clock a predetermined time before transmitting the data to the register in head 315. Once the register has been written, software interface circuit 390 can disable the clock for head 315 if there are no more registers to be written in head 315. In another example, software interface circuit 390 can determine that data will be written to configure one of the DACs 350 in an upcoming cycle. Because the data will arrive at DAC 350 from software interface circuit 390, software interface circuit 390 enables the appropriate clock for DAC 350 and then writes to DAC 350 in the upcoming cycle. Once the data has been written and no more data is to be written to DAC 350, software interface circuit 390 can disable the clock for DAC 350.

The upcoming cycle may also correspond to a vertical scan output or raster scan period. For example, software interface circuit 390 can determine the functional configuration for the next raster scan period once per vertical scan output or raster scan period. If, in the upcoming raster scan cycle, for example, the head 315 display logic will be active producing video data while head 320 will be idle, software interface circuit 390 may disable the clocks for the head 320 circuitry in the upcoming raster scan cycle.
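The register-write example above follows a simple enable/write/disable sequence. The sketch below traces that sequence; it is a behavioral illustration under invented names (`register_write_with_gating`, the register labels), not the circuit's actual logic.

```python
def register_write_with_gating(clock_enabled, pending_writes):
    """Sketch of the register-write sequence: enable the head's clock before the
    write, perform all pending writes, then gate the clock off again."""
    trace = []
    if not clock_enabled:
        trace.append("enable_clock")        # done a predetermined time in advance
    for reg, value in pending_writes:
        trace.append(f"write {reg}={value}")
    if pending_writes:                      # no more registers to write: gate off
        trace.append("disable_clock")
    return trace


trace = register_write_with_gating(
    clock_enabled=False,
    pending_writes=[("head315.cfg", 1), ("head315.mode", 2)],
)
```

The clock is live only for the span of the writes, which is the power saving the text attributes to gating on a per-write-cycle basis.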
In another example, if the programming sequence causes one of the DACs 350 to be connected to head 315 or head 320 in an upcoming raster scan cycle to display an image on display 210, software interface circuit 390 can enable the clock for the appropriate DAC 350 for the period during which data is generated for that DAC 350. When software interface circuit 390 determines that DAC 350 is no longer needed for display, software interface circuit 390 can turn off the clock for DAC 350.

To efficiently manage the various functional hardware configurations in software interface circuit 390, the resources in scan output module 224 are subdivided into a predetermined number of discrete clock domains. In one embodiment, the clocks for software interface circuit 390, memory interface 310, local clock control module 380, and remote clock control module 385 form an always-on clock domain that is supplied with a 400 MHz clock. A first dynamic clock domain includes the head 315 display logic (synthesizer 316, data pipeline 317, and raster generator 318). A second dynamic clock domain includes the head 320 display logic (synthesizer 321, data pipeline 322, and raster generator 333). Each clock domain can contain multiple clock frequencies or phases. For example, raster generators 318 and 333 can receive clocks that are different from, and/or in addition to, the clocks supplied to synthesizers 316 and 321 and data pipelines 317 and 322.

Other clock domains may be predetermined for one or more of the output resources 340, such as DAC 350, serial output resource 355, and parallel output resource 360, which operate independently of each other but operate in combination with the head 315 display logic and/or the head 320 display logic.
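The clock-domain partition just described can be written out as a small table. The domain members and the 400 MHz always-on figure come from the text; the dict layout and the `gateable` flag are an illustrative assumption, not the hardware's representation.

```python
# Sketch of the clock-domain partition: one always-on domain plus dynamic
# domains that the software interface circuit may gate independently.
CLOCK_DOMAINS = {
    "always_on": {                      # never gated; supplied with a 400 MHz clock
        "members": ["software_interface_390", "memory_interface_310",
                    "local_clock_ctrl_380", "remote_clock_ctrl_385"],
        "gateable": False,
    },
    "head_315_display": {               # first dynamic clock domain
        "members": ["synthesizer_316", "data_pipeline_317", "raster_gen_318"],
        "gateable": True,
    },
    "head_320_display": {               # second dynamic clock domain
        "members": ["synthesizer_321", "data_pipeline_322", "raster_gen_333"],
        "gateable": True,
    },
    "dac_350": {"members": ["dac_350"], "gateable": True},
}


def gateable_domains(domains):
    """Domains the software interface circuit is allowed to disable."""
    return sorted(name for name, d in domains.items() if d["gateable"])
```

Grouping resources into domains like this is what lets one enable/disable decision cover a whole block of display logic at once.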
Still other clock domains may include other logic in scan output module 224, such as logic or circuitry that is not dedicated to any head or output resource.

Because scan output module 224 is isochronous in the exemplary embodiment, software interface circuit 390 needs the real-time functional configuration, or state, of the hardware in scan output module 224 in order to manage the clock states optimally. For example, software interface circuit 390 cannot simply enable or disable the clock for head 315 at will, because the clock for head 315 must run continuously while head 315 is processing data through data pipeline 317.

In some embodiments, to improve the accuracy of the autonomous clock management in scan output module 224, local clock control module 380 and/or remote clock control module 385 send an acknowledgment (ACK) to software interface circuit 390 after the predetermined time for enabling or disabling the appropriate clock has elapsed. In this way, software interface circuit 390, acting as the "clock master," automatically knows the state of the hardware within scan output module 224 and can accurately determine when a particular clock is enabled or disabled. For example, software interface circuit 390 can be programmed with a predetermined time (e.g., a predetermined worst-case value) for enabling the clock for head 315. If commands or data arrive to access the registers in head 315, software interface circuit 390 can accurately determine the enable time for head 315, such that head 315 is turned on just prior to the register write in the upcoming write cycle.
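The ACK protocol above can be sketched as a tiny state machine: a requested clock change is only trusted once the control module acknowledges it after the predetermined (worst-case) time. All names and the latency value are invented for illustration.

```python
class ClockMaster:
    """Sketch of the ACK protocol: a clock state change is trusted only after
    the clock control module acknowledges it, a predetermined time later."""

    ENABLE_LATENCY = 3   # hypothetical worst-case cycles to switch a clock

    def __init__(self):
        self.state = {}      # domain -> "on" | "off", known-good states
        self.pending = {}    # domain -> (target_state, ready_at_cycle)

    def request(self, domain, target, now):
        """Clock master asks a clock control module to change a clock."""
        self.pending[domain] = (target, now + self.ENABLE_LATENCY)

    def ack(self, domain, now):
        """Control module confirms the change once the latency has elapsed."""
        target, ready_at = self.pending[domain]
        if now >= ready_at:
            self.state[domain] = target
            del self.pending[domain]
            return True
        return False         # too early: the state is not yet guaranteed


master = ClockMaster()
master.request("head_315", "on", now=10)
early = master.ack("head_315", now=11)    # before the worst-case time
late = master.ack("head_315", now=13)     # latency elapsed; state is now known
```

Because the master never records a state before the ACK, it always "knows" the real hardware state, which is the property the text attributes to software interface circuit 390.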
Because software interface circuit 390 acts as the master clock device for the hardware components of scan output module 224, software can simply request that data be sent to the registers of header 315; software interface circuit 390 coordinates the clock enable/disable of header 315, enabling the clock for header 315 for the associated command stream and, if no additional communication is to access header 315, disabling the clock for header 315. In this manner, software interface circuitry 390 advantageously relieves software from having to manage clock enables and disables in scan output module 224.

To avoid clock enable/disable latency, in some embodiments software running on CPU 202 or graphics processing subsystem 212 may selectively enable or disable the autonomous clock management performed by software interface circuitry 390. For example, during initialization or startup of system 200, software may disable the autonomous clock management mode to ensure that all components of system 200 are enabled, for example, to ensure that registers can be reliably written or read during initialization. Software can enable or disable the autonomous clock management mode, for example, by a memory-mapped write to software interface circuitry 390.

FIG. 4 illustrates a simplified method of work-based clock management in scan output module 224 of FIGS. 2 and 3, in accordance with one embodiment of the present invention. At step 405, software running on CPU 202 or graphics processing subsystem 212 generates commands and/or data for a first circuit (e.g., header 315) of scan output module 224. At step 410, software running on CPU 202 or graphics processing subsystem 212 sends the commands and/or data to header 315.
The commands and/or data may include status commands for software interface circuitry 390, as appropriate. At step 415, the software interface circuit 390 of the scan output module 224 receives the commands and/or data (or status command) and determines, based on the commands and/or data, which portions of the scan output module 224 need to be enabled in the upcoming cycle (e.g., header 315 needs to be enabled during a data write cycle to a register in header 315), and which portions will be idle in the upcoming cycle (e.g., DAC 350). At step 420, if the clock of the clock domain containing header 315 is currently disabled, software interface circuit 390 enables the clock of the clock domain containing header 315 before the upcoming cycle. In addition, software interface circuit 390 disables the clock of the clock domain containing DAC 350, which will be idle during the upcoming cycle. At step 425, the local clock control module 380 and/or the remote clock control module 385 acknowledges (ACKs) the enabling of the clock for header 315 and the disabling of the clock for DAC 350.

Although the method described with respect to FIG. 4 includes enabling and disabling clocks for two clock domains of the scan output module 224, the software interface circuit 390 can maintain a functional configuration for a virtually unlimited number of clock domains in the scan output module 224. In some embodiments, the state machines built into software interface circuitry 390 to manage several of the clock domains are relatively large and complex. Thus, at step 415 of FIG. 4, software interface circuit 390 can determine that more than one clock domain needs to be enabled in the upcoming cycle, and that more than one clock domain will be idle. Accordingly, at step 420, software interface circuit 390 can enable clocks for more than one clock domain in the upcoming cycle and disable clocks for more than one clock domain.
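The flow of steps 405 through 425 can be summarized in a minimal sketch; the function and dictionary names below are hypothetical:

```python
# Minimal sketch of the FIG. 4 flow: the software interface circuit
# inspects incoming commands, enables the clock domains needed in the
# upcoming cycle, disables the idle ones, and collects ACKs.
# All names here are illustrative, not from the source.

def manage_clocks(commands, clocks):
    """commands: set of domains addressed in the upcoming cycle.
    clocks: dict mapping domain -> bool (clock currently enabled)."""
    acks = []
    for domain, enabled in clocks.items():
        needed = domain in commands           # step 415: determine need
        if needed and not enabled:
            clocks[domain] = True             # step 420: enable
            acks.append(("enable", domain))   # step 425: ACK
        elif not needed and enabled:
            clocks[domain] = False            # step 420: disable idle
            acks.append(("disable", domain))
    return acks

clocks = {"header_315": False, "dac_350": True}
acks = manage_clocks({"header_315"}, clocks)
# header_315 is enabled for the register write; the idle DAC is gated off
assert clocks == {"header_315": True, "dac_350": False}
```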
Similarly, at step 425, the various clock control modules can ACK the enabling and disabling of clocks for a larger number of clock domains.

In a conventional software-guided clock management scheme, a clock generator supplies a clock to each functional unit in the system, and software can disable each of the clocks, typically by memory-mapped register writes to the hardware. In cases where the system includes a data processing pipeline, the software needs to ensure that there is no data in the pipeline stages when the clock is turned off. Typically, this synchronization is performed with a status signal that is returned from the pipeline to a memory-mapped location. The status signal indicates whether data is present in the pipeline unit, i.e., whether the pipeline is idle. Once this status signal is read via a memory read, the software can direct the clock for the pipeline to be disabled.

In contrast to conventional clock management schemes, in which software controls when to dynamically enable or disable a clock for a particular hardware portion, software interface circuitry 390 autonomously determines when it is safe or suitable to dynamically enable and/or disable the clocks in scan output module 224. Because software interface circuitry 390 automatically understands the functional configuration of the other hardware components within scan output module 224, software interface circuitry 390 is advantageously more accurate in determining whether a particular portion of scan output module 224 can be shut down by disabling its clock. Whereas a conventional software-guided clock-disable mechanism must keep a clock enabled whenever there is any doubt as to whether the clock can be disabled, software interface circuitry 390 can disable clocks relatively frequently and for relatively long periods, making the software interface circuit 390 relatively more efficient in terms of clock management than the software-guided mechanism.

Having software interface circuit 390 as the master clock device in scan output module 224 can significantly reduce, compared to conventional software-controlled clock management schemes, the time needed to determine whether a hardware portion will be active and its clock must be enabled in the upcoming cycle, or whether the hardware will be idle and its clock can be disabled. One advantage is that software interface circuitry 390 controls the clocks so as to ensure that the appropriate logic resources in a clock domain are enabled when software writes to those logic resources. Software interface circuit 390 is much faster in determining whether one of the clock domains can be disabled in an upcoming cycle because there is no software in the loop. In general, software interface circuitry 390 ensures that the hardware is in the state that the software expects. Yet another advantage is that software interface circuitry 390 prevents contention and other issues from occurring when clock on/off state transitions are made.

Software interface circuitry 390 advantageously acts as a single coordination point that synchronizes software and hardware with respect to enabling or disabling the clocks of the clock domains in scan output module 224. In a software-controlled clock management scheme, different software threads may conflict in their respective determinations to enable or disable individual hardware portions, and typically there is no single hardware state management mechanism in the software. Software interface circuitry 390 advantageously imposes order on the software. Moreover, software interface circuitry 390 can disable clocks more frequently than a software-controlled clock-disable mechanism.
In order to prevent errors or events such as race conditions, in a software-controlled clock-disable mechanism, if there is any doubt as to whether software can disable a portion of the logic, the software keeps the logic enabled to prevent contention. In contrast, when clock control is performed in hardware by the software interface circuit 390, it can be determined more accurately whether a portion of the logic in the scan output module 224 can be safely disabled. This provides the benefit that the total amount of time for which a logic portion can be disabled is increased compared to a software-controlled solution.

Although described with respect to a graphics engine, the systems and methods are generally applicable to almost any circuit that is divided into clock domains, in which a clock generator is configured to selectively supply one or more clocks to a first clock domain and a second clock domain, and a controller determines whether one or more of the clock domains will be utilized in an upcoming cycle and selectively enables and/or disables the one or more clock domains depending on whether they will be utilized during the upcoming cycle.
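The generalized scheme in the preceding paragraph, in which a clock generator gates clocks to the domains that a controller marks as utilized in the upcoming cycle, can be sketched as follows (an illustrative model with invented names, not an RTL description):

```python
# Generic sketch of the generalized scheme: a clock generator feeds
# several domains through gates, and a controller opens only the gates
# for domains that will be utilized in the upcoming cycle.

class GatedClockGenerator:
    def __init__(self, domains):
        self.gates = {d: False for d in domains}   # gate open = clock supplied

    def program(self, utilized_next_cycle):
        """Controller decision for the upcoming cycle."""
        for d in self.gates:
            self.gates[d] = d in utilized_next_cycle

    def active_domains(self):
        return sorted(d for d, on in self.gates.items() if on)

gen = GatedClockGenerator(["domain_1", "domain_2", "domain_3"])
gen.program({"domain_2"})
assert gen.active_domains() == ["domain_2"]
```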
Aspects of the disclosure provide an integrated circuit that includes a plurality of input/output (IO) circuits, an instruction receiving circuit and control circuits. The IO circuits are configured to receive a plurality of bit streams corresponding to an instruction to the integrated circuit. The instruction receiving circuit is configured to form the instruction from the plurality of bit streams. The control circuits are configured to operate according to the instruction.
WHAT IS CLAIMED IS:

1. An integrated circuit, comprising: a plurality of input/output (IO) circuits configured to receive a plurality of bit streams corresponding to an instruction to the integrated circuit; an instruction receiving circuit configured to form the instruction from the plurality of bit streams; and control circuits configured to operate according to the instruction.

2. The integrated circuit of claim 1, wherein: a memory array is configured to store data at memory addresses; the plurality of IO circuits are configured to receive bit streams corresponding to an address in the memory array; and the control circuits are configured to read/write data at the address according to the instruction.

3. The integrated circuit of claim 2, wherein the control circuits are configured according to the instruction that indicates a first number of IO circuits for receiving the address, and a second number of IO circuits for data input/output.

4. The integrated circuit of claim 1, wherein: a register is configured to store a first value indicative of a first configuration in which the IO circuits receive, in parallel, the bit streams corresponding to the instruction; and the instruction receiving circuit is configured, according to the first value in the register, to form the instruction from the bit streams received in parallel.

5. The integrated circuit of claim 4, wherein: the register is configured to change from the first value to a second value in response to the instruction; and the instruction receiving circuit is configured according to the second value indicative of a second configuration to use a different number of IO circuits for receiving a next instruction.

6.
The integrated circuit of claim 4, wherein: the register is initialized to a second value indicative of a second configuration in which instructions are received by a specific IO circuit in a single bit stream; and the instruction receiving circuit is configured to form the instructions from a bit stream received by the specific IO circuit.

7. The integrated circuit of claim 6, wherein: the register is configured to change from the second value to the first value in response to a specific instruction received by the specific IO circuit; and the instruction receiving circuit is configured to form subsequent instructions from the bit streams received by the plurality of IO circuits.

8. A method, comprising: receiving, by a plurality of input/output (IO) circuits of an integrated circuit, a plurality of bit streams corresponding to an instruction to the integrated circuit; forming, by an instruction receiving circuit, the instruction from the plurality of bit streams; and controlling control circuits in the integrated circuit to operate according to the instruction.

9. The method of claim 8, further comprising: receiving two or more bit streams corresponding to an address in a memory array of the integrated circuit; and reading/writing data at the address in the memory array according to the instruction.

10. The method of claim 9, further comprising configuring the control circuits according to the instruction that indicates a first number of IO circuits for receiving the address, and a second number of IO circuits for data input/output.

11. The method of claim 8, further comprising: storing, in a register, a first value indicative of a first configuration in which the instruction is received as the bit streams in parallel.

12.
The method of claim 11, further comprising: changing, in the register, from the first value to a second value in response to the instruction; and configuring the instruction receiving circuit according to the second value to use a different number of IO circuits for receiving a next instruction.

13. The method of claim 11, further comprising: initializing the register with a second value indicative of a second configuration in which instructions are received by a specific IO circuit; and forming the instructions from a bit stream received by the specific IO circuit.

14. The method of claim 13, further comprising: updating the register with the first value in response to a specific instruction received by the specific IO circuit; and configuring the instruction receiving circuit to form subsequent instructions from the bit streams received by the plurality of IO circuits.

15. An integrated circuit, comprising: a control circuit configured to generate a plurality of instruction bit streams corresponding to an instruction to another integrated circuit; and a plurality of input/output (IO) circuits configured to output the plurality of instruction bit streams in order to send the instruction to the other integrated circuit.

16. The integrated circuit of claim 15, wherein: the control circuit is configured to generate a plurality of address bit streams corresponding to an address for a storage place in a memory array of the other integrated circuit; and the plurality of IO circuits are configured to output the plurality of address bit streams in order to access the storage place in the memory array.

17.
The integrated circuit of claim 15, wherein: the control circuit is configured to generate a single instruction bit stream corresponding to a specific instruction after the other integrated circuit is initialized; and an IO circuit is configured to output the single instruction bit stream in order to send the specific instruction to the other integrated circuit, in order to configure the other integrated circuit to receive bit streams corresponding to subsequent instructions from the plurality of IO circuits.

18. The integrated circuit of claim 17, wherein: the control circuit is configured to generate a plurality of instruction bit streams corresponding to a subsequent instruction; and the plurality of IO circuits are configured to output the plurality of instruction bit streams in order to send the subsequent instruction to the other integrated circuit.
METHOD AND APPARATUS FOR LATENCY REDUCTION

INCORPORATION BY REFERENCE

[0001] This present disclosure claims the benefit of U.S. Provisional Application No. 61/763,750, "QSPI QUAD INSTRUCTION MODE" filed on February 12, 2013, which is incorporated herein by reference in its entirety.

BACKGROUND

[0002] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

[0003] Generally, a serial peripheral interface (SPI) bus is used for inter-chip communications. In an example, two integrated circuit (IC) chips are configured according to SPI bus technology and are connected by bus wires. One of the IC chips is configured in a master mode, and the other is configured in a slave mode. The master IC chip provides control signals, such as a clock signal, a select signal, and the like, to control the communication between the two IC chips.

SUMMARY

[0004] Aspects of the disclosure provide an integrated circuit that includes a plurality of input/output (IO) circuits, an instruction receiving circuit and control circuits. The IO circuits are configured to receive a plurality of bit streams corresponding to an instruction to the integrated circuit. The instruction receiving circuit is configured to form the instruction from the plurality of bit streams. The control circuits are configured to operate according to the instruction.

[0005] According to an aspect of the disclosure, the integrated circuit includes a memory array configured to store data at memory addresses.
The plurality of IO circuits are configured to receive bit streams corresponding to an address in the memory array, and the control circuits are configured to read/write data at the address according to the instruction. In an example, the control circuits are configured according to the instruction that indicates a first number of IO circuits for receiving the address, and a second number of IO circuits for data input/output.

[0006] In an embodiment, the integrated circuit includes a register configured to store a first value indicative of a first configuration in which the IO circuits receive, in parallel, the bit streams corresponding to the instruction. The instruction receiving circuit is configured according to the first value in the register to form the instruction from the bit streams received in parallel. In an example, the register is configured to change from the first value to a second value in response to the instruction. The second value is indicative of a second configuration that uses a different number of IO circuits for receiving a next instruction, and the instruction receiving circuit is configured according to the second value. In another example, the register is initialized to a value indicative of an initial configuration in which instructions are received by a specific IO circuit in a single bit stream, and the instruction receiving circuit is configured to form the instructions from a bit stream received by the specific IO circuit.

[0007] Aspects of the disclosure provide a method.
The method includes receiving, by a plurality of input/output (IO) circuits of an integrated circuit, a plurality of bit streams corresponding to an instruction to the integrated circuit, forming, by an instruction receiving circuit, the instruction from the plurality of bit streams, and controlling control circuits in the integrated circuit to operate according to the instruction.

[0008] Aspects of the disclosure provide another integrated circuit. The integrated circuit includes a control circuit configured to generate a plurality of instruction bit streams corresponding to an instruction to another integrated circuit and a plurality of input/output (IO) circuits configured to output the plurality of instruction bit streams in order to send the instruction to the other integrated circuit.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] Various embodiments of this disclosure that are proposed as examples will be described in detail with reference to the following figures, wherein like numerals reference like elements, and wherein:

[0010] Fig. 1 shows a block diagram of a communication system 100 according to an embodiment of the disclosure;

[0011] Fig. 2 shows a flow chart outlining a process example 200 according to an embodiment of the disclosure; and

[0012] Figs. 3 and 4 show plots of waveforms for comparison according to an embodiment of the disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

[0013] Fig. 1 shows a block diagram of a communication system 100 according to an embodiment of the disclosure. The communication system 100 includes a first circuit 110 and a second circuit 150 coupled together using a modified serial peripheral interface (SPI) bus. The modified SPI bus is programmable and can be programmed to use a single wire or multiple wires for instruction transmission.
When multiple wires are used for instruction transmission, the communication system 100 has a reduced latency; and when a single wire is used for instruction transmission, the communication system 100 is backwards compatible with other SPI bus technology.

[0014] The first circuit 110 and the second circuit 150 can be any suitable circuits that use the modified SPI bus technology for inter-circuit communication. In an embodiment, the communication system 100 is a chip communication system in which the first circuit 110 is a first integrated circuit (IC) chip and the second circuit 150 is a second IC chip. The first IC chip 110 includes a first interface 117 implemented using the modified SPI bus technology and the second IC chip 150 includes a second interface 157 implemented using the modified SPI bus technology. In an example, the two IC chips are assembled on a printed circuit board (PCB) and corresponding input/output (IO) pins of the two IC chips are coupled together by suitable conductive medium on the PCB, such as printed copper wires, vias, jumpers and the like.

[0015] Specifically, in an embodiment, the first interface 117 includes a plurality of input/output (IO) circuits 111-116, and a control circuit 120 coupled together as shown in Fig. 1. The IO circuits 111-116 are respectively configured for different signal input/output.
For example, the IO circuit 111 is configured to input/output a chip select (CS) signal, the IO circuit 112 is configured to input/output a clock (CLK) signal, the IO circuit 113 is configured to input/output a first information signal (IO-0), the IO circuit 114 is configured to input/output a second information signal (IO-1), the IO circuit 115 is configured to input/output a third information signal (IO-2), and the IO circuit 116 is configured to input/output a fourth information signal (IO-3).

[0016] Similarly, the second interface 157 includes a plurality of input/output (IO) circuits 151-156, and a control circuit 160 coupled together as shown in Fig. 1. The IO circuits 151-156 are respectively configured for different signal input/output. For example, the IO circuit 151 is configured to input/output a chip select (CS) signal, the IO circuit 152 is configured to input/output a clock (CLK) signal, the IO circuit 153 is configured to input/output a first information signal (IO-0), the IO circuit 154 is configured to input/output a second information signal (IO-1), the IO circuit 155 is configured to input/output a third information signal (IO-2), and the IO circuit 156 is configured to input/output a fourth information signal (IO-3).

[0017] The IO circuits 111-116 of the first circuit 110 and the corresponding IO circuits 151-156 of the second circuit 150 are suitably coupled together by printed copper wires, vias, jumpers and the like. According to an aspect of the disclosure, one of the first circuit 110 and the second circuit 150 is configured in a master mode and the other is configured in a slave mode. In the Fig. 1 example, the second circuit 150 is a memory device that includes a memory array 180 and suitable auxiliary circuits (not shown), and the first circuit 110 is a memory controller device that includes control logic (not shown) to control memory access to the second circuit 150. In the Fig.
1 example, the first circuit 110 is configured in the master mode and the second circuit 150 is configured in the slave mode. The first circuit 110 provides control signals to control the communication between the first circuit 110 and the second circuit 150. For example, the first circuit 110 provides the chip select signal to the second circuit 150 via the coupled corresponding IO circuit pair (the IO circuits 111 and 151).

[0018] In an embodiment, the first circuit 110 is coupled with the second circuit 150 and one or more other memory devices (not shown). The first circuit 110 provides respective chip select signals to the coupled memory devices, and shares resources, such as the IO circuits 112-116 for the clock and information signals, among the coupled memory devices. In an example, when the chip select signal to the second circuit 150 is logic "0", the second circuit 150 is selected, and the information signals, such as instructions, addresses, data and the like, on the shared resources are for the second circuit 150; and when the chip select signal to the second circuit 150 is logic "1", the information signals on the shared resources are not for the second circuit 150.

[0019] Further, various information signals are communicated between the first circuit 110 and the second circuit 150, for example via the IO circuits 113-116 and 153-156. In the Fig. 1 example, instructions, addresses and data are communicated between the first circuit 110 and the second circuit 150. In an example, the first circuit 110 sends a configuration instruction to the second circuit 150 to cause the second circuit 150 to be configured accordingly. In another example, the first circuit 110 sends a write instruction with an address and data to the second circuit 150 to cause the data to be written into the memory array 180 at the address.
In another example, the first circuit 110 sends a read instruction with an address to the second circuit 150 to cause the second circuit 150 to send back data stored at the address in the memory array 180.[0020] The information signals can be communicated in various formats, such as a single bit stream on a single IO circuit pair, multiple bit streams on multiple IO circuit pairs, and the like. According to an aspect of the disclosure, the first interface 117 and the second interface 157 are respectively configured to enable such various formats.[0021] Specifically, the control circuit 120 is configured to convert signal formats between internal circuits (not shown) of the first circuit 110 and input/output circuits, such as the IO circuits 113-116. In the Fig. 1 example, the control circuit 120 includes an instruction transmission circuit 130 configured to convert an internal format of an instruction to the second circuit 150 to a format that is receivable by the second IC chip 150. In an example, an instruction includes eight bits, and the internal circuits of the first IC chip 110 generate an instruction in the format of 8 parallel bits. When the first interface 117 is configured to transmit an instruction in the format of a single bit stream of 8 bits, the instruction transmission circuit 130 is configured to convert the format of the instruction from the 8 parallel bits to a single bit stream. When the first interface 117 is configured to transmit an instruction in the format of multiple bit streams, such as duo (two) bit streams, quad (four) bit streams, and the like, the instruction transmission circuit 130 is configured to convert the format of the instruction from the 8 parallel bits to multiple bit streams, such as duo bit streams, quad bit streams, and the like. 
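The format conversions performed by the instruction transmission circuit 130, and their inverse in the instruction receiving circuit 170 of the second circuit 150, can be sketched as follows. The MSB-first ordering and the lane-to-bit assignment are assumptions made for illustration; the source does not specify the exact bit mapping:

```python
# Sketch of the 8-parallel-bit <-> 1/2/4 bit-stream conversion.
# Bit ordering (MSB first) and lane assignment are assumptions.

def to_bit_streams(instruction, lanes):
    """Split an 8-bit instruction into `lanes` bit streams (MSB first)."""
    bits = [(instruction >> i) & 1 for i in range(7, -1, -1)]
    cycles = len(bits) // lanes
    # Lane k carries bit k of each group of `lanes` bits per clock cycle.
    return [[bits[c * lanes + k] for c in range(cycles)] for k in range(lanes)]

def from_bit_streams(streams):
    """Reassemble the 8 parallel bits from 1, 2, or 4 bit streams."""
    lanes, cycles = len(streams), len(streams[0])
    value = 0
    for c in range(cycles):
        for k in range(lanes):
            value = (value << 1) | streams[k][c]
    return value

# Single-bit mode: 8 clock cycles on IO-0.
assert to_bit_streams(0x6B, 1) == [[0, 1, 1, 0, 1, 0, 1, 1]]
# Quad-bit mode: the same instruction takes only 2 clock cycles.
assert all(len(s) == 2 for s in to_bit_streams(0x6B, 4))
# Round trip in every mode.
for lanes in (1, 2, 4):
    assert from_bit_streams(to_bit_streams(0x6B, lanes)) == 0x6B
```

As the assertions suggest, quad bit streams move an 8-bit instruction in 2 clock cycles instead of 8, which is the latency reduction the modified SPI bus targets.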
It is noted that the first interface 117 also includes other suitable circuits (not shown) configured to convert address format and data format, for example.

[0022] The control circuit 160 is configured to convert signal formats between internal circuits of the second IC chip 150 and the IO circuits 151-156. In the Fig. 1 example, the control circuit 160 includes an instruction receiving circuit 170 configured to convert a received format of an instruction to an internal format used by the internal circuits of the second IC chip 150. In an example, the internal circuits of the second IC chip 150 are configured to decode an instruction in the format of 8 parallel bits. When the IO circuits receive an instruction in the format of a single bit stream of 8 bits, the instruction receiving circuit 170 is configured to convert the format of the instruction from a single bit stream to 8 parallel bits; and when the IO circuits receive an instruction in the format of multiple bit streams, such as duo (two) bit streams, quad (four) bit streams, and the like, the instruction receiving circuit 170 is configured to convert the format of the instruction from multiple bit streams to 8 parallel bits.

[0023] According to an aspect of the disclosure, the instruction receiving circuit 170 has multiple modes, such as a single-bit instruction mode, a quad-bit instruction mode, and the like. For example, when the instruction receiving circuit 170 is in the single-bit instruction mode, the instruction receiving circuit 170 is configured to convert a single bit stream of 8 bits received by an IO circuit, such as the IO circuit 153, to the format of 8 parallel bits. When the instruction receiving circuit 170 is in the quad-bit instruction mode, the instruction receiving circuit 170 is configured to convert an instruction received by the IO circuits 153-156 in the format of four bit streams to the format of 8 parallel bits.

[0024] In the Fig.
1 example, the second interface 157 includes a status register 165 configured to store a value corresponding to a mode for the instruction receiving circuit 170, and the instruction receiving circuit 170 is configured according to the value stored in the status register 165. For example, when the status register 165 has a value of 0, the instruction receiving circuit 170 is configured in the single-bit instruction mode; and when the status register 165 has a value of 1, the instruction receiving circuit 170 is configured in the quad-bit instruction mode. It is noted that the status register 165 can be configured to store other suitable values corresponding to other suitable modes for the instruction receiving circuit 170.

[0025] According to an aspect of the disclosure, the second circuit 150 (e.g., the second IC chip) can be suitably programmed to operate with a memory controller that supports multiple bit streams for instructions, such as the first circuit 110 (e.g., the first IC chip), or with another memory controller that does not support multiple bit streams for instructions and only uses a single bit stream for instructions.

[0026] During operation, in an embodiment, when the second circuit 150 is powered up or is reset, the status register 165 is initialized to store a value corresponding to the single-bit instruction mode, and thus the instruction receiving circuit 170 enters the single-bit instruction mode. In an example, when the second circuit 150 is coupled with a memory controller that does not support multiple bit streams for instructions, the second circuit 150 is able to operate with the memory controller using single bit streams for instructions.

[0027] In the Fig. 1 example, after a reset of the second circuit 150, the first circuit 110 sends a configuration instruction to the second circuit 150 using a single bit stream.
The second circuit 150 is able to receive the configuration instruction, decode the configuration instruction, and be configured according to the configuration instruction. In an example, a specific configuration instruction causes the status register 165 to change to another value that corresponds to a multiple-bit instruction mode, thus the instruction receiving circuit 170 enters the multiple-bit instruction mode. Then, the first circuit 110 sends subsequent instructions to the second circuit 150 using multiple bit streams.[0028] Further, in an example, when the instruction receiving circuit 170 is in the multiple-bit instruction mode, and the first circuit 110 decides to switch to using single bit stream for instructions, the first circuit 110 can send a specific configuration instruction using multiple instruction bit streams to the second circuit 150. The specific configuration instruction causes the status register 165 to change to the value that corresponds to the single-bit instruction mode, thus the instruction receiving circuit 170 enters the single bit instruction mode. Thus, the first circuit 110 can send a subsequent instruction to the second circuit 150 using a single instruction bit stream.[0029] Fig. 2 shows a flow chart outlining a process example 200 for inter-chip communication. In an example, the process 200 is executed in the communication system 100.[0030] At S210, a circuit enters a single-bit instruction mode by default in response to a reset. In the Fig. 1 example, in response to a reset or a power up of the second circuit 150, the status register 165 is initialized to store the value corresponding to the single-bit instruction mode. Thus, the instruction receiving circuit 170 enters the single-bit instruction mode, and the second circuit 150 is able to receive an instruction as a single-bit instruction stream, and operate according to the instruction.[0031] At S220, the circuit receives a specific instruction via a single bit stream. 
The specific instruction instructs the circuit to convert to the quad-bit instruction mode. In the Fig. 1 example, at a time when the first circuit 110 decides to use quad bit streams to send instructions, the first circuit 110 sends a specific instruction to the second circuit 150 via a single bit stream. In an example, the specific instruction instructs the second circuit 150 to change to the quad-bit instruction mode. The second circuit 150 receives the specific instruction. [0032] At S230, the circuit sets registers according to the specific instruction to prepare for receiving instructions via quad-bit instruction streams. In the Fig. 1 example, the specific instruction causes the status register 165 to change to the value corresponding to the quad-bit instruction mode. Thus, the instruction receiving circuit 170 enters the quad-bit instruction mode.[0033] At S240, the circuit is able to receive instructions via quad bit streams and operate according to the instructions. In the Fig. 1 example, when the instruction receiving circuit 170 enters the quad-bit instruction mode, the second circuit 150 is able to receive memory read/write instructions via quad bit streams, and operate according to the instructions. In an example, when an instruction is indicative of a memory write access using duo bit streams for address and quad bit streams for data, the second circuit 150 is configured to receive the address in duo bit streams and the data in quad bit streams, and write the data into the address of the memory array 180. In another example, when an instruction is indicative of a read access using quad bit streams for address and duo bit streams for data, the second circuit 150 is configured to receive the address in quad bit streams, read data from the address in the memory array 180, and send the data to the first circuit 110 in duo bit streams.[0034] At S250, the circuit receives a specific instruction via quad bit streams.
The specific instruction instructs the circuit to convert to the single-bit instruction mode. In the Fig. 1 example, at a time when the first circuit 110 decides to switch from using quad-bit instruction streams to using a single bit instruction stream for subsequent instructions, the first circuit 110 first sends a specific instruction to the second circuit 150 via the quad bit streams. The specific instruction instructs the second circuit 150 to change to the single-bit instruction mode. The second circuit 150 receives the specific instruction.[0035] At S260, the circuit sets registers according to the specific instruction to prepare for receiving an instruction via a single-bit instruction stream. In the Fig. 1 example, the specific instruction causes the status register 165 to change to the value corresponding to the single-bit instruction mode. Thus, the instruction receiving circuit 170 enters the single-bit instruction mode.[0036] At S270, the circuit is able to receive instructions via single bit streams and operate according to the instructions. In the Fig. 1 example, when the instruction receiving circuit 170 enters the single-bit instruction mode, the second circuit 150 is able to receive a memory read/write instruction via a single bit stream, and operate according to the instruction. In an example, when an instruction is indicative of a memory write access using duo bit streams for address and quad bit streams for data, the second circuit 150 is configured to receive the address in duo bit streams and the data in quad bit streams, and write the data into the address of the memory array 180. In another example, when an instruction is indicative of a read access using quad bit streams for address and duo bit streams for data, the second circuit 150 is configured to receive the address in quad bit streams, read data from the address in the memory array 180, and send the data to the first circuit 110 in duo bit streams.
Then, the process proceeds to S299 and terminates.[0037] It is noted that, in an example, at S270, when the second circuit 150 receives a specific instruction that instructs the second circuit 150 to convert to the quad-bit instruction mode, the process returns to S230. It is also noted that the single-bit instruction mode and the quad-bit instruction mode are used as examples, and the process 200 can be modified to use other suitable instruction transmission and receiving modes.[0038] Fig. 3 shows a plot 300 of waveforms for the communication system 100 according to an embodiment of the disclosure. The plot 300 includes a first waveform 310 for the chip select signal, a second waveform 320 for the clock signal, a third waveform 330 for the first information signal, a fourth waveform 340 for the second information signal, a fifth waveform 350 for the third information signal, and a sixth waveform 360 for the fourth information signal. The information signals can include instruction, address, mode, and data information.[0039] In the Fig. 3 example, a read instruction is sent from the first circuit 110 to the second circuit 150 using quad-bit streams in parallel to read data from an address in the memory array 180. The read instruction includes 8 bits, and can be sent by the IO circuits 113-116 and received by the IO circuits 153-156, in the format of four parallel bit streams using two clock cycles.[0040] Fig. 4 shows a plot 400 of waveforms for the communication system 100 according to an embodiment of the disclosure. Similarly, the plot 400 includes a first waveform 410 for the chip select signal, a second waveform 420 for the clock signal, a third waveform 430 for the first information signal, a fourth waveform 440 for the second information signal, a fifth waveform 450 for the third information signal, and a sixth waveform 460 for the fourth information signal.[0041] In the Fig.
4 example, the read instruction is sent by the IO circuit 113 and received by the IO circuit 153 in the format of a single-bit stream to read data from an address in the memory array 180. The 8 bits of the instruction are sent using eight clock cycles. Thus, by using the quad-bit instruction streams in the Fig. 3 example, it takes less time for the first circuit 110 to receive the data read back from the second circuit 150 compared to using the single-bit instruction stream in the Fig. 4 example.[0042] While aspects of the present disclosure have been described in conjunction with the specific embodiments thereof that are proposed as examples, alternatives, modifications, and variations to the examples may be made. Accordingly, embodiments as set forth herein are intended to be illustrative and not limiting. There are changes that may be made without departing from the scope of the claims set forth below.
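The throughput difference between the Fig. 3 and Fig. 4 examples follows from simple cycle counting: with one bit per lane per clock cycle, an 8-bit instruction occupies two cycles on four lanes but eight cycles on one lane. A minimal sketch of that arithmetic (the function name is ours, not the patent's):

```python
def instruction_cycles(num_bits, lanes):
    # One bit per lane per clock cycle, so an instruction of num_bits
    # bits needs ceil(num_bits / lanes) cycles on the given lanes.
    return -(-num_bits // lanes)

assert instruction_cycles(8, 4) == 2  # quad-bit streams (Fig. 3): two cycles
assert instruction_cycles(8, 1) == 8  # single-bit stream (Fig. 4): eight cycles
```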
Using multiple overlays with a data processing array includes loading an application in a data processing array. The data processing array includes a plurality of compute tiles each having a processor. The application specifies kernels executable by the processors and implements stream channels that convey data to the plurality of compute tiles. During runtime of the application, a plurality of overlays are sequentially implemented in the data processing array. Each overlay implements a different mode of data movement in the data processing array via the stream channels. For each overlay implemented, a workload is performed by moving data to the plurality of compute tiles based on the respective mode of data movement.
CLAIMS

What is claimed is:

1. A method, comprising: loading an application in a data processing array; wherein the data processing array includes a plurality of compute tiles each having a processor; wherein the application specifies kernels executable by the processors and implements stream channels that convey data to the plurality of compute tiles; during runtime of the application, sequentially implementing a plurality of overlays in the data processing array, wherein each overlay implements a different mode of data movement in the data processing array via the stream channels; and for each overlay implemented, performing a workload by moving data to the plurality of compute tiles based on the respective mode of data movement.

2. The method of claim 1, wherein the plurality of overlays are implemented in the data processing array for the application without loading a different application into the data processing array that loads different kernels into the compute tiles or modifies the stream channels.

3. The method of claim 1, wherein the data processing array is subdivided into a plurality of partitions each including a subset of the plurality of compute tiles, wherein each partition is adapted to concurrently implement a different application and sequentially implement a plurality of different overlays specific to the application executed by the partition.

4. The method of claim 1, wherein sequentially implementing a plurality of overlays comprises: configuring the data processing array with a first overlay of the plurality of overlays to perform a first workload including a first matrix multiply operation; and configuring the data processing array with a second overlay of the plurality of overlays to perform a second workload including a second matrix multiply operation; wherein the first matrix multiply operation and the second matrix multiply operation are of different dimensions.

5.
The method of claim 1, wherein the application implements a neural-network and each layer of the neural-network is mapped to one of the plurality of overlays, and wherein different ones of the plurality of overlays are loaded over time to implement respective layers of the neural-network.

6. The method of claim 1, wherein each overlay specifies a different mapping of buffers to stream channels.

7. The method of claim 1, wherein the mode of data movement of each overlay is characterized by a number of feature maps and a number of weights conveyed over the stream channels.

8. The method of claim 1, wherein sequentially implementing a plurality of overlays comprises: for each overlay, programming a plurality of direct memory access circuits with a different mapping of buffers to the stream channels.

9. The method of claim 1, further comprising: for a selected overlay of the plurality of overlays, providing a runtime parameter to a selected compute tile of the plurality of compute tiles, wherein the runtime parameter configures an operational parameter of a kernel executed by the selected compute tile.

10. The method of claim 9, wherein the selected overlay corresponds to a particular layer of the application, and wherein the runtime parameter specifies at least one dimension of the particular layer implemented by the selected overlay.

11. The method of claim 9, wherein the runtime parameter selectively enables a function of the kernel executed by the selected compute tile.

12. The method of claim 1, further comprising: for a selected overlay of the plurality of overlays, providing a runtime parameter to a selected compute tile of the plurality of compute tiles, wherein the runtime parameter selects a kernel from a plurality of kernels of the selected compute tile for execution.

13.
A system, comprising: a data processing array disposed in an integrated circuit, wherein the data processing array includes a plurality of compute tiles each having a processor; and wherein the data processing array is configured to implement an application, wherein the application specifies kernels executable by the processors and stream channels that convey data to the plurality of compute tiles; and wherein, during runtime of the application, the data processing array is adapted to implement a plurality of different overlays, wherein each overlay implements a different mode of data movement in the data processing array via the stream channels to perform a workload.

14. The system of claim 13, wherein the application implements a neural-network and each layer of the neural-network is mapped to one of the plurality of overlays, and wherein different ones of the plurality of overlays are loaded over time to implement respective layers of the neural-network.

15. The system of claim 13, wherein: a first overlay of the plurality of overlays configures the data processing array to perform a first workload including a first matrix multiply operation; and a second overlay of the plurality of overlays configures the data processing array to perform a second workload including a second matrix multiply operation; wherein the first matrix multiply operation and the second matrix multiply operation are of different dimensions.
MULTIPLE OVERLAYS FOR USE WITH A DATA PROCESSING ARRAY

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 63/235,319 filed on August 20, 2021 and of U.S. Provisional Patent Application No. 63/235,532 filed on August 20, 2021, both of which are incorporated by reference herein in their entirety.

TECHNICAL FIELD

[0002] This disclosure relates to integrated circuits (ICs) and, more particularly, to using multiple overlays with a data processing array implemented within an IC. This disclosure also relates to controlling operation of a data processing array using one or more array controllers.

BACKGROUND

[0003] Integrated circuits (ICs) have evolved over time to provide increasingly sophisticated computing architectures. While some ICs utilize computing architectures that include a single processor, others include multiple processors. Still, other ICs include multiple processors arranged in an array. Such ICs are capable of providing significant computational power and a high degree of parallelism that extends well beyond the capabilities of single-processor architectures and even multi-core processor architectures.

SUMMARY

[0004] In one or more example implementations, a method includes loading an application in a data processing array. The data processing array includes a plurality of compute tiles each having a processor. The application specifies kernels executable by the processors and implements stream channels that convey data to the plurality of compute tiles. The method includes, during runtime of the application, sequentially implementing a plurality of overlays in the data processing array. Each overlay implements a different mode of data movement in the data processing array via the stream channels. The method includes, for each overlay implemented, performing a workload by moving data to the plurality of compute tiles based on the respective mode of data movement.
[0005] In one aspect, the plurality of overlays are implemented in the data processing array for the application without loading a different application into the data processing array that loads different kernels into the compute tiles or modifies the stream channels.[0006] In another aspect, the data processing array is subdivided into a plurality of partitions each including a subset of the plurality of compute tiles. Each partition is adapted to concurrently implement a different application and sequentially implement a plurality of different overlays specific to the application executed by the partition.[0007] In another aspect, sequentially implementing a plurality of overlays includes configuring the data processing array with a first overlay of the plurality of overlays to perform a first workload including a first matrix multiply operation and configuring the data processing array with a second overlay of the plurality of overlays to perform a second workload including a second matrix multiply operation. The first matrix multiply operation and the second matrix multiply operation are of different dimensions.[0008] In another aspect, the application implements a neural-network. Each layer of the neural-network is mapped to one of the plurality of overlays. Different ones of the plurality of overlays are loaded over time to implement respective layers of the neural-network.[0009] In another aspect, each overlay specifies a different mapping of buffers to stream channels.[0010] In another aspect, the mode of data movement of each overlay is characterized by (e.g., specifies) a number of feature maps and a number of weights conveyed over the stream channels.
For example, the overlay specifies particular weights and feature maps to be transmitted over particular ones of the stream channels.[0011] In another aspect, sequentially implementing a plurality of overlays includes, for each overlay, programming a plurality of direct memory access circuits with a different mapping of buffers to the stream channels.[0012] In another aspect, the method includes, for a selected overlay of the plurality of overlays, providing a runtime parameter to a selected compute tile of the plurality of compute tiles. The runtime parameter configures an operational parameter of a kernel executed by the selected compute tile. [0013] In another aspect, the selected overlay corresponds to a particular layer of the application. The runtime parameter specifies at least one dimension of the particular layer implemented by the selected overlay.[0014] In another aspect, the runtime parameter selectively enables a function of the kernel executed by the selected compute tile.[0015] In another aspect, the method includes, for a selected overlay of the plurality of overlays, providing a runtime parameter to a selected compute tile of the plurality of compute tiles. The runtime parameter selects a kernel from a plurality of kernels of the selected compute tile for execution.[0016] In one or more example implementations, a system includes a data processing array disposed in an integrated circuit. The data processing array includes a plurality of compute tiles each having a processor. The data processing array is configured to implement an application. The application specifies kernels executable by the processors and stream channels that convey data to the plurality of compute tiles. During runtime of the application, the data processing array is adapted to implement a plurality of different overlays. 
Each overlay implements a different mode of data movement in the data processing array via the stream channels to perform a workload.[0017] In one aspect, the application implements a neural-network and each layer of the neural-network is mapped to one of the plurality of overlays. Different ones of the plurality of overlays are loaded over time to implement respective layers of the neural-network.[0018] In another aspect, each overlay specifies a different mapping of buffers to stream channels.[0019] In another aspect, the mode of each overlay is characterized by a number of feature maps and a number of weights conveyed over the stream channels.[0020] In another aspect, for a selected overlay of the plurality of overlays, a runtime parameter provided to a selected compute tile of the plurality of compute tiles configures an operational parameter of a kernel executed by the selected compute tile.[0021] In another aspect, the selected overlay corresponds to a particular layer of the application. The runtime parameter specifies one or more dimensions of the particular layer implemented by the selected overlay. [0022] In another aspect, the selected overlay corresponds to a particular layer of the application. The runtime parameter selectively enables a function of the kernel executed by the selected compute tile.[0023] In another aspect, a first overlay of the plurality of overlays configures the data processing array to perform a first workload including a first matrix multiply operation. A second overlay of the plurality of overlays configures the data processing array to perform a second workload including a second matrix multiply operation. The first matrix multiply operation and the second matrix multiply operation are of different dimensions.[0024] In one or more example implementations, an integrated circuit includes a data processing array including a plurality of compute tiles each having a processor. 
The integrated circuit includes an array controller coupled to the data processing array. The array controller is adapted to configure the plurality of compute tiles of the data processing array to implement an application. The application specifies kernels executable by the processors and stream channels that convey data to the plurality of compute tiles. The array controller is configured to initiate execution of workloads by the data processing array as configured with the application.[0025] In one or more example implementations, an integrated circuit includes a data processing array. The data processing array includes a plurality of compute tiles each having a processor. The data processing array is subdivided into a first partition including a first subset of the plurality of compute tiles and a second partition including a second subset of the plurality of compute tiles. The integrated circuit includes a first array controller adapted to configure the first partition to implement a first application. The first application specifies kernels executable by the processors of the first partition and stream channels that convey data to the first subset of the plurality of compute tiles of the first partition. The integrated circuit includes a second array controller adapted to configure the second partition to implement a second application. The second application specifies kernels executable by the processors of the second partition and stream channels that convey data to the second subset of the plurality of compute tiles of the second partition. The first array controller and the second array controller are each configured to initiate execution of workloads in the respective partitions. [0026] This Summary section is provided merely to introduce certain concepts and not to identify any key or essential features of the claimed subject matter.
Other features of the inventive arrangements will be apparent from the accompanying drawings and from the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] The inventive arrangements are illustrated by way of example in the accompanying drawings. The drawings, however, should not be construed to be limiting of the inventive arrangements to only the particular implementations shown. Various aspects and advantages will become apparent upon review of the following detailed description and upon reference to the drawings.

[0028] FIG. 1 illustrates an example system including a data processing (DP) array.

[0029] FIG. 2 illustrates an example of an implementation flow for generating an application for a DP array.

[0030] FIG. 3 illustrates an example implementation of a DP array.

[0031] FIG. 4 illustrates an example implementation of a compute tile of a DP array.

[0032] FIG. 5 illustrates an example implementation of a memory tile of a DP array.

[0033] FIG. 6 illustrates an example implementation of an interface tile of a DP array.

[0034] FIG. 7 illustrates an example of cascade connectivity between compute tiles of a DP array.

[0035] FIG. 8 illustrates an example in which a compute tile is configured to operate without the use of a cascade connection to another compute tile.

[0036] FIG. 9 illustrates an example in which compute tiles are configured to operate using a cascade connection.

[0037] FIGS. 10A, 10B, and 10C illustrate certain operative features of example overlays.

[0038] FIG. 11 is a table illustrating attributes of example overlays used to configure an application for a partition of a DP array.

[0039] FIGS. 12A, 12B, and 12C illustrate an example of input stream channels implemented by an application with different overlay implementations.

[0040] FIG. 13 illustrates an example of output stream channels implemented by an application.

[0041] FIG. 14 illustrates an example of a method illustrating certain operative features of the system of FIG.
1.

[0042] FIG. 15 illustrates an example in which a DP array includes multiple partitions each controlled by an array controller.

[0043] FIGS. 16A, 16B, 16C, 16D, 16E, 16F, 16G, and 16H illustrate different example architectures for an IC including a DP array and one or more array controllers.

[0044] FIG. 17 illustrates an example method of operation of an IC including a DP array and an array controller.

[0045] FIG. 18 illustrates additional operative features of an array controller.

[0046] FIG. 19 illustrates an example implementation of a data processing system for use with the inventive arrangements described herein.

DETAILED DESCRIPTION

[0047] This disclosure relates to integrated circuits (ICs) and to using multiple overlays with a data processing (DP) array implemented within an IC. This disclosure also relates to controlling operation of a DP array using one or more array controllers.[0048] A DP array includes a plurality of circuit blocks referred to as tiles. The tiles may include compute tiles and interface tiles and/or a mix of compute tiles, interface tiles, and memory tiles. The DP array is configurable to perform desired computational activities by loading configuration data, referred to as an “application,” into the DP array. Once configured with an application, the DP array is able to perform computational activities.[0049] In one aspect, the application loaded into the DP array specifies a plurality of kernels that are executable by the compute tiles. For example, the application may specify particular kernels that are to be executed by particular ones of the compute tiles, e.g., a mapping of kernels to compute tiles. The application may also specify configuration data that implements a plurality of stream channels that communicatively link the tiles of the DP array.[0050] Having implemented an application in the DP array, different overlays may be implemented in the DP array to execute the application.
Each overlay that is implemented specifies a mode of data movement within the DP array. That is, each overlay specifies a mode of data movement among tiles of the DP array. For example, each overlay specifies the particular data items that are to be provided to the respective compute tiles via the stream channels implemented by the application. The data items may include feature maps and/or weights.[0051] In one aspect, the application is a multi-layered application. Different layers of the application may be implemented by loading a different overlay in the DP array. For each overlay implemented in the DP array, one or more runtime parameters may be provided to the tiles of the DP array to further adapt the overlay to the particular layer of the application implemented by the overlay. The DP array, as configured with the application, an overlay, and one or more runtime parameters, is capable of performing a workload for a layer of the application. In general, the term “workload” refers to performing the operations necessary to process the input data for a particular layer of a multi-layered application.[0052] Unlike static or fixed circuit architectures, the configurability of the DP array allows the DP array to adapt to different workloads (e.g., layers) over time. The DP array is adapted to the different layers without having to reconfigure the DP array by loading a different application therein. For purposes of illustration, consider an example where the DP array is used to perform one or more matrix multiply operations. Matrix multiply operations are utilized in many different computational contexts including, but not limited to, machine learning, image processing, computer vision, virtual and/or extended reality, and genetic analysis. In the case of machine learning, for example, different layers of a neural network may perform different matrix multiply operations where the matrices operated on in the different layers have differing dimensions. 
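The per-layer adaptation described above can be illustrated with a toy model. Everything in the following sketch (the class, its method names, a plain-Python matrix multiply standing in for the loaded kernels) is a hypothetical illustration of the application/overlay split, not the actual DP array interface: the application is loaded once, while overlays and runtime parameters carrying the layer's matrix dimensions are swapped per layer.

```python
# Toy model of the application/overlay split; all names are illustrative
# assumptions, and a real DP array is configured with hardware
# configuration data, not Python objects.

def matmul(a, b):
    # Plain-Python matrix multiply standing in for the loaded kernels.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

class DPArrayModel:
    def load_application(self, kernel):
        # Loading an application fixes the kernels and stream channels;
        # done once, and relatively costly.
        self.kernel = kernel

    def load_overlay(self, params):
        # Loading an overlay plus runtime parameters is cheap and done per
        # layer; here the parameters carry the layer's matrix dimensions.
        self.params = params

    def run_workload(self, a, b):
        m, k, n = self.params["M"], self.params["K"], self.params["N"]
        assert len(a) == m and len(a[0]) == k and len(b) == k and len(b[0]) == n
        return self.kernel(a, b)

array = DPArrayModel()
array.load_application(matmul)  # one application for all layers

def ones(r, c):
    return [[1] * c for _ in range(r)]

# Two layers with differently dimensioned matrix multiplies: only the
# overlay/runtime parameters change between them, not the application.
for m, k, n in [(4, 8, 2), (16, 4, 4)]:
    array.load_overlay({"M": m, "K": k, "N": n})
    out = array.run_workload(ones(m, k), ones(k, n))
    assert len(out) == m and len(out[0]) == n and out[0][0] == k
```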
When using a fixed or static circuit architecture to implement these different layers, the circuit architecture may perform the matrix multiply operations of certain layers efficiently, but perform the matrix multiply operations of other layers, which have different dimensions, less efficiently. The same holds true for other types of workloads that do not involve matrix multiply operations.[0053] In accordance with the inventive arrangements described within this disclosure, a DP array may be adapted over time to perform a variety of different workloads efficiently. The DP array may be configured to execute a particular application. Different overlays may be loaded over time to implement different layers of the application at runtime. Each overlay may implement a particular mode of data movement in the DP array that is suited to implementing the particular layer of the application to which the overlay is mapped. Different runtime parameters for the overlays may be loaded as well, where the runtime parameters may be specific to each layer of the application.[0054] Consider the prior matrix multiply example. The DP array may be loaded with an application that includes kernels adapted to perform matrix multiply operations. The application further specifies the stream channels implemented in the DP array. Different overlays and runtime parameters may be loaded into the DP array over time to adapt the DP array, as configured with the application, to efficiently perform different matrix multiply operations (e.g., differently dimensioned matrix multiplies) corresponding to different layers of the application. Certain operative features of each overlay and the kernels being executed by the compute tiles may be changed on a per-layer basis through the loading of the runtime parameters. In one aspect, the runtime parameters may specify the particular dimensions of the layer being implemented by a given overlay.[0055] Loading an application may require a non-trivial number of clock cycles.
By comparison, loading an overlay and the corresponding runtime parameters to implement a particular layer of the application consumes significantly less time (e.g., fewer clock cycles). By utilizing the application-overlay paradigm described herein, the DP array may be adapted to efficiently implement different layers of an application without having to continually reconfigure the DP array. That is, the DP array may be adapted from one layer to the next without having to load a different application for each layer, which would cause the DP array to sit idle while being continually reconfigured, thereby reducing computational efficiency and throughput.[0056] In some cases, controlling the loading of applications, overlays, and runtime parameters, and initiating workloads for the DP array requires significant computational resources. These operations may consume a significant number of clock cycles for a processor tasked with such responsibilities, leaving few clock cycles available for the processor to perform other functions or execute other applications. Accordingly, in one or more example implementations, one or more array controller(s) may be included in the same IC as the DP array to harness the significant computational power provided by the DP array. The array controller(s) may be dedicated to controlling operation of the DP array. [0057] Inclusion of the array controller(s) ensures smooth and efficient operation of the DP array. For example, since the array controller(s) are dedicated to managing the DP array and are not attempting to multitask with other non-DP array-related operations, the array controller(s) are able to keep the DP array busy to achieve higher data throughput.
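The division of labor just described, where a dedicated controller does nothing but sequence overlays and workloads, can be sketched as a simple run loop. All names here are hypothetical, and the stand-in array merely records the call sequence a real DP array would see: one costly application load, then a cheap overlay load per layer.

```python
# Hypothetical sketch of a dedicated array controller's run loop;
# names and the call protocol are illustrative assumptions only.

class ArrayController:
    def __init__(self, dp_array):
        self.dp_array = dp_array

    def run(self, application, layers):
        # Load the (expensive) application once...
        self.dp_array.load_application(application)
        results = []
        # ...then cheaply swap overlays per layer and launch each workload.
        for overlay, workload in layers:
            self.dp_array.load_overlay(overlay)
            results.append(self.dp_array.run_workload(workload))
        return results

class FakeArray:
    # Stand-in recording the call sequence a real DP array would see.
    def __init__(self):
        self.log = []

    def load_application(self, app):
        self.log.append(("app", app))

    def load_overlay(self, ov):
        self.log.append(("overlay", ov))

    def run_workload(self, wl):
        self.log.append(("work", wl))
        return wl

fake = FakeArray()
ArrayController(fake).run("matmul_app", [("ov1", "layer1"), ("ov2", "layer2")])
# The application is loaded exactly once; overlays alternate with workloads.
assert [kind for kind, _ in fake.log] == ["app", "overlay", "work",
                                          "overlay", "work"]
```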
Inclusion of the array controller(s) also relieves other processors, whether disposed in the IC or external to the IC, from performing DP array-related control operations so that such processors may perform other tasks.[0058] For IC architectures that include programmable logic, one or more of the array controllers may be implemented in programmable logic. In other examples, for IC architectures that include programmable logic, one or more array controllers may be implemented in programmable logic while one or more other array controllers may be implemented as hardwired circuit blocks. In still other examples, for IC architectures that do not include programmable logic, the array controller(s) may be implemented as hardwired circuit blocks. It should be appreciated that array controller(s) also may be implemented as hardwired circuit blocks in ICs that do include programmable logic. Further aspects of the inventive arrangements are described below with reference to the figures.[0059] FIG. 1 illustrates an example system 100. In the example, system 100 includes a DP array 102, an array controller 106, an interconnect 108, and one or more subsystems 112, 114, 118, and/or 120. DP array 102 includes an array interface 104.[0060] In one or more example implementations, system 100 is implemented as an integrated circuit (IC). System 100 may be implemented within a single IC package. In one aspect, system 100 is implemented using a single die disposed in a single IC package. In another aspect, system 100 is implemented using two or more interconnected dies disposed within a single IC package.[0061] DP array 102 is formed of a plurality of circuit blocks referred to as tiles. The tiles may include compute tiles, memory tiles, and/or interface tiles. For purposes of discussion, the term “array tiles” is used herein to refer to compute tiles or a mixture of compute tiles and memory tiles. Compute tiles and memory tiles are hardwired and are programmable. 
Array interface 104 includes a plurality of circuit blocks referred to as “interface tiles.” The interface tiles communicatively link array tiles of DP array 102 with circuits outside of DP array 102. Interface tiles are hardwired and programmable.[0062] Array controller 106 is communicatively linked to DP array 102 and/or array interface 104. Array controller 106 may be coupled to DP array 102 and/or array interface 104 directly and/or via interconnect 108. In one aspect, array controller 106 is dedicated to configuring DP array 102 and controlling the operation of DP array 102. That is, array controller 106 performs only functions relating to configuration and/or control of DP array 102. Array controller 106 may be implemented as a state machine or as a processor capable of executing program code. In one example, array controller 106 is implemented as a hardwired circuit block. In another example, array controller 106 is implemented using programmable logic. In one or more example implementations, array controller 106 may be omitted. In that case, a processor that may be implemented as one of subsystems 112-120 may perform the operations attributed to array controller 106. In the alternative, a processor external to system 100 may perform the operations attributed to array controller 106.[0063] Interconnect 108 is coupled to array interface 104, array controller 106, and one or more of subsystems 112-120. Interconnect 108 may be implemented as an on-chip interconnect. An example of an on-chip interconnect is an Advanced Microcontroller Bus Architecture (AMBA) eXtensible Interface (AXI) bus. An AXI bus is an embedded microcontroller bus interface for use in establishing on-chip connections between circuit blocks and/or systems. Other example implementations of interconnect 108 may include, but are not limited to, other buses, a crossbar, a Network-on-Chip (NoC), and so forth.
For purposes of illustration, interconnect 108 may include, or be coupled to, a memory controller that is capable of reading and/or writing to one or more memories.[0064] Subsystems 112-120 may represent any of a variety of different types of electronic subsystems and/or circuits. For purposes of illustration, examples of subsystems 112-120 may include, but are not limited to, any combination of a processor or processor system, programmable logic, hardwired circuit blocks (e.g., application-specific circuit blocks), memories, and the like. It should be appreciated that the number of subsystems illustrated in the example of FIG. 1 is for purposes of illustration. System 100 may include more or fewer subsystems than shown. Some example implementations of system 100 may include only DP array 102 or only DP array 102 and one or more array controllers 106, for example.[0065] A processor that is implemented as one of subsystems 112-120 is capable of executing computer-readable instructions. In an example, the processor is implemented as a hardwired processor. In another example, the processor is implemented as a soft-processor using programmable logic. In some cases where a processor is implemented as one of subsystems 112-120, array controller 106 may be omitted. In that case, the processor may be programmed to configure DP array 102 and control the operation of DP array 102.[0066] In another aspect, a processor may be external to the IC including DP array 102. In that case, the processor may be part of another data processing system (e.g., a host computer) that is communicatively linked to the IC including DP array 102. In cases where a processor is included as part of a host computer, the processor may communicate with array controller 106 to control operation of array controller 106. In one aspect, the processor may write runtime data that is executed by array controller 106 to control operation of DP array 102.
In example implementations in which array controller 106 is omitted, the particular processor used to control operation of DP array 102, whether external or implemented within one of subsystems 112-120, may or may not be dedicated for controlling DP array 102.[0067] In an example, one or more of subsystems 112-120 may be implemented as a memory. The memory may be implemented as a random-access memory (RAM). In one example, the memory may be implemented as a High Bandwidth Memory (HBM). The memory, for example, may be a RAM circuit (e.g., an HBM) implemented on the same die as DP array 102 or on a different die within the same IC package. In another aspect, one or more memories may be implemented external to the IC including DP array 102.[0068] In one or more example implementations, certain elements of system 100 such as array controller 106, interconnect 108, and one or more or all of subsystems 112-120 are optional and may be omitted.[0069] FIG. 2 illustrates an example of an implementation flow 200 for generating an application for a DP array. The implementation flow 200 of FIG. 2 may be performed or implemented by a data processing system. An example of a data processing system that is capable of performing implementation flow 200 is described in connection with FIG. 19.[0070] In the example of FIG. 2, application 202 may be provided to a compiler 204. Application 202 may be specified in source code. In one or more examples, application 202 is specified in a high-level programming language such as C and/or C++. In one or more examples, application 202 may be specified as a data flow graph that specifies one or more kernels that are to be compiled and executed by compute tiles of DP array 102.[0071] In general, compiler 204 is capable of generating an executable version of an application that may be executed by DP array 102 (e.g., the compute tiles included therein).
Compiler 204 is also capable of generating a control application that is executable by array controller 106 or other processor for controlling operation of DP array 102. In executing the control application, array controller 106 is capable of loading an application, overlays for the application, and runtime parameters for layers of the application. Array controller 106, in executing the control application, is also capable of initiating workloads in the DP array 102 as configured with an application, overlay, and runtime parameters.[0072] In one or more example implementations, application 202 is a multilayered application. In one example, application 202 is implemented as a neural network. In another example, application 202 may be implemented as a machine learning model. Examples of different types of machine learning models that may be implemented by application 202 may include, but are not limited to, a Convolutional Neural Network (CNN), a Long-Short Term Memory (LSTM) Network, a Deep Learning Recommendation Model (DLRM), or the like.[0073] In one aspect, each different type of machine learning model may be specified as a different application, where the application is built using kernels that are specific to the machine learning model being implemented. Kernels refer to executable program code that may be executed by the compute tiles of DP array 102. Though the kernels are tailored for a particular type of machine learning model, each kernel may be generalized in the sense that certain operative features of the kernel may be altered or configured at runtime through the use of runtime parameters. Thus, depending on the type of machine learning model that is implemented by application 202, application 202 will utilize a different type of kernel. In addition, in one or more example implementations, multiple kernels may be loaded into a same compute tile. 
The particular kernel or kernels to be executed in that case, in a given compute tile, may be selected on a per-layer basis for application 202.[0074] Within this disclosure, a kernel represents one or more functions. In some arrangements, a kernel includes a plurality of different functions. In other arrangements, the program code is arranged so that different functions are implemented as different (e.g., multiple) kernels. In either case, runtime parameters are capable of configuring one or more operational parameters of a kernel. In some cases, the configuration selectively enables/disables one or more functions of a kernel so that the function(s) execute or do not execute. In some cases, runtime parameters may select a particular function or kernel from a plurality of such functions/kernels for execution.[0075] In the example of FIG. 2, application 202 may specify a plurality of layers 1 through M. As an example, each layer 1-M of application 202 may correspond to a particular set of operations referred to as a workload that is performed by the layer. In one example, each layer may specify a particular matrix multiply operation that is to be performed. Different layers may have different dimensions of the matrices that are to be multiplied together. For example, the matrices to be multiplied by layers 1-M may have different numbers of columns and/or different numbers of rows from one layer to the next. For example, two matrix multiply operations that multiply matrices of different dimensions may be considered different matrix multiply operations.[0076] Each layer of application 202 may include one or more particular functions to be performed.
Examples of different functions that may be performed in different layers of application 202 can include, but are not limited to, convolution, General Matrix Multiply (GEMM), Rectified Linear Unit (ReLU), batch normalization, or other function(s) generally known in the field of machine learning and/or neural networks.[0077] As an illustrative and non-limiting example, consider the case where application 202 implements a CNN. The CNN may include different layers 1-M, where the different layers have different dimensions that process differing columns and rows of pixels of an image. Further, for purposes of illustration, layer 1 of application 202 may be a 2-dimensional (2D) convolution layer. Layer 2 of application 202 may be a 2D convolution layer with batch normalization. Layer M of application 202 may be a 2D convolution layer with ReLU. The example application and layers are provided for purposes of illustration and not limitation.[0078] Compiler 204 is capable of receiving application 202 and one or more overlays 206. In one aspect, each of overlays 206 may be a prebuilt definition of how data is to move among tiles of DP array 102 to implement a layer (or a portion of a layer) of application 202 (e.g., a particular machine learning model). In general, overlays 206 represent all possible overlays available for the particular type of machine learning model implemented by application 202. Each overlay 206, for example, may specify a different mode of data movement for the application as implemented in DP array 102. The mode of data movement uses stream channels implemented in DP array 102 by application 202 as compiled. That is, the stream channels established by application 202 may remain in place while different modes of data movement are implemented over time using different ones of overlays 206. Each overlay uses the same stream channel implementation for application 202.
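The relationship just described, in which the stream channels fixed by the compiled application stay in place while each overlay changes only the mode of data movement over them, can be sketched with a toy model. All channel names, data-type labels, and the dictionary format here are illustrative assumptions, not the actual overlay encoding:

```python
# Toy model: stream channels are fixed by the compiled application; an
# overlay only changes what each channel carries (feature maps vs. weights).
STREAM_CHANNELS = ["ch0", "ch1", "ch2", "ch3"]  # fixed by the application

# Illustrative overlays for square-, tall-, and wide-shaped layers.
OVERLAYS = {
    "overlay-1": {"ch0": "feature_map", "ch1": "feature_map",
                  "ch2": "weights", "ch3": "weights"},      # square layers
    "overlay-2": {"ch0": "feature_map", "ch1": "weights",
                  "ch2": "weights", "ch3": "weights"},      # tall layers
    "overlay-3": {"ch0": "feature_map", "ch1": "feature_map",
                  "ch2": "feature_map", "ch3": "weights"},  # wide layers
}

def apply_overlay(name):
    """Return the channel-to-data-type assignment for one overlay.

    The channel set never changes from overlay to overlay; only the
    mode of data movement over those channels does.
    """
    assignment = OVERLAYS[name]
    assert set(assignment) == set(STREAM_CHANNELS)
    return assignment
```

In this model, loading a different overlay at runtime amounts to swapping one assignment for another while the underlying stream channels remain untouched.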
[0079] In one aspect, an overlay may specify data movement via the stream channels by dictating the type of input data that is conveyed over the various stream channels. Examples of different types of input data include feature maps and weights. Some stream channels may convey feature maps while others convey weights. In one aspect, each overlay 206 defines stream channels as logical connections among different tiles of DP array 102 that are needed to implement, e.g., efficiently implement, particular layers of a given machine learning model. Example overlays 206 and the corresponding modes of data movement implemented by the overlays are further illustrated in the example of FIG. 8. [0080] Accordingly, as defined within this disclosure, the term “overlay” means data that is provided to a DP array during runtime of an application implemented therein, where the data defines a mode of data movement in at least a portion of the DP array to implement a particular layer of the application.[0081] Continuing with the example where application 202 specifies a CNN type of machine learning model, each overlay 206 is prebuilt for a CNN type of machine learning model to implement layers of such a machine learning model within DP array 102. In one aspect, each overlay 206 is suited to process data for a layer of application 202 having a particular shape. In the example, overlay 206-1 is capable of efficiently processing data for a square-shaped layer. Overlay 206-2 is capable of efficiently processing data for a tall rectangular-shaped layer. Overlay 206-N is capable of efficiently processing data for a wide rectangular-shaped layer. Thus, in this example, overlays 206 are not limited to processing layers having particular dimensions, though this also may be the case, but rather are intended to handle layers of particular shapes.
It should be appreciated that fewer or more overlays for a given type of application may be created for shapes as described herein or for different shapes.[0082] Compiler 204 is capable of comparing the available, prebuilt overlays 206 with the layers 1-M of the application 202 to determine a mapping of overlays 206 to layers 1-M of application 202. Overlays 206 are particular to the type of application 202. Overlays 206 also may be particular to the architecture of DP array 102. Were application 202 to implement a different type of machine learning model, for example, the prebuilt overlays available for compiler 204 to map to layers of the application would be different. The overlays available would be suited to implement the particular types of data movements needed for the particular type of machine learning model being implemented. Accordingly, the overlays 206 used in the mapping by compiler 204 will include only those overlays that are prebuilt for the particular type of machine learning model implemented by application 202.[0083] In one aspect, compiler 204 is capable of mapping overlays 206 to layers 1-M of application 202 by determining a shape of each layer. The shape may be given by the particular weights or weight matrix of the layer. Compiler 204 is capable of matching the shape of each layer to a particular overlay 206 (e.g., a shape of an overlay 206) that is suited for operating on layers of the determined shape. While same shape and/or similarity in shape is used for purposes of mapping overlays to layers, in another aspect, compiler 204 is capable of determining the dimensions of each layer and mapping that layer to a particular (e.g., one) overlay 206 suited to the layer based on dimensions, which may be used as a proxy for shape.
By mapping overlays 206 to layers 1-M according to shape, the data throughput achieved by DP array 102 in implementing each layer of application 202 using the mapped overlay may be increased or optimized.[0084] Though overlays 206 appear to correspond to the layers of application 202 in the example of FIG. 2 on a one-to-one basis, this need not be the case. That is, compiler 204 may have access to or include a plurality of pre-built overlays 206 for different types of machine learning models that are available for compiling applications. The number of overlays 206 may be higher or lower than the number of layers of the application being compiled.[0085] Compiler 204 is capable of generating an executable version of application 202 shown as application 208. Application 208 is executable by DP array 102. For example, application 208 specifies executable versions of the kernels that are executed by particular ones of the compute tiles of DP array 102. In this regard, application 208 not only specifies kernels, but also may specify which compute tile executes each respective kernel. In one aspect, application 208 utilizes a single, or same, kernel, where each compute tile used to execute application 208 executes an instance of the kernel. The kernel may include a plurality of different and selectable functions. In other examples, each compute tile used to execute application 208 executes an instance of each of a plurality or set of different kernels. The set of kernel instance(s) executed by each compute tile executing application 208 may be the same or different from one compute tile to another. As part of application 208, compiler 204 also generates configuration data that, when loaded into DP array 102, implements the stream channels in DP array 102 that convey data.
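The shape-based overlay-to-layer mapping described above can be sketched as follows. The classification rule, the 2x aspect-ratio threshold, and the overlay names are assumptions chosen for illustration; the actual heuristics used by compiler 204 are not specified here:

```python
def classify_shape(rows, cols, ratio=2.0):
    """Classify a layer's weight-matrix dimensions into a shape category.

    The 2x aspect-ratio threshold is an illustrative assumption.
    """
    if rows >= ratio * cols:
        return "tall"
    if cols >= ratio * rows:
        return "wide"
    return "square"

def map_overlays_to_layers(layer_dims, overlays_by_shape):
    """Map each layer's (rows, cols) to the prebuilt overlay for its shape."""
    return [overlays_by_shape[classify_shape(r, c)] for r, c in layer_dims]

# Example: a square, a tall, and a wide layer (dimensions are illustrative).
mapping = map_overlays_to_layers(
    [(64, 64), (256, 32), (32, 256)],
    {"square": "overlay-1", "tall": "overlay-2", "wide": "overlay-3"},
)
# mapping == ["overlay-1", "overlay-2", "overlay-3"]
```

Using dimensions as a proxy for shape in this way lets the mapping cover layers of arbitrary size, as the passage above notes, rather than only layers with specific dimensions.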
Application 208 may also specify initialization data for the various memories of DP array 102.[0086] As noted, compiler 204 is also capable of generating a control application 214 that is executable by array controller 106. Control application 214 can include a mapping 210 and runtime parameters 212. Mapping 210 specifies which overlay 206 to use for each of layers 1-M of application 208 during execution (e.g., runtime) of application 208. Runtime parameters 212 may be generated for one or more or for each of layers 1-M of application 208. That is, runtime parameters 212 are layer-specific. Further, runtime parameters 212 may be specific to particular compute tiles. In general, runtime parameters 212 may be provided to different compute tiles of DP array 102 during runtime to configure kernels for execution. Runtime parameters 212, for example, may select a particular kernel for execution and/or enable and/or disable particular functions of kernels to execute (e.g., effectuate a change in the execution flow of any of the various kernels being executed by a compute tile). Further details relating to the runtime parameters are described in greater detail below.[0087] In one aspect, control application 214 may specify a schedule that is followed by array controller 106 that initiates implementation of overlays 206 and runtime parameters 212 for the different layers of application 208 during runtime. The schedule further may specify the particular tasks to be performed and an ordering of the tasks to initiate the workloads of the various layers of application 208 during runtime.[0088] In implementing an application in DP array 102, array controller 106 is capable of loading application 208 into program memories of compute tiles, loading configuration data of application 208 into control registers to configure stream switches to implement the stream channels, and initializing memories of DP array 102.
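Taken together, the steps above suggest a simple control loop: configure the DP array with the application once, then for each layer load the mapped overlay and its runtime parameters and initiate the workload. The sketch below models this loop in software; the FakeDPArray stub and all method names are hypothetical, standing in for whatever interface the array controller actually exposes:

```python
class FakeDPArray:
    """Minimal stand-in for a DP array; exists only to make the sketch runnable."""
    def __init__(self):
        self.log = []
    def load_application(self, app):
        self.log.append(("application", app))  # program memories, stream channels
    def load_overlay(self, overlay):
        self.log.append(("overlay", overlay))  # set the mode of data movement
    def load_runtime_parameters(self, rtps):
        self.log.append(("rtp", rtps))         # configure kernels for the layer
    def run_workload(self):
        self.log.append(("workload", None))
        return "done"

def run_control_application(dp_array, application, schedule):
    """Load the application once, then step through the per-layer schedule."""
    dp_array.load_application(application)
    results = []
    for overlay, runtime_params in schedule:
        dp_array.load_overlay(overlay)
        dp_array.load_runtime_parameters(runtime_params)
        results.append(dp_array.run_workload())
    return results
```

Note that in this model the application is loaded exactly once; only the overlay and runtime parameters change between layers, which is the source of the reconfiguration savings described earlier.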
In executing control application 214, array controller 106 is capable of implementing different overlays and loading runtime parameters in DP array 102 for application 208 during runtime per the schedule specified. Further, array controller 106, in executing control application 214, initiates workloads for application 208 corresponding to the different layers of application 208 over time per the schedule. [0089] Within this disclosure, reference is made to loading and executing an application in DP array 102. It should be appreciated that DP array 102 may be subdivided into 1, 2, or more partitions, where each partition may include one or more compute tiles and one or more interface tiles; or, a combination of one or more compute tiles, one or more memory tiles, and one or more interface tiles. Each partition is capable of operating independently of the other partition(s) such that each partition may execute a different application and do so concurrently with other partitions. Accordingly, within this disclosure, references to loading, executing, or implementing an application in a partition of the DP array 102, loading overlays, loading runtime parameters, and/or executing workloads may refer to the case where the entire DP array 102 is viewed as a single partition and such operations are performed for the single partition, or where DP array 102 is subdivided into two or more smaller partitions and the operations are performed for each of the two or more smaller partitions independently under control of one or more array controllers.[0090] FIG. 3 illustrates an example implementation of DP array 102. In the example, DP array 102 includes compute tiles 302, memory tiles 306, and interface tiles 304. Interface tiles 304 are part of array interface 104. In the example, compute tiles 302 and memory tiles 306 are arranged in a grid having a plurality of rows and columns.
Interface tiles 304 are arranged in a row where the individual interface tiles 304 are aligned with the columns of the grid arrangement of DP array 102. Compute tiles 302 include compute tiles 302-1, 302-2, 302-3, 302-4, 302-5, 302-6, 302-7, 302-8, 302-9, 302-10, 302-11, 302-12, 302-13, 302-14, 302-15, 302-16, 302-17, and 302-18. Interface tiles 304 include interface tiles 304-1, 304-2, 304-3, 304-4, 304-5, and 304-6. Memory tiles 306 include memory tiles 306-1, 306-2, 306-3, 306-4, 306-5, and 306-6. In the example, each tile is coupled to an adjacent tile to the left (west), right (east), above (north), and below (south) if such a tile is located in such position(s).[0091] The example of FIG. 3 is provided for purposes of illustration only. The number of tiles in a given column and/or row, the number of tiles included in DP array 102 and/or array interface 104, the sequence or order of tile types (e.g., memory and compute tiles) in a column and/or row is for purposes of illustration and not limitation. Other arrangements may be included with varying numbers of tiles, rows, columns, mixtures of tile types, and the like. For example, rows of FIG. 3 are homogeneous in terms of tile type while columns are not. In other arrangements, rows may be heterogeneous in terms of tile type while columns are homogeneous. Further, additional rows of memory tiles 306 may be included in DP array 102. Such rows of memory tiles 306 may be grouped together without intervening rows of compute tiles 302 or distributed throughout DP array 102 such that rows of compute tiles 302 do intervene between rows or groups of rows of memory tiles 306.[0092] In another example implementation of DP array 102, memory tiles 306 may be omitted such that the bottom row of compute tiles 302 couples directly to interface tiles 304. For example, with memory tiles 306 omitted, interface tile 304-1 would connect directly to compute tile 302-3, etc.
In such cases, the various example implementations described herein may read data from and write data to a memory (e.g., one of subsystems 112-120) in lieu of memory tiles 306. The inclusion of memory tiles 306, however, may increase the data throughput of DP array 102 in that data may be stored closer to compute tiles 302 without having to continually read data from a RAM and/or write data to a RAM external to DP array 102.[0093] FIG. 4 illustrates an example implementation of a compute tile 302. The example of FIG. 4 is provided to illustrate certain architectural features of compute tiles 302 and not as a limitation of the form of DP array 102 or the architecture of compute tiles 302 in general. Some connections between components and/or tiles are omitted for ease of illustration.[0094] In the example, each compute tile 302 includes a core 402, a RAM 404, a stream switch 406, a memory-mapped switch 408 (e.g., abbreviated as “MM” switch in the figures), control registers 414, and a direct memory access (DMA) circuit 434. Core 402 includes a processor 420 and a program memory 422. Control registers 414 may be written by memory-mapped switch 408 to control the operation of the various components included in compute tile 302. Though not shown, each memory component of compute tile 302 (e.g., program memory 422, control registers 414, and RAM 404) may be read and/or written via memory-mapped switch 408 for purposes of configuration and/or initialization.[0095] Processor 420 may be any of a variety of different processor types. In one aspect, processor 420 is implemented as a vector processor. In another example, processor 420 may be implemented as a scalar processor. In another example, processor 420 may include a vector processor and a scalar processor.
Program memory 422 may be loaded, e.g., by way of loading an application, with executable instructions referred to as a “kernel.” Each compute tile 302 is capable of performing data processing operations and operating on a large amount of data through execution of the kernel(s) stored in program memory 422 by processor 420.[0096] Each core 402, e.g., processor 420, is directly connected to the RAM 404 located in the same compute tile 302 through a memory interface 432. Within this disclosure, a memory interface is referred to as a “local memory interface” when the memory interface is used by circuits in the same tile to access a RAM. Memory interface 432-1 is an example of a local memory interface since processor 420 in the same tile utilizes the memory interface to access RAM 404. By comparison, a memory interface used by circuitry external to the tile to access RAM 404 is referred to as an adjacent memory interface. Memory interfaces 432-2, 432-3, and/or 432-4 are examples of adjacent memory interfaces because such memory interfaces are used by circuitry in other adjacent tiles to access RAM 404.[0097] As such, each processor 420 is capable of accessing (e.g., reading and/or writing) the RAM 404 in the same compute tile 302 and one or more other RAMs 404 in adjacent tiles via standard read and write operations directed to such memory interfaces. RAM 404 is configured to store application data. RAM 404 may be read and/or written via memory-mapped switch 408 for purposes of configuration and/or initialization. RAM 404 may be read and/or written by a processor 420 and/or by DMA circuits 434 during runtime.[0098] DMA circuit 434 is capable of reading and writing data to RAM 404 located in the same compute tile 302. DMA circuit 434 may receive data via stream switch 406 from a source outside of compute tile 302 and store such data in RAM 404. 
DMA 434 may read data from RAM 404 and output the data to stream switch 406 for conveyance to one or more other destinations outside of compute tile 302. [0099] Each core 402, e.g., processor 420, may be directly connected to RAMs 404 located in adjacent compute tiles 302 (e.g., in the north, south, east, and/or west directions) via memory interfaces. As such, processor 420 may directly access such other adjacent RAMs 404 in the same manner as processor 420 is able to access the RAM 404 located in the same compute tile 302 without initiating read or write transactions over stream switch 406 and/or without using DMA circuit 434. As an illustrative example, processor 420 of compute tile 302-5 may read and/or write to the RAM 404 located in compute tiles 302-5, 302-2, 302-4, and 302-6 without submitting read or write transactions over stream switches 406 and/or using DMA circuits 434. It should be appreciated, however, that a processor 420 may initiate read and write transactions to the RAM 404 of any other compute tile 302 and/or memory tile 306 via stream switches 406 and DMA circuits 434.[0100] Processors 420 may also include direct connections, referred to as cascade connections (not shown), to processors 420 of adjacent cores (e.g., in the north, south, east, and/or west directions) that allow direct sharing of data stored in internal registers (e.g., an accumulation register) of processor 420 with other processors 420. This means that data stored in one or more internal registers of one processor 420 may be conveyed directly to one or more internal registers of a different processor 420 without first writing such data to RAM 404 and/or conveying such data over stream switches 406 using DMA circuits 434.[0101] In the example of FIG. 4, the loading of application 208 within DP array 102 by array controller 106 loads the executable program code of kernels in the respective program memories 422 of the compute tiles 302. 
Operation of other components of compute tile 302 such as stream switches 406 may be controlled by loading configuration data of application 208 into control registers 414 to implement the stream channels (e.g., logical connections). Different overlays 206 may be loaded to implement different modes of data movement via the stream channels to implement different layers of application 208.[0102] Runtime parameters 212 may be loaded into RAMs 404 by array controller 106. That is, the kernels as executed by processors 420 may include instructions that cause the processor 420 to read values of the runtime parameters 212 from a particular area of RAM 404 that may be reserved for storing runtime parameters 212. Based on the values of any runtime parameters 212 that may be stored in RAM 404, kernel(s) executed by the compute tile 302 may be configured. For example, execution of the kernel(s) may be changed by loading certain runtime parameters 212. In another aspect, processor 420 may execute a function that selects a particular kernel or function of a kernel to be executed based on the runtime parameters 212 read from RAMs 404. It should be appreciated that the particular runtime parameters loaded into RAM 404 of one compute tile 302 may differ from the runtime parameters (if any) loaded into another RAM 404 of another, different compute tile 302. Runtime parameters 212 may be changed for each layer of application 208 implemented.[0103] For purposes of illustration, consider the prior example where application 208 implements a CNN. The runtime parameters 212 for one layer may configure the kernels executed by processors 420 to perform a particular matrix multiply operation. The runtime parameters, for example, may specify the dimension(s) of the matrix multiply operation to be performed. In another example, the runtime parameters 212 may specify particular functions of the kernel to be executed or a different kernel to be executed.
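One way a kernel might act on per-layer runtime parameters read from the reserved region of RAM 404 is sketched below. The parameter names (dims, batch_norm, relu) and the dispatch logic are assumptions for illustration, not the actual kernel interface:

```python
def execute_layer(rtp):
    """Configure a CNN kernel from one layer's runtime parameters.

    `rtp` stands in for the runtime-parameter block a kernel would read
    from a reserved area of its tile's RAM; field names are illustrative.
    """
    ops = ["convolution"]          # base function of the CNN kernel
    if rtp.get("batch_norm"):
        ops.append("batch_norm")   # optional function enabled per layer
    if rtp.get("relu"):
        ops.append("relu")         # optional function enabled per layer
    return {"dims": rtp["dims"], "ops": ops}

# Per-layer parameters mirroring the 2D convolution example layers:
layer_1 = execute_layer({"dims": (224, 224)})
layer_2 = execute_layer({"dims": (112, 112), "batch_norm": True})
layer_m = execute_layer({"dims": (56, 56), "relu": True})
# layer_2["ops"] == ["convolution", "batch_norm"]
```

The same dispatch could instead select among several distinct kernels rather than functions of one kernel, matching the alternative described in the passage above.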
For example, runtime parameters 212 for a first layer may indicate the dimensions of the layer and that a convolution operation is to be performed. Runtime parameters 212 loaded for a different layer may specify different dimensions of the layer and that convolution and batch normalization are to be performed. Runtime parameters 212 loaded for yet a different layer may specify the dimensions of the layer and that convolution and ReLU are to be performed. In this example, the different functions, e.g., convolution, batch normalization, and ReLU may be implemented as different functions of the general CNN kernel that may be selectively executed based on the particular runtime parameters 212 loaded for that layer. That is, the runtime parameters 212 configure the kernel to execute particular functions. In another example, the different functions may be implemented as different kernels that are selected for execution and configured by runtime parameters 212.[0104] FIG. 5 illustrates an example implementation of a memory tile 306. The example of FIG. 5 is provided to illustrate certain architectural features of memory tiles 306 and not as a limitation of the form of DP array 102 or architecture of memory tiles 306 in general. Some connections between components and/or tiles are omitted for ease of illustration.[0105] Each memory tile 306 includes a DMA circuit 502, a RAM 504, a stream switch 506, a memory-mapped switch 508, and/or control registers 514. Control registers 514 may be written by memory-mapped switch 508 to control the operation of the various components illustrated in memory tile 306. 
Though not shown, each memory component of memory tile 306 (e.g., RAM 504 and control registers 514) may be read and/or written via memory-mapped switch 508 for purposes of configuration and/or initialization.[0106] Each DMA circuit 502 of a memory tile 306 is coupled to the RAM 504 within the same memory tile 306 via a local memory interface 532-1 and may be coupled to one or more RAMs 504 of other adjacent memory tiles 306. In the example of FIG. 5, each DMA circuit 502 is capable of accessing (e.g., reading and/or writing) the RAM 504 included within the same memory tile 306 via local memory interface 532-1. RAM 504 includes adjacent memory interfaces 532-2 and 532-3 through which the DMA circuits of the east and west memory tiles 306 may access RAM 504. For example, the DMA circuit 502 of memory tile 306-2 may access the RAM 504 of memory tile 306-1 and/or the RAM 504 of memory tile 306-3. DMA circuit 502 in the example may read and/or write RAMs of adjacent memory tiles 306 by way of adjacent memory interfaces of the RAMs of such other memory tiles. DMA circuit 502 may place data read from RAM 504 onto stream switch 506 and write data received via the stream switch to RAM 504.[0107] Similar to the example of FIG. 4, memory-mapped switch 508 is used for purposes of configuration and initialization of memory tile 306 and stream switch 506 is used for conveying data during runtime. In one aspect, RAM 504 may be initialized as part of the process of loading application 208 into DP array 102. Loading application 208 also loads configuration data into control registers 514 that configure stream switches 506 to implement the stream channels. Different overlays 206 described in connection with FIG. 2 may be loaded to implement particular modes of data movement.[0108] In the examples described herein, certain tiles may include one or more common or similar components such as memory-mapped switches, stream switches, and/or DMA circuits.
It should be appreciated, however, that memory tiles 306 are generally characterized by the lack of a processing element (e.g., processor 420) included therein.[0109] FIG. 6 illustrates an example implementation of an interface tile 304. The example of FIG. 6 is provided to illustrate certain architectural features of interface tiles 304 and not as a limitation of the form of DP array 102. Some connections between components and/or tiles are omitted for ease of illustration.[0110] In the example, each interface tile 304 includes a DMA circuit 602, one or more interfaces 604, a stream switch 606, a memory-mapped switch 608, and control registers 614. In other example implementations, not every interface tile 304 includes a DMA circuit 602. Array interface 104 is operative as an interface between array tiles of DP array 102 and other circuits of system 100 by way of interconnect 108. In the example of FIG. 6, interface tiles 304 couple to memory tiles 306. In other example implementations, interface tiles 304 couple to compute tiles 302 depending on whether DP array 102 includes memory tiles 306 and/or the location of such memory tiles 306 within DP array 102. Through interconnect 108, interface tiles 304 are capable of coupling to one or more other circuits within system 100 and/or external to the system. Such other circuits may include one or more hardwired circuits and/or subsystems, circuits and/or subsystems implemented in programmable logic, or the like.[0111] In the example of FIG. 6, interface(s) 604 are capable of connecting to other systems and/or circuits of the system. For purposes of illustration, interface(s) 604 are capable of coupling to a NoC, to programmable logic, to an embedded processor and/or processor system (independent of DP array 102), to a platform management controller embedded in the IC, and/or one or more other hardwired circuit blocks (e.g., ASIC blocks) within the IC. 
For example, interface 604 may include or provide direct connections to array controller 106 and/or one or more of the subsystems 112-120. In another arrangement, interfaces 604 may be configured to communicate with circuits and/or systems located in the same package as DP array 102 but implemented in a different die within the package. In still another arrangement, interfaces 604 may be configured to communicate with circuits and/or systems located external to the IC that includes DP array 102 (e.g., to circuits and/or systems external to the package).[0112] Interface tiles 304 are capable of conveying data, whether application runtime data via stream switches 606 or an application via memory-mapped switches 608, to the array tiles located above each respective interface tile 304 as received via interconnect 108 and/or sending such data out to other circuits via interconnect 108. Further, interface tiles 304 are configurable by loading an application (e.g., including configuration data) into control registers 614 of each respective interface tile 304 by way of memory-mapped switches 608. Array controller 106, for example, may write the configuration data to control registers 614.[0113] Within DP array 102, taken collectively, the stream switches (406, 506, and 606) form a stream network that is capable of conveying application runtime data (as differentiated from an application itself). Application runtime data includes data that is received, operated on, or generated (e.g., output) by an array tile (e.g., a compute tile 302) of DP array 102 during runtime of an application. Application runtime data is generally stored, during runtime, in RAMs 404 and RAMs 504 and conveyed over the stream channels implemented by the stream switches as configured by the application. Taken collectively, the memory-mapped switches (408, 508, and 608) form a memory-mapped network through which an application may be loaded into DP array 102.
In one aspect, overlays 206 and/or runtime parameters 212 may be conveyed over the memory-mapped network. In another aspect, overlays 206 and/or runtime parameters 212 may be conveyed over the stream network. Tasks that initiate workloads may be conveyed (e.g., to DMA circuits 434, 502, and/or 602) over the memory-mapped network. In another aspect, the tasks may be conveyed over the stream network.[0114] Referring to DP array 102, configuration data written to the control registers (414, 514, and 614) of a tile may also control whether the stream switch of the tile operates as a circuit-switching stream interconnect or a packet-switched stream interconnect. A circuit-switching stream interconnect is capable of implementing point-to-point, dedicated streams that are suitable for high-bandwidth communication among tiles of DP array 102. A packet-switching stream interconnect allows streams to be shared to time-multiplex multiple logical streams onto one physical channel for medium bandwidth communication. As such, stream switches may be configured to implement a packet-switched stream network over which application data may be conveyed.[0115] FIG. 7 illustrates an example of cascade connectivity between compute tiles 302. For purposes of illustration, only a subset of the compute tiles 302 of DP array 102 are illustrated. In the example, processors 420 of cores 402 may be directly connected to one or more other processors 420 of adjacent cores 402. The direct connections between processors 420 are referred to herein as “cascade connections” and are labeled as “CO” in the example of FIG. 7. The cascade connections are operable independently of sharing data via RAMs 404, 504 and/or stream switches. In the example of FIG. 7, each processor 420 is coupled to an adjacent processor 420 via a cascade connection. In other examples, processors 420 may be connected to other processors via a plurality of cascade connections. 
[0116] Each cascade connection may be seen by a processor as an outgoing cascade connection or an incoming cascade connection. For example, the cascade connection from compute tile 302-3 to compute tile 302-6, from the perspective of processor 420 of compute tile 302-6, may be referred to as the incoming cascade connection. The cascade connection from compute tile 302-6 to the adjacent compute tile to the right, from the perspective of processor 420 of compute tile 302-6, may be referred to as the outgoing cascade connection.[0117] Each cascade connection may convey a multi-bit data stream (e.g., up to hundreds of bits in parallel) from one processor 420 to another. In one aspect, the cascade connections are capable of outputting the contents of an accumulation register within processor 420 and conveying the contents, e.g., multiple bits each clock cycle, to another internal register of an adjacent processor 420. The receiving register may feed into or be coupled to the accumulation register in the receiving processor 420. An accumulation register is a type of register included in a processor that acts as a temporary storage location capable of holding an intermediate value generated during operation of the processor. Intermediate results of an operation may be progressively written to the accumulation register, overwriting previous values. As noted, each cascade connection allows data to be conveyed from one processor 420 directly to another processor 420 without first storing the data in a RAM or utilizing a stream switch and/or DMA circuit.[0118] Each cascade connection may be independently enabled so that data is propagated on the cascade connection from one processor 420 to another or disabled so that no data is propagated on the cascade connection. In one aspect, each cascade connection may be selectively enabled based on the program code of the kernel executed by the respective processor 420.
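The accumulator hand-off described above can be modeled with a short sketch. The following Python is illustrative only (the class and attribute names are assumptions, not from the source); it models a processor with an accumulation register and a direct cascade link that bypasses RAM, stream switches, and DMA circuits:

```python
class Processor:
    """Hypothetical model of a tile processor with an accumulation register
    and an optional outgoing cascade connection."""
    def __init__(self):
        self.acc = 0             # accumulation register (intermediate values)
        self.cascade_out = None  # adjacent Processor, if a cascade is enabled

    def multiply_accumulate(self, a, b):
        # intermediate results are progressively written to the accumulator
        self.acc += a * b

    def write_cascade(self):
        # convey the accumulator contents directly to the neighbor's
        # accumulator, without storing the data in a RAM or using a DMA circuit
        if self.cascade_out is not None:
            self.cascade_out.acc += self.acc
```

For example, one processor can compute a partial product, push it over the cascade connection, and the receiving processor can fold it into its own accumulation.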
That is, the program code of the kernel may include instructions that cause a processor 420 to write data to an outgoing cascade connection or to read data from an incoming cascade connection. These instructions may be executed or skipped by way of writing suitable runtime parameters 212 for an overlay 206 that causes a given processor 420 to execute the functions for reading data from and/or writing data to cascade connections.[0119] In another example, runtime parameters 212 may be used to specify addressing used by a processor 420 in executing a kernel. The runtime parameters 212, for example, may be used to shift the addressing so that the processor writes to the RAM 404 in the same compute tile, to a particular adjacent RAM 404, and/or to another memory via DMA circuit and stream switch. In this manner, the movement of data within DP array 102 may be further modified by way of loading appropriate runtime parameters 212 for the respective overlays 206 loaded during runtime of application 208.[0120] In another example, the runtime parameters 212 may select a kernel to execute in a compute tile 302 that is configured to communicate using an incoming and/or outgoing cascade connection or select a different kernel that may be functionally similar or the same but that does not utilize cascade connections.[0121] FIG. 8 illustrates an example in which compute tile 302-1 is configured to operate without the use of a cascade connection to another compute tile. The configuration illustrated in FIG. 8 may be implemented by loading an overlay and optionally runtime parameters into DP array 102.
For purposes of discussion, an overlay that does not utilize cascade connections is referred to herein as a “non-cascade overlay.” Similarly, the mode of operation implemented in DP array 102 by a non-cascade overlay may be referred to as a “non-cascade mode.” In non-cascade mode, processors 420 of compute tiles 302 do not communicate by way of cascade connections.[0122] In the example of FIG. 8, using a non-cascade overlay, compute tiles 302 are configured to perform matrix multiply operations. In other examples, compute tiles 302 may perform other types of operations. For purposes of illustration, DP array 102 is used to multiply matrices A and B to generate matrix C. Each compute tile 302 of a partition of DP array 102 in the non-cascade mode is configured to generate one element of matrix C.[0123] In the example, compute tile 302-1 generates the dot product of the first row of matrix A with the first column of matrix B to generate element C00. That is, compute tile 302-1 is programmed to calculate (A00 x B00) + (A01 x B10). In the example of FIG. 8, the elements A00, B00, A01, and B10 are provided to compute tile 302-1 via one or more input stream channels implemented in the stream network as part of the application.[0124] As such, a DP array (or partition thereof) having 8 compute tiles is capable of generating 8 output elements in parallel. In this configuration using the non-cascade overlay, DP array 102 is capable of computing matrix C in parallel using 4 compute tiles 302. Each of the 4 compute tiles 302 computes one of elements C00, C01, C10, and C11 of matrix C in parallel.[0125] FIG. 9 illustrates an example in which compute tiles 302-1 and 302-2 are configured to operate using a cascade connection. The configuration illustrated in FIG. 9 may be implemented by loading an overlay and optionally runtime parameters into DP array 102.
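The non-cascade computation of FIG. 8, in which each compute tile independently evaluates one full dot product to produce one element of C, can be sketched as follows. This Python is illustrative only; the sequential loop over (i, j) stands in for the compute tiles that would operate in parallel:

```python
def noncascade_matmul(A, B):
    """Illustrative sketch of the non-cascade mode: each (simulated) compute
    tile independently computes one element of C as a full dot product."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # one tile per (i, j) output element
        for j in range(n):
            # e.g., the tile producing C[0][0] evaluates (A00 x B00) + (A01 x B10)
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C
```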
For purposes of discussion, an overlay that does utilize one or more cascade connections is referred to herein as a “cascade overlay.” Similarly, the mode of operation implemented by a cascade overlay may be referred to as a “cascade mode” where processors 420 of selected compute tiles 302 communicate by way of cascade connections. It should be appreciated that in some cases, selected processors 420 may communicate solely using cascade connections whereas in other cases such processors may communicate using a combination of cascade connections and stream channels (e.g., the stream network).[0126] In the example of FIG. 9, using a cascade overlay, compute tiles 302 are configured to perform matrix multiply operations. In other examples, compute tiles 302 may perform other operations. For purposes of illustration, DP array 102 is used to multiply matrices A and B to generate matrix C. In the example of FIG. 9, pairs of compute tiles 302 operate cooperatively to generate one element of the matrix C. FIG. 9 shows that the processors 420 of compute tile 302-1 and compute tile 302-2 are coupled by a cascade connection. As such, compute tile 302-2 is capable of calculating A00 x B00 while compute tile 302-1 is capable of calculating A01 x B10 and summing the products.[0127] For example, A00 and B00 are provided to compute tile 302-2 via one or more input stream channels implemented in the stream network. Elements A01 and B10 are provided to compute tile 302-1 via one or more input stream channels implemented in the stream network. The result of A00 x B00 may be output from the accumulation register of the processor 420 of compute tile 302-2 via a cascade connection to processor 420 of compute tile 302-1. Processor 420 of compute tile 302-1 then computes A01 x B10 and sums the two products.[0128] The configuration of FIG. 9 is capable of computing element C00 of matrix C in less time (e.g., using fewer clock cycles) than the example of FIG.
8, but utilizes two compute tiles 302 rather than one to compute each element of matrix C. Accordingly, a DP array having 8 compute tiles using the cascade mode of FIG. 9 is able to generate 4 elements concurrently as opposed to 8. Each cascade-connected pair of compute tiles 302 is capable of calculating an output element using fewer clock cycles than one compute tile from the example of FIG. 8. In this configuration, using the cascade overlay, computing matrix C may be performed in parallel using all 8 compute tiles of DP array 102 where each set of two cascade-connected compute tiles computes one of C00, C01, C10, and C11 in parallel.[0129] In one or more example implementations, cascade connections may be disabled by the processor 420 of a compute tile 302 executing a non-cascade kernel. A non-cascade kernel is a kernel that does not include any programming or instructions that cause the processor 420 to read data from a cascade connection or write data to a cascade connection. Similarly, cascade connections may be enabled by the processor 420 of a compute tile 302 executing a cascade kernel. A cascade kernel is a kernel that does include programming or instructions that cause the processor 420 to read data from a cascade connection or write data to a cascade connection.[0130] For example, in one or more example implementations, each overlay may specify a particular kernel to be executed by each compute tile 302 to achieve desired connectivity and/or functionality. Upon initial configuration of DP array 102, each program memory 422 may be loaded with one or more different kernels. Each kernel, as executed by the processor 420 in the same compute tile 302, dictates whether cascade connections are to be used. In this example, kernels may be of a first type that uses cascade connections or a second type that does not use cascade connections.
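The paired computation of FIG. 9 can be sketched as follows. This Python is illustrative only; the kernel function names are assumptions chosen to mirror the write-cascade and read-cascade roles, with the cascade connection modeled as the value handed from one function to the other:

```python
def write_cascade_kernel(a, b):
    """Hypothetical write-cascade kernel: computes a partial product and
    'writes' it to the outgoing cascade connection (modeled as the return)."""
    return a * b

def read_cascade_kernel(cascade_in, a, b):
    """Hypothetical read-cascade kernel: reads the partial result from the
    incoming cascade connection, computes its own product, and sums."""
    return cascade_in + a * b

def cascade_pair_element(a_row, b_col):
    """Two cooperating (simulated) tiles compute one output element of C."""
    partial = write_cascade_kernel(a_row[0], b_col[0])       # e.g., A00 x B00
    return read_cascade_kernel(partial, a_row[1], b_col[1])  # + A01 x B10
```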
Of the first type of kernel that uses cascade connections, one or more kernels may be configured to read data from a cascade connection (e.g., a read cascade kernel), one or more kernels may be configured to write data to a cascade connection (e.g., a write cascade kernel), and one or more kernels may be available to read data from a cascade connection and write data to a cascade connection. Another type of kernel, referred to as an activation kernel, also may be included in program memory 422. The activation kernel may implement a selected activation function. In one aspect, the activation kernel may implement the Rectified Linear Unit (ReLU) activation function. It should be appreciated that an activation kernel may implement other activation functions. In an example, the particular kernel(s) to be executed (e.g., cascade and/or non-cascade and/or the particular activation function to be executed) may be specified by runtime parameters 212.[0131] Referring to the example of FIG. 7, compute tiles connected by enabled cascade connections in the cascade mode may operate cooperatively with one another by way of selecting the appropriate kernels for execution. For example, compute tile 302-3 may execute a write cascade kernel that writes data to a cascade connection to send data to compute tile 302-6. Compute tile 302-6 may execute a read cascade kernel that reads data from a cascade connection to receive data from compute tile 302-3 and so forth.[0132] Referring again to the example of FIG. 9, a write cascade kernel executed by compute tile 302-2 may calculate (A00 x B00) and write the result to a cascade connection. A read cascade kernel executed by compute tile 302-1 is capable of reading the result from the incoming cascade connection, calculating (A01 x B10), and summing the results.[0133] FIGS. 10A, 10B, and 10C illustrate certain operative features of example overlays. FIGS.
10A, 10B, and 10C illustrate examples of logical connectivity implemented by different overlays. In the examples of FIGS. 10A, 10B, and 10C, the A terms represent feature maps while the B terms represent weights. The C terms represent the output data items that are generated by operation of the compute tiles 302. In the examples of FIGS. 10A, 10B, and 10C, the overlays are implemented using 4 compute tiles 302. For example, a partition used to implement an application includes 4 compute tiles.[0134] FIG. 10A illustrates an example implementation of an overlay and corresponding mode of data movement. In the example of FIG. 10A, the overlay illustrated is characterized by the broadcasting of weights. The term “broadcast” refers to conveying a same data item over a selected (e.g., single) channel to multiple, different endpoints or destinations. In the example, weights are broadcast to each of the 4 compute tiles 302 over a single stream channel. As shown, the weight B00 is initially broadcast to each compute tile 302. The weight is used as part of a matrix multiply operation with a feature map (A) also provided to the compute tile. The stream channels over which the feature maps are provided are not illustrated. Appreciably, since each of the compute tiles 302 illustrated in FIG. 10A receives a different feature map, 4 stream channels are needed to convey the feature maps (e.g., one stream channel to each of the compute tiles 302 illustrated). No cascade connections are utilized between compute tiles 302 in the example of FIG. 10A.[0135] In this example, each compute tile 302 receives a same weight and a different feature map. For example, compute tile 302-2 initially receives A00 and B00; compute tile 302-1 initially receives A10 and B00; compute tile 302-3 initially receives A20 and B00; and compute tile 302-6 initially receives A30 and B00. Each of compute tiles 302 performs a matrix multiply operation.
Subsequently, weight B10 is broadcast to each of the 4 compute tiles. Compute tile 302-2 receives A01 and B10; compute tile 302-1 receives A11 and B10; compute tile 302-3 receives A21 and B10; and compute tile 302-6 receives A31 and B10. Each compute tile 302 then performs a matrix multiply operation. Each compute tile 302 is capable of summing the results of the two matrix multiply operations and outputting the sum.[0136] FIG. 10B illustrates another example implementation of an overlay and corresponding mode of data movement. In the example of FIG. 10B, the overlay illustrated is characterized by the broadcasting of feature maps. Feature maps are broadcast to each of the 4 compute tiles 302. The feature maps may be broadcast over a single stream channel. As shown, the feature map A00 is initially broadcast to each compute tile 302. The feature map is used as part of a matrix multiply operation with a weight also provided to the compute tile. The stream channels over which the weights are provided are not illustrated. Appreciably, since each of the compute tiles 302 illustrated in FIG. 10B receives a different weight, 4 stream channels are needed to convey the weights (e.g., one to each of the compute tiles 302 illustrated). In this example, each compute tile 302 receives a same feature map and a different weight. For example, compute tile 302-2 initially receives A00 and B00; compute tile 302-1 initially receives A00 and B01; compute tile 302-3 initially receives A00 and B02; and compute tile 302-6 initially receives A00 and B03. Each of the compute tiles 302 performs a matrix multiply operation. Subsequently, compute tile 302-2 receives A01 and B10; compute tile 302-1 receives A01 and B11; compute tile 302-3 receives A01 and B12; and compute tile 302-6 receives A01 and B13. Each compute tile 302 is capable of performing a matrix multiply operation.
Each compute tile 302 is capable of summing the results of the two matrix multiply operations and outputting the sum.[0137] FIG. 10C illustrates another example implementation of an overlay and corresponding mode of data movement. In the example of FIG. 10C, the overlay illustrated is characterized by the broadcasting of multiple weights. A first weight is broadcast over one stream channel to 2 different compute tiles. A second weight is broadcast over one stream channel to 2 different compute tiles. A first stream channel broadcasts weight B00 to compute tiles 302-2 and 302-3, while a second and different stream channel concurrently broadcasts weight B10 to compute tiles 302-1 and 302-6. In this example, two compute tiles 302 are used to perform the two matrix multiply operations and summation, thereby resulting in usage of a larger number of compute tiles with faster operation (higher throughput).[0138] In the example of FIG. 10C, compute tile 302-2 performs a matrix multiply operation of A00 x B00. The result is passed to compute tile 302-1 via a cascade connection. Compute tile 302-1 performs a matrix multiply operation of A01 x B10. Compute tile 302-1 sums the two matrix multiply results and outputs the resulting sum. Compute tile 302-3 performs a matrix multiply operation of A10 x B00. The result is passed to compute tile 302-6 via a cascade connection. Compute tile 302-6 performs a matrix multiply operation of A11 x B10. Compute tile 302-6 sums the two matrix multiply results and outputs the resulting sum.[0139] The examples of FIGS. 10A, 10B, and 10C illustrate how different overlays may implement different modes of data movement for a given application implemented in a partition of DP array 102. For example, in the examples of FIGS. 10A and 10B, the compute tiles each generate an element of the resulting C matrix. In the example of FIG. 10C, two compute tiles are used to compute one element of the resulting C matrix. The example of FIG.
10C requires twice the number of compute tiles of the examples of FIGS. 10A and 10B to generate 4 elements of matrix C, but provides greater data throughput (e.g., greater computational speed in that the element of matrix C may be computed in fewer clock cycles). Each different overlay may be suited to implementing a layer having a particular shape.[0140] FIG. 11 is a table 1100 illustrating attributes of example overlays used to configure an application for a partition of DP array 102. In the example of FIG. 11, each of overlays 0, 1, and 2 implements a particular mode of data movement in DP array 102 or in a partition of DP array 102. Each overlay specifies a mode of data movement based on the parameters shown.[0141] In the example, the “Cascade” column indicates whether the overlay utilizes cascade connections. The “IFM Streams” column, where “IFM” stands for “input feature maps,” specifies the number of different feature maps sent over the stream channels created by an application to the particular compute tiles 302 implementing the overlay. The feature maps may be sent concurrently. The “W Streams” column specifies the number of different weights that are provided over the stream channels created by an application to the particular compute tiles 302 implementing the overlay. The weights may be sent concurrently.[0142] Accordingly, in the example of FIG. 11, overlay 0 implements a mode of data movement referred to as mode 0. In mode 0, the “IFM Streams” parameter of 4 indicates that 4 different feature maps are conveyed over the stream channels. The “W Streams” parameter of 2 indicates that 2 different weights are conveyed over the stream channels. Mode 0 is a non-cascade mode as indicated by the cascade parameter.[0143] In the example of FIG. 11, overlay 1 implements a mode of data movement referred to as mode 1. In mode 1, the “IFM Streams” parameter of 2 indicates that 2 different feature maps are conveyed over the stream channels.
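The overlay attributes of table 1100 can be summarized as configuration data. The sketch below is illustrative only (the dictionary layout and field names are assumptions, not from the source); it records the Cascade, IFM Streams, and W Streams parameters for each of overlays 0, 1, and 2:

```python
# Illustrative representation of table 1100: each overlay defines a mode of
# data movement via its cascade, IFM stream, and weight stream parameters.
OVERLAYS = {
    0: {"cascade": False, "ifm_streams": 4, "w_streams": 2},  # mode 0
    1: {"cascade": False, "ifm_streams": 2, "w_streams": 4},  # mode 1
    2: {"cascade": True,  "ifm_streams": 4, "w_streams": 4},  # mode 2
}

def select_overlay(overlay_id):
    """Return the data-movement mode parameters for the requested overlay."""
    return OVERLAYS[overlay_id]
```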
The “W Streams” parameter of 4 indicates that 4 different weights are conveyed over the stream channels. Mode 1 is a non-cascade mode as indicated by the cascade parameter.[0144] In the example of FIG. 11, overlay 2 implements a mode of data movement referred to as mode 2. In mode 2, the “IFM Streams” parameter of 4 indicates that 4 different feature maps are conveyed over the stream channels. The “W Streams” parameter of 4 indicates that 4 different weights are conveyed over the stream channels. Mode 2 is a cascade mode as indicated by the cascade parameter.[0145] FIG. 12A illustrates an example of the stream channels implemented by an application and the implementation of overlay 0 using the stream channels. In the example of FIG. 12A, the different stream channels used to convey feature maps and weights to compute tiles 302 are depicted as stream channels 0, 1, 2, 3, 4, 5, 6, and 7. In the example, since the stream channels are providing data to compute tiles 302, the stream channels are considered “input” stream channels. Stream channels 0-7 convey feature maps and weights to the respective compute tiles 302. The particular overlay that is implemented defines which stream channels convey which particular weights and which stream channels convey which particular feature maps.[0146] For purposes of illustration and convenience, in FIGS. 12A, 12B, and 12C, the tiles are renumbered. Further, DP array 102, or a partition thereof, includes 8 compute tiles and 2 memory tiles in the examples.[0147] In the example of FIG. 12A, different data items (e.g., feature maps and/or weights) may be provided over the various stream channels 0-7 by feeding the data items to the various stream channels from different buffers located in memory tiles 306. That is, by connecting a particular buffer to a particular stream channel, the stream channel will convey the type of data item contained in that buffer.
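The buffer-to-stream-channel feeding just described can be sketched as a simple mapping. The names below are illustrative assumptions, not from the source; the point is that retargeting a stream channel to a different buffer changes which data item the channel conveys, while the channel itself stays in place:

```python
# Illustrative sketch: buffers hold data items; an overlay pairs each stream
# channel with the buffer that feeds it.
buffers = {"B0": "F0", "B1": "F1", "B2": "W0"}   # buffer -> stored data item
channel_to_buffer = {0: "B0", 1: "B1", 2: "B2"}  # overlay-defined pairing

def data_on_channel(channel):
    """A stream channel conveys whatever data item its source buffer holds."""
    return buffers[channel_to_buffer[channel]]
```

Reprogramming `channel_to_buffer` (as an overlay does by configuring the DMA circuits) changes the mode of data movement without altering the stream channels created by the application.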
As discussed, in cases where memory tiles 306 are omitted, data may be fed to stream channels 0-7 from other buffers stored in other memories, whether on-chip memories or off-chip memories.[0148] In the example of FIG. 12A, 4 different feature maps are conveyed with 2 different weights. Each of 4 different stream channels conveys a different feature map (F0, F1, F2, and F3). RAM 504 of memory tile 306-1 includes buffers B0, B1, and B2. RAM 504 of memory tile 306-2 includes buffers B3, B4, and B5. Buffer B0 stores feature map F0. Buffer B1 stores feature map F1. Buffer B2 stores weight W0. Buffer B3 stores weight W1. Buffer B4 stores feature map F2. Buffer B5 stores feature map F3.[0149] In the example of FIG. 12A, buffer B0 feeds stream channel 0. Stream channel 0 is configured to broadcast feature map F0 to each of compute tiles 302-1 and 302-2. Buffer B1 feeds stream channel 1. Stream channel 1 is configured to broadcast feature map F1 to each of compute tiles 302-3 and 302-4. Stream channel 2 is fed data from buffer B2. Stream channel 2 is configured to broadcast weight W0 to each of compute tiles 302-1 and 302-6. Stream channel 3 is fed data from buffer B2. Stream channel 3 is configured to broadcast weight W0 to each of compute tiles 302-3 and 302-8. Stream channel 4 is fed data from buffer B3. Stream channel 4 is configured to broadcast weight W1 to each of compute tiles 302-2 and 302-5. Stream channel 5 is fed data from buffer B3. Stream channel 5 is configured to broadcast weight W1 to each of compute tiles 302-4 and 302-7. Stream channel 6 is fed data from buffer B4. Stream channel 6 is configured to broadcast feature map F2 to each of compute tiles 302-6 and 302-5. Stream channel 7 is fed data from buffer B5. Stream channel 7 is configured to broadcast feature map F3 to each of compute tiles 302-8 and 302-7.[0150] In the example of FIG.
12A, the particular data item, e.g., particular feature map and/or weight, provided to each stream channel depends on the configuration of memory tiles 306 and, more particularly, the particular buffer (B0, B1, B2, B3, B4, and B5) in memory that is used to supply data to each respective stream channel. The overlays dictate the buffer to stream channel pairings by configuring the DMA circuits within the respective tiles (e.g., memory tiles 306 and compute tiles 302 in this example).[0151] Overlay 0 may be implemented in a partition of DP array 102 by array controller 106 programming the DMA circuits of memory tiles 306 with a particular buffer to stream channel mapping. In another aspect, where data is obtained from a memory other than memory tiles 306, DMA circuits of other tiles such as interface tiles 304 that access the other memories to provide data to compute tiles 302 may be programmed with a particular buffer to stream channel mapping. Array controller 106 implements overlay 0 of FIG. 12A, for example, by writing data to the appropriate DMA circuits to create the mapping of buffers to stream channels shown. Further, the buffers B0-B5 may be moved into memory tiles 306 from other memories by way of array controller 106 programming the DMA circuits of the interface tiles 304 and/or memory tiles 306 to move such data to implement a layer (e.g., the overlay) of the application.[0152] The particular kernel(s) and/or function(s) thereof that are executed in the respective processors 420 of each compute tile 302 provide the executable instructions necessary to correctly process the data received via the different stream channels.
Just as the data provided over the stream channels may change from one overlay to another, so too may the particular kernel(s) and/or function(s) executed in the various compute tiles 302, based on the configuration of such kernel(s) by providing appropriate runtime parameters 212 to the respective compute tiles for each overlay that is implemented. The runtime parameters 212 provided to each compute tile 302 ensure that the kernel(s) executed by the processor 420 therein interprets and applies the received data correctly in performing any computations for the particular layer being implemented based on the corresponding overlay that is used.[0153] In one or more other example implementations, each overlay may select the kernels to be executed in the respective compute tiles and runtime parameters 212 may configure such kernels.[0154] In the example of FIG. 12A, each compute tile 302 outputs a result via the output stream channels illustrated in FIG. 13. One or more of the compute tiles 302 may also be configured to execute an activation kernel subsequent to execution of the non-cascade kernel.[0155] FIG. 12B illustrates an example of the stream channels implemented by an application and the implementation of overlay 1 using the stream channels. The stream channels illustrated in FIG. 12B are input stream channels. In the example of FIG. 12B, the stream channels 0-7 are the same as described in connection with FIG. 12A. That is, FIGS. 12A and 12B illustrate stream channels implemented by a same application and may remain in place as different overlays are implemented. Accordingly, in the example of FIG. 12B, each of stream channels 0-7 provides data to the same compute tiles 302 as in the example of FIG. 12A.[0156] In the example of FIG. 12B, different data items (e.g., feature maps and/or weights) may be provided over the various stream channels 0-7 by feeding the data items to the various stream channels from different buffers located in memory tiles 306.
That is, by connecting a particular buffer to a particular stream channel, the stream channel will convey the type of data item contained in that buffer. As discussed, in cases where memory tiles 306 are omitted, data may be fed to stream channels 0-7 from other buffers stored in other memories, whether on-chip memories or off-chip memories.
[0157] In the example of FIG. 12B, 2 different feature maps are conveyed with 4 different weights. RAM 504 of memory tile 306-1 includes buffers B0, B1, and B2. RAM 504 of memory tile 306-2 includes buffers B3, B4, and B5. Buffer B0 stores feature map F0. Buffer B1 stores weight W0. Buffer B2 stores weight W1. Buffer B3 stores weight W2. Buffer B4 stores weight W3. Buffer B5 stores feature map F1.
[0158] In the example of FIG. 12B, 4 stream channels are used to convey feature maps. A first pair of the 4 stream channels conveys the same feature map (e.g., F0). A second pair of the 4 stream channels conveys the same feature map (e.g., F1), which differs from the feature map conveyed by the first pair of stream channels. Four stream channels are used to convey 4 different weights.
[0159] In the example of FIG. 12B, buffer B0 feeds stream channels 0 and 1. With stream channels 0 and 1 being fed data from the same buffer, each conveys the same data, which is feature map F0 in this case. Stream channel 0 is configured to broadcast feature map F0 to each of compute tiles 302-1 and 302-2. Stream channel 1 is configured to broadcast feature map F0 to each of compute tiles 302-3 and 302-4. Stream channel 2 is fed data from buffer B1. Stream channel 2 is configured to broadcast weight W0 to each of compute tiles 302-1 and 302-6. Stream channel 3 is fed data from buffer B2. Stream channel 3 is configured to broadcast weight W1 to each of compute tiles 302-3 and 302-8. Stream channel 4 is fed data from buffer B3. Stream channel 4 is configured to broadcast weight W2 to each of compute tiles 302-2 and 302-5.
Stream channel 5 is fed data from buffer B4. Stream channel 5 is configured to broadcast weight W3 to each of compute tiles 302-4 and 302-7. Stream channel 6 and stream channel 7 are fed data from the same buffer B5. Stream channel 6 is configured to broadcast feature map F1 to each of compute tiles 302-6 and 302-5. Stream channel 7 is configured to broadcast feature map F1 to each of compute tiles 302-8 and 302-7.
[0160] In the example of FIG. 12B, feature maps F0 and F1 and weights W0, W1, W2, and W3 are provided to compute tiles 302 from memory tiles 306. The particular data item, e.g., particular feature map and/or weight, provided to each stream channel depends on the configuration of memory tiles 306 and, more particularly, the particular buffer (B0, B1, B2, B3, B4, and B5) in memory that is used to supply data to each respective stream channel. The overlays dictate the buffer-to-stream-channel pairings by configuring the DMA circuits within the respective tiles (e.g., memory tiles 306 in this example).
[0161] Overlay 1 may be implemented in a partition of DP array 102 by array controller 106 programming the DMA circuits of memory tiles 306 with a particular buffer-to-stream-channel mapping. In another aspect, where data is obtained from a memory other than memory tiles 306, DMA circuits of other tiles, such as interface tiles 304 that access the other memories to provide data to compute tiles 302, may be programmed with a particular buffer-to-stream-channel mapping. Array controller 106 implements overlay 1 of FIG.
12B, for example, by writing data to the appropriate DMA circuits to create the mapping of buffers to stream channels shown and to move data to create the buffers within the memory tiles 306 as illustrated.
[0162] The particular kernel(s) and/or function(s) thereof that are executed in the respective processors 420 of each compute tile 302 provide the executable instructions necessary to correctly process the data received via the different stream channels. Just as the data provided over the stream channels may change from one overlay to another, so too may the particular kernel(s) and/or function(s) executed in the various compute tiles 302, based on the configuration of such kernel(s) through appropriate runtime parameters 212 provided to the respective compute tiles for each overlay that is implemented. The runtime parameters 212 provided to each compute tile 302 ensure that the kernel(s) executed by the processor 420 therein interpret and apply the received data correctly in performing any computations for the particular layer being implemented based on the corresponding overlay that is used.
[0163] In one or more other example implementations, each overlay may select the kernels to be executed in the respective compute tiles and runtime parameters 212 may configure such kernels.
[0164] In the example of FIG. 12B, each compute tile 302 outputs a result via the output stream channels illustrated in FIG. 13. One or more of the compute tiles 302 may also be configured to execute an activation kernel subsequent to execution of the non-cascade kernel.
[0165] FIG. 12C illustrates an example of the stream channels implemented by an application and the implementation of overlay 2 using the stream channels. The stream channels illustrated in FIG. 12C are input stream channels. In the example of FIG. 12C, the stream channels 0-7 are the same as described in connection with FIGS. 12A and 12B. That is, FIGS.
12A, 12B, and 12C illustrate stream channels implemented by the same application; the stream channels may remain in place as different overlays are implemented. Accordingly, in the example of FIG. 12C, each of stream channels 0-7 provides data to the same compute tiles 302 as in the example of FIG. 12B.
[0166] In the example of FIG. 12C, 4 different feature maps are conveyed with 4 different weights. RAM 504 of memory tile 306-1 includes buffers B0, B1, B2, and B3. RAM 504 of memory tile 306-2 includes buffers B4, B5, B6, and B7. Buffer B0 stores feature map F0. Buffer B1 stores feature map F1. Buffer B2 stores weight W0. Buffer B3 stores weight W1. Buffer B4 stores weight W2. Buffer B5 stores weight W3. Buffer B6 stores feature map F2. Buffer B7 stores feature map F3.
[0167] As noted, overlay 2 is a cascade overlay implementing a cascade mode. In the example of FIG. 12C, selected processors 420 of compute tiles 302 are connected, e.g., configured to communicate, using cascade connections. In the cascade mode, the cascade connections, e.g., at least selected ones of the cascade connections, are enabled. That is, enabled ones of the cascade connections are able to pass data. Though the example of FIG. 12C utilizes vertical cascade connections (e.g., cascade connections between processors in a same column), it should be appreciated that cascade connections may run horizontally (row-wise) and/or vertically (column-wise) in accordance with the particular DP array architecture and overlay that is implemented.
[0168] Cascade connections may be enabled, for example, by the processor 420 of a compute tile 302 executing a kernel and/or function that is configured, by way of runtime parameters 212, to write data to an outgoing cascade connection, while another kernel and/or function in another processor 420 coupled to the same cascade connection is configured, by way of runtime parameters 212, to read data from an incoming cascade connection. In the example of FIG.
12C, the cascade-connected pairs of compute tiles are compute tiles (302-1 and 302-3); (302-2 and 302-4); (302-5 and 302-7); and (302-6 and 302-8).
[0169] In the example of FIG. 12C, being configured to implement overlay 2 for the application, each of stream channels 0-7 is fed data from a different buffer stored in memory tiles 306. In the example of FIG. 12C, each of stream channels 0-7 is fed data from a respective one of buffers B0, B1, B2, B3, B4, B5, B6, and B7. In the example of FIG. 12C, 4 stream channels are used to convey 4 different feature maps and 4 stream channels are used to convey 4 different weights.
[0170] In consequence, stream channel 0 is configured to broadcast feature map F0 to each of compute tiles 302-1 and 302-2. Stream channel 1 is configured to broadcast feature map F1 to each of compute tiles 302-3 and 302-4. Stream channel 2 is configured to broadcast weight W0 to each of compute tiles 302-1 and 302-6. Stream channel 3 is configured to broadcast weight W1 to each of compute tiles 302-3 and 302-8. Stream channel 4 is configured to broadcast weight W2 to each of compute tiles 302-2 and 302-5. Stream channel 5 is configured to broadcast weight W3 to each of compute tiles 302-4 and 302-7. Stream channel 6 is configured to broadcast feature map F2 to each of compute tiles 302-5 and 302-6. Stream channel 7 is configured to broadcast feature map F3 to each of compute tiles 302-7 and 302-8.
[0171] Overlay 2 may be implemented in a partition of DP array 102 by array controller 106 programming the DMA circuits of memory tiles 306 with a particular buffer-to-stream-channel mapping. In another aspect, where data is obtained from a memory other than memory tiles 306, DMA circuits of other tiles, such as interface tiles 304 that access the other memories to provide data to compute tiles 302, may be programmed with a particular buffer-to-stream-channel mapping. Array controller 106 implements overlay 2 of FIG.
12C, for example, by writing data to the appropriate DMA circuits to create the mapping of buffers to stream channels and to create the buffers illustrated in the example of FIG. 12C.
[0172] The particular kernel(s) and/or function(s) thereof that are executed in the respective processors 420 of each compute tile 302 provide the executable instructions necessary to correctly process the data received via the different stream channels. Just as the data provided over the stream channels may change from one overlay to another, so too may the particular kernel(s) and/or function(s) executed in the various compute tiles 302, based on the configuration of such kernel(s) through appropriate runtime parameters 212 provided to the respective compute tiles for each overlay that is implemented. The runtime parameters 212 provided to each compute tile 302 ensure that the kernel(s) executed by the processor 420 therein interpret and apply the received data correctly in performing any computations for the particular layer being implemented based on the corresponding overlay that is used.
[0173] In one or more other example implementations, each overlay may select the kernels to be executed in the respective compute tiles and runtime parameters 212 may configure such kernels.
[0174] The examples of FIGS. 12A, 12B, and 12C illustrate that by loading overlays into a partition of a DP array, different data may be distributed throughout tiles of the partition, thereby achieving different modes of data movement among the tiles. The different modes of data movement may be achieved at least by virtue of sending different weights and/or feature maps through different ones of the established stream channels. This allows different modes of data movement to be implemented for a same application. That is, for a given application specifying kernels to be executed by compute tiles and particular stream channels, the different modes may be implemented without reconfiguring DP array 102.
[0175] FIG.
13 illustrates another example of the stream channels implemented by an application. The example of FIG. 13 illustrates output stream channels for the application. That is, the stream channels illustrated in FIG. 13 may be implemented by the same application referenced in FIGS. 12A, 12B, and 12C to output data from compute tiles 302 of the partition illustrated for the different overlays described.
[0176] In the example of FIG. 13, stream channels (e.g., output stream channels) 0, 1, 2, 3, and 4 are implemented. The output stream channels, like the input stream channels previously described, may be implemented by configuring the stream switches of the various tiles included in the partition. In the example, stream channel 0 conveys output data items (e.g., C) generated by compute tiles 302-1 and 302-2 to memory tile 306-1 (or other memory as discussed). Stream channel 1 conveys output data items generated by compute tiles 302-3 and 302-4 to memory tile 306-1. Stream channel 2 conveys output data items generated by compute tiles 302-5 and 302-6 to memory tile 306-2. Stream channel 3 conveys output data items generated by compute tiles 302-7 and 302-8 to memory tile 306-2.
[0177] In cases where a cascade overlay is used, the stream channel located at the end (e.g., destination tile) of the set of cascade-connected compute tiles 302 may be used. The stream channels indicated with dashed lines (0 and 3), for example, would not be used. Rather, stream channels 1 and 2 would be used to convey the output data items generated by compute tiles 302-3, 302-4, 302-7, and 302-8 to memory tiles 306-1 and 306-2.
[0178] In one or more other example implementations, the kernels executing in the compute tiles 302 illustrated in FIG. 13 may be configured using runtime parameters to control where output data items are directed or written.
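The output-routing decision just described, in which a kernel's result goes either to an output stream channel or onward over a cascade connection, can be sketched as follows. The function and argument names are hypothetical; this is a model of the decision, not the actual kernel code.

```python
# Illustrative sketch of how a runtime parameter could steer a kernel's
# output either to an output stream channel (non-cascade overlay) or to an
# outgoing cascade connection (cascade overlay). All names are invented.

def route_output(use_cascade, is_last_in_cluster, stream_channel, cascade_peer):
    """Decide where a compute tile's kernel writes its result."""
    if use_cascade and not is_last_in_cluster:
        # Intermediate tile of a cascade cluster: forward the partial
        # result to the next tile over the cascade connection.
        return ("cascade", cascade_peer)
    # Non-cascade overlay, or the destination tile of a cascade cluster:
    # write the result to the tile's output stream channel.
    return ("stream", stream_channel)

# Non-cascade overlay: compute tile 302-1 writes to output stream channel 0.
print(route_output(False, False, stream_channel=0, cascade_peer="302-3"))
# Cascade overlay: compute tile 302-1 forwards to compute tile 302-3, which
# then writes the combined result to output stream channel 1.
print(route_output(True, False, stream_channel=0, cascade_peer="302-3"))
print(route_output(True, True, stream_channel=1, cascade_peer=None))
```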
Kernels may be configured, by way of runtime parameters, to write data to the appropriate addresses (e.g., a particular stream switch or an outgoing cascade interface) for each overlay. For example, while implementing a non-cascade overlay, the kernel executed by compute tile 302-1 directs output to output stream channel 0. The kernel executed by compute tile 302-3 directs output to output stream channel 1. By way of comparison, when implementing a cascade overlay, the kernel executed by compute tile 302-1 directs output to compute tile 302-3 via the cascade connection. The kernel executed by compute tile 302-3 directs output to output stream channel 1.
[0179] Within this disclosure, different overlays have been described. It should be appreciated that other overlays may be implemented that use more than 1 cascade connection to link more than 2 compute tiles 302. That is, while the cascade mode illustrated herein is created using computing clusters of 2 compute tiles 302, in other arrangements, computing clusters of 3, 4, or more compute tiles 302 linked by cascade connections may be formed. Further, a partition of DP array 102 may be configured by loading an application and then loading overlays sequentially over time, corresponding to different layers of the application being executed. This allows the partition to perform the workload for a given layer of the application entirely, or in part in an iterative manner where the size of a layer is larger than the partition. It should be appreciated that the dimensions of any matrix multiply operations performed by a partition may vary from those illustrated, particularly from one workload (e.g., overlay/mode) to another.
[0180] FIG. 14 illustrates an example method 1400 showing certain operative features of system 100 of FIG. 1. For purposes of illustration, array controller 106 is capable of performing the operations described in connection with method 1400.
It should be appreciated that in other example implementations, a processor may perform the operations attributed to array controller 106. Further, in other example implementations, a processor is capable of providing instructions to array controller 106 for controlling operation of DP array 102.
[0181] In the example of FIG. 14, reference is made to a partition of DP array 102. As discussed, a partition may encompass the entirety of DP array 102 or a subset of the tiles of DP array 102. Method 1400 may be performed for either type of partition. Further, an array controller may perform the operations of FIG. 14 for multiple partitions operating concurrently. In other example implementations, the operations described in connection with FIG. 14 may be performed by two or more different array controllers operating concurrently to control different partitions, each implementing a different application. Each partition may operate independently of the other regardless of whether the partitions are under control of a same array controller or different array controllers.
[0182] In block 1402, array controller 106 loads an application into a partition of DP array 102. The DP array 102 includes a plurality of compute tiles each having a processor. The application specifies kernels executable by the processors and implements stream channels that convey data to the plurality of compute tiles (e.g., input stream channels). The application also implements output stream channels.
[0183] For example, loading an application into DP array 102 performs an initial configuration of the partition of DP array 102.
In performing block 1402, array controller 106 is capable of loading the executable kernels into the program memories 422 of the compute tiles 302 of the partition, initializing any memory of the partition (e.g., RAMs 404 of compute tiles 302 and/or RAMs 504 of memory tiles 306), and implementing the stream channels by loading configuration data into control registers 414, 514, and/or 614. The loading of the application, which includes initialization data and configuration data, may be performed by array controller 106 writing such data via the memory-mapped network formed of the memory-mapped switches of the tiles.
[0184] In block 1404, array controller 106 is capable of loading an overlay corresponding to a layer of the application that is to be executed by the partition of DP array 102.
[0185] In one aspect, each overlay specifies a different mapping of buffers to stream channels implemented by the application. Each buffer may include a particular data type (e.g., feature map or weight). Further, each buffer may include a particular element of the data type. In one or more examples, implementing a selected overlay of the plurality of overlays is performed by array controller 106 programming a plurality of DMA circuits to convey data from particular buffers to selected ones of the compute tiles via selected ones of the stream channels.
[0186] In another aspect, the mode of data movement of each overlay is characterized by a number of input feature maps and a number of weights conveyed over the stream channels.
[0187] In one aspect, sequentially implementing the plurality of overlays includes, for each overlay, programming a plurality of DMA circuits with a different mapping of buffers to the stream channels.
As an example, a selected overlay may be implemented in the partition for the application by programming a plurality of DMA circuits to convey data from particular buffers to selected ones of the compute tiles via selected ones of the stream channels.
[0188] In another aspect, sequentially implementing the plurality of overlays includes setting up the various buffers that are mapped to the stream channels. Array controller 106 is capable of moving data, by programming the DMA circuits of interface tiles 304 and/or memory tiles 306, for example, to create the various buffers mapped to the stream channels to include the correct data.
[0189] In one aspect, the application implements a neural network. Each layer of the neural network is mapped to one of the plurality of overlays. Different ones of the plurality of overlays are loaded over time to implement respective layers of the neural network.
[0190] In one example, array controller 106 is capable of executing a control application specifying a schedule stored in memory. The schedule specifies workloads to be executed by the application as implemented in the partition. The workloads may be generated by compiler 204. The schedule may specify which overlays are to be loaded as part of a sequence of overlays to be loaded for the application to perform the sequence of workloads (e.g., to implement the layers of the application and perform a workload for each layer). In another aspect, another processor, such as a host processor, may instruct array controller 106 to initiate loading of a particular overlay in the partition of the DP array 102. In that case, the other processor dictates the schedule or sequence of overlays to be implemented in DP array 102 by array controller 106.
[0191] In block 1406, array controller 106 loads runtime parameters into the partition for the overlay loaded in block 1404. Each layer of the application may be associated with a set of runtime parameters.
The runtime parameters may be compute tile specific. The runtime parameters configure the various kernels for execution. Accordingly, in block 1406, array controller 106 selects the runtime parameters for the layer being implemented by the overlay loaded into the partition in block 1404 and loads the runtime parameters into RAMs 404 of compute tiles 302. The runtime parameters that are loaded may be for one or more selected compute tiles or all compute tiles of the partition of DP array 102.
[0192] In one aspect, array controller 106 is capable of, for a selected overlay of the plurality of overlays, providing a runtime parameter to a selected compute tile of the plurality of compute tiles. The runtime parameter configures an operational parameter of a kernel executed by the selected compute tile. For example, the runtime parameter is used by a processor of the selected compute tile in executing the kernel stored therein to change an operational feature of the selected compute tile. It should be appreciated, however, that the runtime parameters that are loaded may be for one or more selected compute tiles or all compute tiles of the partition of DP array 102.
[0193] In one aspect, a runtime parameter for a selected compute tile is capable of changing the execution flow of the kernel executed by the selected compute tile. For example, the kernel may be configured to read values from the runtime parameters and, based on the values read, selectively execute particular functions (e.g., execute particular functions and/or skip execution of particular functions).
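The runtime-parameter-driven execution flow described in [0193] can be sketched as a kernel that branches on the parameter values it reads. This is an illustrative model only; the parameter names ("apply_relu", "scale") are invented for the sketch and do not come from the disclosure.

```python
# Hypothetical sketch of a kernel that reads runtime parameters and
# selectively executes functions based on the values read, e.g., running
# or skipping an activation function per layer/overlay.

def kernel(data, runtime_params):
    """Process `data`, branching on the runtime parameters for this layer."""
    out = [x * runtime_params.get("scale", 1) for x in data]
    if runtime_params.get("apply_relu", False):
        # Execute the activation function only when the runtime parameters
        # loaded for this overlay request it.
        out = [max(0, x) for x in out]
    return out

# Layer A: activation enabled; Layer B: activation skipped.
print(kernel([-2, 3], {"apply_relu": True}))  # [0, 3]
print(kernel([-2, 3], {}))                    # [-2, 3]
```

The same kernel binary thus behaves differently from layer to layer purely through the parameters loaded into the compute tile's RAM.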
Thus, as different runtime parameters are loaded into the partition of the DP array during runtime for different layers, functionality and/or runtime behavior of kernels of the application may be modified.
[0194] This allows each kernel to execute different operations based on the particular runtime parameter values read for the different layers being implemented and in accordance with the overlay used for each layer. For example, different layers of the application may utilize different functions such as matrix multiply, convolution, batch normalization, ReLU, other activation functions, or other operations. The runtime parameters loaded for an overlay may specify which of the functions available in the kernel or in different kernels are to be executed on a per compute tile basis for a given overlay. A runtime parameter may cause a kernel to execute an activation function, for example, or to skip it, depending on the value of the runtime parameter.
[0195] Accordingly, the particular function(s) executed by each kernel may depend on the runtime parameters loaded into the compute tile and may change from one layer to another based on the particular runtime parameters loaded. For purposes of illustration, the last compute tile 302 in a cascade-connected configuration may be instructed to execute an activation function while the other compute tiles 302 in the cascade-connected configuration may not.
[0196] In one or more examples, the runtime parameter is capable of activating or deactivating a cascade connection between a selected compute tile and at least one other compute tile of the plurality of compute tiles. For example, the runtime parameter may cause the processor of the selected compute tile to provide data to another compute tile by writing to an outgoing cascade connection or receive data from another compute tile by reading from an incoming cascade connection.
[0197] In one example, the overlays correspond to particular layers of the application.
In that case, for each layer, the runtime parameter specifies one or more dimensions of the particular layer as implemented using the overlay loaded into the partition for that layer. For example, a runtime parameter may specify at least one of a number of rows of a matrix to be processed or a number of columns of the matrix to be processed.
[0198] In one or more example implementations, a runtime parameter may cause a kernel to read from and/or write to a particular location (e.g., memory) in DP array 102. For example, the runtime parameter may cause the kernel to read from and/or write to a local RAM 404, a particular RAM 404 of an adjacent compute tile, and/or a RAM 504 of a particular memory tile 306.
[0199] In another aspect, the runtime parameters may specify or select the particular kernel(s) of a plurality of kernels in the compute tiles to be executed in the respective compute tiles. In other aspects, the overlay may specify the kernel(s) to be executed with the runtime parameters configuring the respective kernels.
[0200] In block 1408, the partition of the DP array 102 performs a workload as configured by the application and based on the overlay and the runtime parameters. In response to completing the workload, method 1400 may loop back to block 1404 where array controller 106 is capable of starting the process anew for a different layer of the application.
[0201] For example, in one aspect, array controller 106, in implementing a next layer of the application, loads a different overlay into the partition of DP array 102 for that layer. In that case, array controller 106 may continue and load runtime parameters for the different overlay. In another aspect, the overlay to be used for the next layer may be the same overlay used for the prior layer of the application. In that case, array controller 106 may leave the overlay loaded and proceed to block 1406.
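The per-layer loop of blocks 1404-1408 described above, including the case where an overlay is left in place for consecutive layers, can be sketched as follows. The data structures and the operation log are hypothetical; a real array controller would program DMA circuits and write tile memories rather than append to a list.

```python
# Illustrative model of method 1400's control flow: for each layer, load
# the layer's overlay (skipping the load if it is already in place), load
# the layer's runtime parameters, then run the workload.

def run_application(layers):
    log = []
    current_overlay = None
    for layer in layers:
        if layer["overlay"] != current_overlay:
            log.append(("load_overlay", layer["overlay"]))
            current_overlay = layer["overlay"]
        log.append(("load_runtime_params", layer["params"]))
        log.append(("run_workload", layer["name"]))
    return log

# Layers 1 and 2 share overlay "A", so overlay "A" is loaded only once;
# runtime parameters are still loaded for every layer.
schedule = run_application([
    {"name": "layer1", "overlay": "A", "params": "p1"},
    {"name": "layer2", "overlay": "A", "params": "p2"},
    {"name": "layer3", "overlay": "B", "params": "p3"},
])
print(sum(1 for op, _ in schedule if op == "load_overlay"))  # 2
```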
The runtime parameters may or may not be the same.
[0202] Method 1400 illustrates that during runtime of the application, the plurality of overlays are sequentially implemented in the partition of DP array 102. Each overlay implements a different mode of data movement in DP array 102 using the stream channels. As noted, each overlay may be used to implement a particular layer of the application in the partition. For each overlay (e.g., layer) implemented, a workload may be performed by moving data to the plurality of compute tiles based on the respective mode of data movement.
[0203] For example, sequentially implementing a plurality of overlays can include implementing a first overlay of the plurality of overlays to perform a first workload including a first matrix multiply operation. A second overlay of the plurality of overlays can be implemented to perform a second workload including a second matrix multiply operation. The first matrix multiply operation and the second matrix multiply operation can be of different dimensions. In one aspect, the linking of a particular buffer to an input stream channel for purposes of conveying data may be configured by the loading of an overlay. That is, while the input stream channels may be established in terms of connectivity to particular tiles, the buffer from which each such input stream channel obtains data to provide to a tile is determined by the overlay that is loaded into DP array 102.
[0204] The different layers of the application may be implemented in the partition since different overlays and runtime parameters may be loaded into the partition of DP array 102 without loading a different application into DP array 102 that loads different kernels into the compute tiles or modifies the stream channels.
[0205] As discussed, DP array 102 may be subdivided into a plurality of partitions. Each partition may include a subset of the plurality of compute tiles.
Each partition is adapted to concurrently implement a different application and sequentially implement a plurality of different overlays specific to the application executed by the partition.
[0206] The inventive arrangements described within this disclosure provide efficient and flexible techniques for adapting a DP array to implement different layers of a machine learning or other layered application. Loading an application, as compared to loading an overlay, may be time-consuming, as the size of the application (e.g., including the kernels and configuration data) is large compared to the size of an overlay and/or runtime parameters. Thus, the application may be loaded at the start and adapted to different workloads through loading of overlays and runtime parameters. Were one to attempt to reconfigure an entire partition of the DP array for each layer (e.g., with a new application for each layer), the DP array would lose significant clock cycles undergoing continued reconfiguration. By separating certain elements, e.g., application from data movement, the DP array may be adapted for different layers of the application without incurring a substantial timing penalty for reconfiguration. Further, the DP array operates in a more computationally efficient manner for each of the respective layers of the application.
[0207] In one or more other example implementations, the application loaded into the DP array may cause multiple kernels to be loaded into RAMs 404 of compute tiles. In that case, the runtime parameters may be used to select the particular kernel that is executed for each overlay, wherein each kernel is adapted for the data movement of the overlay that is loaded.
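The kernel-selection scheme of [0207], where several kernels reside in a compute tile's memory and the runtime parameters choose which one executes for a given overlay, can be sketched as follows. The kernel names and parameter key are invented for illustration.

```python
# Hypothetical sketch of per-overlay kernel selection: the application
# loads several kernels into a compute tile's RAM, and the runtime
# parameters select which kernel executes for the current overlay.

KERNELS = {
    "matmul": lambda a, b: sum(x * y for x, y in zip(a, b)),  # dot product
    "add": lambda a, b: [x + y for x, y in zip(a, b)],
}

def execute(runtime_params, a, b):
    """Run the kernel named by the runtime parameters for this overlay."""
    kernel = KERNELS[runtime_params["kernel"]]
    return kernel(a, b)

print(execute({"kernel": "matmul"}, [1, 2], [3, 4]))  # 11
print(execute({"kernel": "add"}, [1, 2], [3, 4]))     # [4, 6]
```

Different compute tiles may receive different parameter values, so the kernel selected for one tile may differ from the kernel selected for another, as the next sentence notes.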
As such, the particular kernel selected for execution for a given compute tile 302 may differ from the particular kernel selected for execution for a different compute tile 302.
[0208] In one aspect, array controller 106 is capable of providing tasks to task queues of the various DMA circuits 434, 502, 602 to move data into and out from DP array 102. In one example, as each task completes, the DMA circuits are capable of generating a notification that the task has completed, thereby allowing array controller 106 to track the progress of the workload as performed by DP array 102.
[0209] As discussed, the overlays specify particular input buffers to be used to feed data into the input stream channels that are established in DP array 102 and/or particular output buffers to receive data from the output stream channels. The input and/or output buffers specified may differ from one overlay to another.
[0210] FIG. 15 illustrates an example in which DP array 102 includes multiple partitions each controlled by array controller 106. In the example of FIG. 15, DP array 102 is partitioned into a plurality of partitions 1502, 1504. Each partition 1502, 1504 includes one or more compute tiles 302, optionally one or more memory tiles 306 (e.g., if included in DP array 102), and one or more interface tiles 304.
[0211] In the example of FIG. 15, a single array controller 106 is capable of controlling operation of multiple partitions. Each of partitions 1502, 1504 is capable of operating independently of the other, though under control of array controller 106. As such, partition 1502 may implement one application while, e.g., concurrently, partition 1504 implements a different application. Array controller 106 is capable of controlling each partition in terms of loading an application, loading overlays, loading runtime parameters, and initiating workloads for layers of the application.
[0212] FIGS.
16A, 16B, 16C, 16D, 16E, 16F, and 16G illustrate different example architectures for an IC including DP array 102 and array controller 106. In the example of FIG. 16A, the IC includes programmable logic 1602, which is used to implement array controller 106. In one aspect, array controller 106 may be implemented as a state machine circuit. In another example, array controller 106 may be implemented as a soft processor. A soft processor refers to a processor, e.g., a circuit capable of executing program code, that is formed or implemented using programmable logic 1602.
[0213] In one or more examples, array controller 106 may execute control application 214 from a memory (not shown) to control operation of DP array 102. In another example implementation, array controller 106 may operate under control of processor 1604. Processor 1604 may be implemented as a hardwired processor.
[0214] The example of FIG. 16B may operate substantially as described in connection with FIG. 16A with the exception that array controller 106 may be implemented as a hardwired circuit block. In one aspect, array controller 106 may be implemented as a state machine circuit. In another example, array controller 106 may be implemented as a processor capable of executing program code.
[0215] In the example of FIG. 16C, more than one array controller is implemented, shown as array controller 106-1 and array controller 106-2. In one example, both array controllers 106-1 and 106-2 are implemented in programmable logic 1602. In one aspect, array controller 106-1 may be allocated or apportioned a particular subset of tiles of DP array 102, e.g., partition 1502, while array controller 106-2 may be allocated another non-overlapping subset of tiles of DP array 102, e.g., partition 1504. For example, viewing DP array 102 as a grid of columns 1-N, array controller 106-1 may control tiles in columns 1-(M-1), while array controller 106-2 controls tiles in columns M-N, where M and N are integers and M<N.
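The column-wise allocation just described can be sketched as a simple split. The function name and the returned structure are hypothetical; the sketch only models the non-overlapping assignment of columns 1-(M-1) and M-N to two controllers.

```python
# Illustrative sketch of the column-wise partitioning described above:
# viewing the DP array as columns 1-N, one array controller is allocated
# columns 1-(M-1) and a second controller columns M-N.

def partition_columns(n, m):
    """Split columns 1..n into two non-overlapping partitions at column m."""
    assert 1 < m <= n, "expects integers with 1 < M <= N"
    return {
        "controller_1": list(range(1, m)),      # columns 1 .. M-1
        "controller_2": list(range(m, n + 1)),  # columns M .. N
    }

parts = partition_columns(n=8, m=5)
print(parts["controller_1"])  # [1, 2, 3, 4]
print(parts["controller_2"])  # [5, 6, 7, 8]
```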
In one aspect, each subset of tiles may be considered a partition that is independent of the other partition. Each partition may implement and execute a different application therein and be controlled completely independently of the other partition. The tiles and stream channels within different partitions in the examples provided herein are isolated from one another.[0216] In one or more examples, each array controller 106-1 and 106-2 of FIG. 16C may execute its own control application 214 from a memory (not shown) to control operation of the respective partitions of DP array 102. In another example implementation, array controllers 106-1 and 106-2 may operate under control of processor 1604. Processor 1604 may be implemented as a hardwired processor or as a soft processor. In either case, processor 1604 may control each of array controllers 106-1 and 106-2 independently to effectuate independent operation of the partitions controlled by each respective array controller. For example, processor 1604 may write the control applications 214 to memories accessible by array controllers 106-1 and 106-2.[0217] The example of FIG. 16D may operate substantially as described in connection with FIG. 16C with the exception that array controller 106-1 and array controller 106-2 each may be implemented as a hardwired circuit block. The array controllers may be implemented as state machine circuits or as processors capable of executing program code.[0218] In one or more other example implementations, array controller 106-1 of FIG. 16C and/or 16D may be implemented using programmable logic 1602 (e.g., as a state machine circuit or a soft processor) while array controller 106-2 is implemented as a hardwired circuit block (e.g., an ASIC block) implementing a state machine circuit or a processor.[0219] In the example of FIG. 16E, processor 1604 is not implemented or embedded in the IC. 
For example, processor 1604 may be implemented as an x86 type of processor or another type of processor having another instruction set architecture. Processor 1604 may be disposed in, or part of, another data processing system to which the IC is communicatively linked.[0220] In one or more examples, each array controller 106-1 and 106-2 may execute its own control application 214 from a memory (not shown) to control operation of the respective partitions of DP array 102. In another example implementation, array controllers 106-1 and 106-2 may operate under control of processor 1604. In the various examples described herein, an array controller operating under control of a processor may involve processor 1604 writing the control application 214 executed by the array controller to the memory accessible by the array controller for execution.[0221] In the example of FIG. 16E, the IC does not include any programmable logic. Accordingly, array controllers 106-1 and 106-2 may be implemented as hardwired circuit blocks (e.g., ASIC circuit blocks). In the example of FIG. 16E, array controllers 106-1 and/or 106-2 may be implemented as hardwired state machine circuits or hardwired processors.[0222] The example of FIG. 16F may operate substantially as described in connection with FIG. 16E with the exception that the IC does include programmable logic 1602. Accordingly, one or both of array controllers 106-1 and 106-2 may be implemented using programmable logic, whether as a state machine or a soft processor.[0223] In the example of FIG. 16G, the IC architecture includes a single array controller 106 that is implemented as a hardwired circuit block (e.g., an ASIC block). The array controller 106 may be implemented as a hardwired state machine circuit or a hardwired processor. The single array controller may control more than one partition (e.g., partitions 1502, 1504) of DP array 102 through execution of control application 214.[0224] In the example of FIG.
16H, the IC architecture includes programmable logic 1602. In the example of FIG. 16H, the IC includes a single array controller 106 that is implemented using programmable logic 1602. The array controller 106 may be implemented as a state machine circuit or a soft processor. The single array controller may control more than one partition (e.g., partitions 1502, 1504) of DP array 102 through execution of control application 214.[0225] In the examples of FIGS. 16A, 16B, 16C, 16D, 16E, 16F, 16G, and 16H, the particular number of array controllers 106 shown is provided for purposes of illustration. One, two, or more array controllers 106 may be included in the IC to control DP array 102. In one aspect, the array controllers 106 correspond on a one-to-one basis with partitions implemented in DP array 102. For example, each array controller 106 may be dedicated to controlling a particular partition of DP array 102. Each array controller 106 may control the loading of applications, loading of overlays and runtime parameters, and initiation of workloads for their respective partitions of DP array 102. In other examples, the array controller to partition ratio need not be one-to-one. [0226] In initiating the workloads, array controller 106 is capable of providing pointers (e.g., memory addresses) to the partition of DP array 102 being controlled to specify input data (e.g., feature maps and weights) to be processed from buffers. Each array controller 106 can further provide control information. In one aspect, array controllers 106 are capable of writing tasks to the various DMA circuits of tiles within their respective partitions. For purposes of illustration, the tasks may specify buffer descriptors, pointers, and/or control data.
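The task-queue mechanism described above can be modeled in a few lines. This is a toy sketch with hypothetical names (the disclosure does not define this API); it shows a controller writing a buffer-descriptor task to a DMA circuit's queue and receiving a completion notification back:

```python
from collections import deque

class DmaCircuit:
    """Toy model of a DMA circuit with a task queue. An array controller
    writes tasks - here just a pointer and a length - and receives a
    notification as each task completes."""
    def __init__(self):
        self.tasks = deque()
        self.notifications = []

    def write_task(self, pointer: int, length: int):
        # The array controller programs the DMA circuit by queuing a task.
        self.tasks.append((pointer, length))

    def process_one(self):
        # Completing a task produces a notification the controller can use
        # to track the progress of the workload.
        pointer, _length = self.tasks.popleft()
        self.notifications.append(("complete", pointer))
        return self.notifications[-1]

dma = DmaCircuit()
dma.write_task(pointer=0x1000, length=256)  # e.g., a feature-map buffer
note = dma.process_one()                    # ("complete", 0x1000)
```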
The tasks may, for example, cause DMA circuits to move data to create buffers, program the DMA circuits to map particular buffers to particular stream channels, and/or specify pointers to data to provide data items to the compute tiles 302. Each DMA circuit, for example, may include one or more task queues. Array controllers 106 may write tasks to these task queues as part of executing control application 214. As an illustrative and non-limiting example, array controllers 106 are capable of writing tasks to, e.g., programming, the DMA circuits via the various communication mechanisms described herein (e.g., memory-mapped switches and/or stream switches, via direct connections, and/or via connections to interfaces 604 of interface tiles 304) to effectuate movement of data. For example, array controllers 106 may implement overlays by writing buffer descriptors or other data to the DMA circuits.[0227] For purposes of illustration, referring to the example of FIG. 10B, array controller 106 may create buffers in memory tile 306. Array controller 106 may provide a pointer specifying an address for A00 to a DMA circuit of a memory tile 306 so that the DMA circuit transfers A00 via a stream channel to compute tile 302-2. Similarly, array controller 106 is capable of providing another pointer specifying an address for A01 to the DMA circuit of the memory tile 306 so that the DMA circuit transfers A01 via a stream channel to compute tile 302-2. Array controller 106 is capable of continually providing pointers to convey the various data items illustrated so that the partition may perform the workload for each given layer using the correct sequence of operations based on the overlay that is used.[0228] In performing the functionality described herein, array controllers 106 alleviate the workload imposed on other processors, whether embedded in the IC itself or implemented external to the IC and located within a host data processing system.
Though the size of DP array 102 is relatively small in the example figures disclosed herein for purposes of illustration, DP array 102 may include hundreds of tiles in various configurations. Thus, the number of data transfers and data movement operations required to keep DP array 102 operating at or near full capacity may be significant. Inclusion of one or more array controllers 106 frees up significant processing resources (e.g., clock cycles) of other processors. Further, including such controllers on the same IC as DP array 102 facilitates more efficient operation and greater data throughput.[0229] In one or more example implementations, array controller(s) 106 are capable of controlling operation of compute tiles 302, interface tiles 304, and memory tiles 306. In some arrangements, array controller(s) 106 may not control operation of compute tiles 302. For example, compute tiles 302 may operate under control of the kernels executed by the respective processors 420 of compute tiles 302. As noted, runtime parameters provided to compute tiles 302 may vary the functionality of kernels. In one or more other example implementations, array controller(s) 106 may control operation of compute tiles 302, interface tiles 304, and memory tiles 306.[0230] FIG. 17 illustrates an example method 1700 of operation of an IC including a DP array 102. Method 1700 illustrates various operations performed by array controller 106 to execute workloads using DP array 102.[0231] In block 1702, array controller 106 loads an application into a partition of DP array 102. The application includes a plurality of kernels that are executable by the compute tiles 302. More particularly, the kernels are executable by the processors 420 of the compute tiles 302.
As discussed, the application loads kernels into compute tiles of the partition, initializes memories of the partition, and implements stream channels (e.g., input and output stream channels) for conveying data to the compute tiles and outputting data from the compute tiles.[0232] In block 1704, the array controller 106 loads an overlay to implement a layer of the application in the partition. The array controller 106 also loads runtime parameters for the layer.[0233] In block 1706, array controller 106 initiates a workload in the partition configured by the application, the overlay, and the runtime parameters. Array controller 106 is capable of initiating the workload by writing tasks to the DMA circuits of the tiles. The tasks, as specified by the control application, sequence the layers and the operations necessary to implement each layer. The tasks may move data to create buffers. The tasks may specify addresses of data, e.g., feature maps and weights, as contained in the buffers, to convey the data to the compute tiles over respective ones of the stream channels. The tasks may specify pointers to output buffers to be used in writing data generated by the compute tiles.[0234] In one or more example implementations, instructions executed by array controller 106 may be pre-generated by compiler 204. The instructions may be embodied as the control application 214 including mapping 210 and runtime parameters 212 and specifying the schedule described herein. Array controller 106 is capable of executing the instructions at runtime to execute the application and perform the various operations described herein.[0235] In another aspect, the schedule of the control application 214 specifies the number of times that each partition, in implementing an application as programmed with an overlay and runtime parameters, is to iterate to complete a given layer.
That is, in some cases, a partition may be able to implement an entire layer of the application without having to perform loops. In other cases, the layer is broken out into sections where the partition iterates a number of times (e.g., corresponding to the number of sections) to complete the workload of a layer. It should be appreciated that the control application, as generated by the compiler 204, controls this aspect of operation of each partition for the different layers of the application being executed.[0236] After block 1706, method 1700 can loop back to block 1704 to continue processing further workloads. As such, the array controller is capable of controlling the loading of applications, overlays, and runtime parameters into the partition and sequencing workloads by providing pointers and/or control information to the DP array 102.[0237] In one or more other example implementations, where DP array 102 is partitioned into a plurality of partitions and includes a plurality of array controllers 106, each array controller may be dedicated to controlling a particular partition of DP array 102. In such cases, each array controller is capable of independently controlling a partition of DP array 102. For example, each array controller 106 is capable of performing the operations described herein in connection with FIG. 17 with respect to the partition controlled by that array controller. Thus, DP array 102 may implement multiple applications therein independently, wherein each application executes in a different partition controlled by a different array controller 106.[0238] Further, each partition may implement different overlays over time under control of the particular array controller for that partition.
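The control flow of method 1700, including the loop back from block 1706 to block 1704 for each layer, can be sketched as follows. The class and method names are hypothetical stand-ins, not from the disclosure; the stand-in partition merely records the sequence of control operations:

```python
class Partition:
    """Hypothetical stand-in for a DP array partition; records the
    control sequence issued by an array controller."""
    def __init__(self):
        self.log = []
    def load_application(self, app):
        self.log.append(("app", app))
    def load_overlay(self, overlay):
        self.log.append(("overlay", overlay))
    def load_runtime_parameters(self, params):
        self.log.append(("params", params))
    def initiate_workload(self):
        self.log.append(("workload",))

def run_application(partition, application, layers):
    partition.load_application(application)      # block 1702 (once)
    for overlay, params in layers:               # one iteration per layer
        partition.load_overlay(overlay)          # block 1704
        partition.load_runtime_parameters(params)
        partition.initiate_workload()            # block 1706; loop to 1704

p = Partition()
run_application(p, "conv_net",
                [("overlay_L1", {"stride": 1}), ("overlay_L2", {"stride": 2})])
# p.log now shows: application loaded once, then overlay/params/workload per layer
```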
The overlays implemented by each partition will differ based on the application executed by each respective partition. This allows each partition to operate independently, with a dedicated array controller 106 controlling the loading of applications, overlays, and runtime parameters and the sequencing of workloads by providing pointers and/or control information.[0239] FIG. 18 illustrates additional operative features of array controller 106. In the example of FIG. 18, array controller 106 is capable of issuing tasks 1802 to array interface 104. Array controller 106 is further capable of receiving notifications 1804 indicating when particular tasks performed by compute tiles 302 have completed execution. In one aspect, notifications received by array controller 106 may be received via memory-mapped switches, via stream switches, and/or as interrupts provided through another interface that couples the particular tile or component issuing the interrupt with array controller 106.[0240] In this manner, array controller 106 is capable of continuing to provide tasks to DP array 102 so that DP array 102, or a plurality of partitions in DP array 102, may operate continually without intervention or involvement of a host processor (e.g., from a host computer). As an illustrative and non-limiting example, array controller 106 is capable of initiating data transfers among the DMA circuits of interface tiles 304 and/or memory tiles 306 to provide data to compute tiles 302 and receive data generated by compute tiles 302. Array controller 106 is capable of continuing to store tasks in task queues of DMA circuits so that such DMA circuits may operate continually so long as tasks remain to be processed.[0241] FIG. 19 illustrates an example implementation of a data processing system 1900.
As defined herein, the term "data processing system" means one or more hardware systems configured to process data, each hardware system including at least one processor and memory, wherein the processor is programmed with computer-readable instructions that, upon execution, initiate operations. Data processing system 1900 can include a processor 1902, a memory 1904, and a bus 1906 that couples various system components including memory 1904 to processor 1902.[0242] Processor 1902 may be implemented as one or more processors. In an example, processor 1902 is implemented as a central processing unit (CPU). Processor 1902 may be implemented as one or more circuits capable of carrying out instructions contained in program code. The circuit may be an integrated circuit or embedded in an integrated circuit. Processor 1902 may be implemented using a complex instruction set computer architecture (CISC), a reduced instruction set computer architecture (RISC), a vector processing architecture, or other known architectures. Example processors include, but are not limited to, processors having an x86 type of architecture (IA-32, IA-64, etc.), Power Architecture, ARM processors, and the like.[0243] Bus 1906 represents one or more of any of a variety of communication bus structures. By way of example, and not limitation, bus 1906 may be implemented as a Peripheral Component Interconnect Express (PCIe) bus. Data processing system 1900 typically includes a variety of computer system readable media. Such media may include computer-readable volatile and non-volatile media and computer-readable removable and non-removable media.[0244] Memory 1904 can include computer-readable media in the form of volatile memory, such as random-access memory (RAM) 1908 and/or cache memory 1910. Data processing system 1900 also can include other removable/non-removable, volatile/non-volatile computer storage media. 
By way of example, storage system 1912 can be provided for reading from and writing to a non-removable, non-volatile magnetic and/or solid-state media (not shown and typically called a "hard drive"). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 1906 by one or more data media interfaces. Memory 1904 is an example of at least one computer program product.[0245] Memory 1904 is capable of storing computer-readable program instructions that are executable by processor 1902. For example, the computer- readable program instructions can include an operating system, one or more application programs, other program code, and program data. Processor 1902, in executing the computer-readable program instructions, is capable of performing the various operations described herein that are attributable to a computer. It should be appreciated that data items used, generated, and/or operated upon by data processing system 1900 are functional data structures that impart functionality when employed by data processing system 1900. As defined within this disclosure, the term "data structure" means a physical implementation of a data model's organization of data within a physical memory. As such, a data structure is formed of specific electrical or magnetic structural elements in a memory. A data structure imposes physical organization on the data stored in the memory as used by an application program executed using a processor.[0246] Data processing system 1900 may include one or more Input/Output (I/O) interfaces 1918 communicatively linked to bus 1906. 
I/O interface(s) 1918 allow data processing system 1900 to communicate with one or more external devices and/or communicate over one or more networks such as a local area network (LAN), a wide area network (WAN), and/or a public network (e.g., the Internet). Examples of I/O interfaces 1918 may include, but are not limited to, network cards, modems, network adapters, hardware controllers, etc. Examples of external devices also may include devices that allow a user to interact with data processing system 1900 (e.g., a display, a keyboard, and/or a pointing device) and/or other devices such as an accelerator card.[0247] Data processing system 1900 is only one example implementation. Data processing system 1900 can be practiced as a standalone device (e.g., as a user computing device, a server, or a bare metal server), in a cluster (e.g., two or more interconnected computers), or in a distributed cloud computing environment (e.g., as a cloud computing node) where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media, including memory storage devices.[0248] The example of FIG. 19 is not intended to suggest any limitation as to the scope of use or functionality of example implementations described herein. Data processing system 1900 is an example of computer hardware that is capable of performing the various operations described within this disclosure. In this regard, data processing system 1900 may include fewer components than shown or additional components not illustrated in FIG. 19 depending upon the particular type of device and/or system that is implemented. The particular operating system and/or application(s) included may vary according to device and/or system type, as may the types of I/O devices included.
Further, one or more of the illustrative components may be incorporated into, or otherwise form a portion of, another component. For example, a processor may include at least some memory.[0249] Data processing system 1900 is an example of a computer that is capable of executing the software framework illustrated in the example of FIG. 2. Data processing system 1900 is also an example of a computer that may be communicatively linked to an IC or system as described herein with a DP array, where data processing system 1900 uses the IC/system as an accelerator. For example, processor 1902 may be a “host processor.”[0250] While the disclosure concludes with claims defining novel features, it is believed that the various features described within this disclosure will be better understood from a consideration of the description in conjunction with the drawings. The process(es), machine(s), manufacture(s) and any variations thereof described herein are provided for purposes of illustration. Specific structural and functional details described within this disclosure are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the features described in virtually any appropriately detailed structure. Further, the terms and phrases used within this disclosure are not intended to be limiting, but rather to provide an understandable description of the features described.[0251] For purposes of simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. 
Further, where considered appropriate, reference numbers are repeated among the figures to indicate corresponding, analogous, or like features.[0252] As defined herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.[0253] As defined herein, the terms "at least one," "one or more," and "and/or," are open-ended expressions that are both conjunctive and disjunctive in operation unless explicitly stated otherwise. For example, each of the expressions "at least one of A, B, and C," "at least one of A, B, or C," "one or more of A, B, and C," "one or more of A, B, or C," and "A, B, and/or C" means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.[0254] As defined herein, the term “automatically” means without human intervention. As defined herein, the term "user" means a human being. [0255] As defined herein, the term "computer readable storage medium" means a storage medium that contains or stores program code for use by or in connection with an instruction execution system, apparatus, or device. As defined herein, a "computer readable storage medium" is not a transitory, propagating signal per se. A computer readable storage medium may be, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. The various forms of memory, as described herein, are examples of computer readable storage media. 
A non-exhaustive list of more specific examples of a computer readable storage medium may include: a portable computer diskette, a hard disk, a RAM, a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an electronically erasable programmable read-only memory (EEPROM), a static random-access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, or the like.[0256] As defined herein, the term "if" means "when" or "upon" or "in response to" or "responsive to," depending upon the context. Thus, the phrase "if it is determined" or "if [a stated condition or event] is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]" or "responsive to detecting [the stated condition or event]" depending on the context.[0257] As defined herein, the term "responsive to" and similar language as described above, e.g., "if," "when," or "upon," means responding or reacting readily to an action or event. The response or reaction is performed automatically. Thus, if a second action is performed "responsive to" a first action, there is a causal relationship between an occurrence of the first action and an occurrence of the second action. The term "responsive to" indicates the causal relationship.[0258] As defined herein, the term "processor" means at least one circuit capable of carrying out instructions contained in program code. 
The circuit may be an integrated circuit or embedded in an integrated circuit.[0259] As defined herein, the term "substantially" means that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations, and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.[0260] The terms first, second, etc. may be used herein to describe various elements. These elements should not be limited by these terms, as these terms are only used to distinguish one element from another unless stated otherwise or the context clearly indicates otherwise.[0261] In some alternative implementations, the operations noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In other examples, blocks may be performed generally in increasing numeric order while in still other examples, one or more blocks may be performed in varying order with the results being stored and utilized in subsequent or other blocks that do not immediately follow. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, may be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
A Bitcoin mining hardware accelerator is described. A System on Chip implementing a Bitcoin mining hardware accelerator may include a processor core and a hardware accelerator coupled to the processor core, the hardware accelerator to mine digital currency. The hardware accelerator may include a first computational block, including a message digest datapath, wherein the first computational block is to: precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1), and store a result of the first summation in a state register (Hi). The Bitcoin mining hardware accelerator may further include a second computational block comprising a message scheduler datapath.
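A rough software analogue of the precomputation described above is shown below. In the actual design this is done with hardware adders in the message digest datapath; this Python sketch (names hypothetical) only illustrates folding the three 32-bit operands into one value ahead of the round, with arithmetic modulo 2^32:

```python
MASK32 = 0xFFFFFFFF  # SHA-256 state and message words are 32-bit

def precompute_hi(w_i: int, k_i: int, g_prev: int) -> int:
    """Fold Wi + Ki + Gi-1 into a single 32-bit value ahead of the round,
    mirroring the precomputed sum stored in state register Hi."""
    return (w_i + k_i + g_prev) & MASK32

# 0x428A2F98 is the first SHA-256 round constant (K0)
h_i = precompute_hi(0x12345678, 0x428A2F98, 0x9ABCDEF0)
```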
Claims

What is claimed is:

1. A System on Chip (SoC) comprising:
a processor core; and
a hardware accelerator coupled to the processor core, the hardware accelerator to mine digital currency, the hardware accelerator comprising:
a first computational block comprising a message digest datapath, wherein the first computational block is to:
precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and
store a result of the first summation in a state register (Hi); and
a second computational block comprising a message scheduler datapath.

2. The SoC of claim 1, wherein the first computational block is further to:
compute a complement of a content of a second shifted state register (Di-1);
compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and
store a result of the second summation in a state register (Ai).

3. The SoC of claim 2, wherein to precompute the first summation of the 32-bit message, the first computational block is further to add a content of a shifted register (Bi-2) to the first summation.

4. The SoC of claim 1, wherein the first computational block is further to:
precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and
store a result of the second summation in a state register (Ai).

5. The SoC of claim 1, wherein the second computational block is to distribute a computation of a new message word across three cycles.

6. The SoC of claim 1, wherein the second computational block is to distribute a computation of a new message word across six cycles.

7. The SoC of claim 6, wherein the message scheduler datapath comprises nine logic gates.

8. The SoC of claim 1, wherein the digital currency is a Bitcoin.

9.
A logic device to mine digital currency, comprising:
a first computational block comprising a message digest datapath, wherein the first computational block is to:
precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and
store a result of the first summation in a state register (Hi); and
a second computational block comprising a message scheduler datapath.

10. The logic device of claim 9, wherein the first computational block is further to:
compute a complement of a content of a second shifted state register (Di-1);
compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and
store a result of the second summation in a state register (Ai).

11. The logic device of claim 10, wherein to precompute the first summation of the 32-bit message, the first computational block is further to add a content of a shifted register (Bi-2) to the first summation.

12. The logic device of claim 9, wherein the first computational block is further to:
precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and
store a result of the second summation in a state register (Ai).

13. The logic device of claim 9, wherein the second computational block is to distribute a computation of a new message word across three cycles.

14. The logic device of claim 9, wherein the second computational block is to distribute a computation of a new message word across six cycles.

15. The logic device of claim 9, wherein the digital currency is a Bitcoin.

16.
A system, comprising:
a circuit board;
a processor disposed in a first location of the circuit board; and
an off-chip logic device operatively coupled to the processor, disposed in a second location of the circuit board, wherein the off-chip logic device comprises:
a first computational block comprising a message digest datapath, wherein the first computational block is to:
precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and
store a result of the first summation in a state register (Hi); and
a second computational block comprising a message scheduler datapath.

17. The system of claim 16, wherein the first computational block is further to:
add a content of a shifted register (Bi-2) to the first summation;
compute a complement of a content of a second shifted state register (Di-1);
compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and
store a result of the second summation in a state register (Ai).

18. The system of claim 16, wherein the first computational block is further to:
precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and
store a result of the second summation in a state register (Ai).

19. The system of claim 16, wherein the second computational block is to distribute a computation of a new message word across three cycles.

20. The system of claim 16, wherein the message scheduler datapath comprises nine logic gates.

21.
An apparatus comprising:a processor core; anda hardware accelerator coupled to the processor core, the hardware accelerator to mine digital currency, the hardware accelerator comprising:a first computational block comprising a message digest datapath, wherein the first computational block is to:precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and store a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath. 22. The apparatus of claim 21, wherein the first computational block is further to:compute a compliment of a content of a second shifted state register (Di-1);compute a second summation of the compliment, a content of a second state register (Ei), and a computed value; andstore a result of the second summation in a state register (Ai). 23. The apparatus of claim 22, wherein to precompute the first summation of the 32-bit message, the first computational block is further to add a content of a shifted register (Bi-2) to the first summation. 24. The apparatus of claim 21, wherein the first computational block is further to:precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; andstore a result of the second summation in a state register (Ai). 25. The apparatus of claim 21, wherein the second computational block is to distribute a computation of a new message word across three cycles.
BITCOIN MINING HARDWARE ACCELERATOR WITH OPTIMIZED MESSAGE DIGEST AND MESSAGE SCHEDULER DATAPATH
Technical Field
[0001] The present disclosure pertains to the field of processors and, in particular, to Bitcoin mining hardware accelerators.
Background
[0002] Digital currency is an internet-based medium of exchange. Digital currency may be based on exchange rates for physical currency (e.g., the United States Dollar). Various types of digital currency exist and may be used to buy physical goods and services from retailers that have agreed to accept the type of digital currency offered.
[0003] Bitcoin is the most popular type (e.g., unit) of digital currency used in the digital currency eco-system. The Bitcoin transactional system is peer-to-peer, meaning transactions take place between users directly, without an intermediary (e.g., without involving a bank). Peer-to-peer Bitcoin transactions may be verified by network nodes and recorded in a public distributed ledger called a blockchain, which uses Bitcoin as its unit of accounting.
[0004] As opposed to physical currency systems backed by natural resources (e.g., gold), Bitcoins may be created by using software and hardware systems to solve a series of mathematical algorithms (e.g., Secure Hash Algorithm 256 (SHA-256)). When the Bitcoin mining algorithms are solved in a way that satisfies certain predefined conditions, a new block is added to the blockchain and a certain number of Bitcoins are awarded to the miner, thereby introducing new Bitcoins into the eco-system. Bitcoin mining algorithms are inherently difficult to solve and thus require large amounts of processing power. Because of the large amount of power utilized, and the relatively high cost of that power, mining Bitcoins can be a very costly endeavor.
In some embodiments, the cost to mine a single Bitcoin may exceed the value of the mined Bitcoin.
Brief Description of the Drawings
[0005] Various embodiments of the present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific implementations, but are for explanation and understanding only.
[0006] FIG.1 is a block diagram illustrating a computing system that implements a hardware accelerator according to one embodiment.
[0007] FIG.2 is a block diagram illustrating a Bitcoin mining hardware accelerator according to one embodiment.
[0008] FIG.3 is a block diagram illustrating a SHA-256 message digest round datapath according to one embodiment.
[0009] FIG.4 is a block diagram illustrating a SHA-256 message digest datapath with WH-LookAhead according to one embodiment.
[0010] FIG.5 is a block diagram illustrating a SHA-256 message digest datapath with 1-cycle deferred 'A' according to one embodiment.
[0011] FIG.6 is a block diagram illustrating a SHA-256 message digest datapath with pre-addition of 'D' according to one embodiment.
[0012] FIG.7 is a block diagram illustrating a SHA-256 message digest datapath with 2-cycle deferred 'A' according to one embodiment.
[0013] FIG.8A is a block diagram illustrating a message scheduler datapath according to one embodiment.
[0014] FIG.8B is a block diagram illustrating a 3-cycle distributed message expansion according to one embodiment.
[0015] FIG.9 is a block diagram illustrating a 6-cycle distributed message expansion according to one embodiment.
[0016] FIG.10A is a block diagram illustrating a micro-architecture for a processor that implements Bitcoin mining operations according to one embodiment.
[0017] FIG.10B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline according to one
embodiment.
[0018] FIG.11 illustrates a block diagram of the micro-architecture for a processor that includes logic circuits to perform Bitcoin mining operations according to one embodiment.
[0019] FIG.12 is a block diagram of a computer system according to one embodiment.
[0020] FIG.13 is a block diagram of a computer system according to another embodiment.
[0021] FIG.14 is a block diagram of a system-on-a-chip according to one embodiment.
[0022] FIG.15 illustrates another implementation of a block diagram for a computing system according to one embodiment.
[0023] FIG.16 illustrates another implementation of a block diagram for a computing system according to one implementation.
Description of Embodiments
[0024] The Bitcoin system may solve a critical issue of "double spending" (e.g., using the same credit over and over again) in digital currency by using the concept of block chaining, where a public ledger captures the transactions that occur in the digital currency system. New Bitcoins are created during the mining process that validates transactions and adds new blocks to the blockchain. This process of validating transactions and computing new blocks of the chain is known as Bitcoin mining. Bitcoin mining relies upon using brute force to repeatedly solve a series of SHA-256 hashing functions and compare the result to a predefined threshold value. In one embodiment, if the output of the SHA-256 function is less than the threshold value, a new block is created and added to the blockchain. Because the software and hardware utilized in Bitcoin mining use brute force to repeatedly and endlessly perform SHA-256 functions, the process of Bitcoin mining can be very power-intensive and utilize large amounts of hardware space.
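For illustration, the brute-force hash-and-compare loop described above may be modeled in software as follows. The header bytes, target value, and function names below are illustrative placeholders and are not part of the claimed hardware:

```python
import hashlib
from typing import Optional

def double_sha256(data: bytes) -> bytes:
    # Bitcoin mining applies SHA-256 twice to the candidate block header.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int, max_nonce: int = 1 << 32) -> Optional[int]:
    # Brute-force search: append each candidate nonce and compare the
    # final hash against the predefined threshold (target) value.
    for nonce in range(max_nonce):
        candidate = header_prefix + nonce.to_bytes(4, "little")
        if int.from_bytes(double_sha256(candidate), "big") < target:
            return nonce
    return None  # space exhausted; a miner would then change the Merkle root

# Illustrative easy target requiring roughly 16 leading zero bits.
found = mine(b"\x00" * 76, 1 << 240)
```

A hardware accelerator performs the same search, but evaluates the SHA-256 stages in fully unrolled pipelines rather than in a software loop.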
The embodiments described herein optimize Bitcoin mining operations by reducing the space utilized and power consumed by Bitcoin mining hardware.
[0025] The Bitcoin mining operation may consist of two stages of Secure Hash Algorithm 256 (SHA-256) hashing to compress a 1024-bit message, followed by another round of SHA-256 hashing of the intermediate hash. The 1024-bit message contains a 32-bit nonce that may be incremented every cycle. A valid nonce may be found if the final hash is less than a predefined threshold value. This may be verified by checking if the final hash contains a predefined number of leading zeros. The challenge for miners is to search through the entire nonce space in a brute-force manner while minimizing energy consumption per hash and maximizing performance per watt.
[0026] The most expensive operation in mining may involve the computationally intensive task of finding the 32-bit nonce (e.g., a 32-bit (4-byte) field whose value is set so that the hash of the block will contain a run of zeros), which, when appended to the Merkle root (e.g., a hash of the transaction hashes in the blockchain), previous hash, and other headers, produces a 256-bit hash value that is less than a pre-defined threshold value. A typical SHA-256 datapath consists of two major computational blocks: a message digest and a message scheduler, with SHA-256 specific functions that combine multiple 32-bit words followed by 32-bit additions. The performance of the fully unrolled design is limited by these two datapaths. This hashing operation may be the largest recurring cost a miner incurs in the process of creating a Bitcoin, and therefore there is a strong motivation to reduce the energy consumption of this process.
[0027] The embodiments described herein may address computationally expensive Bitcoin mining limitations by describing a Bitcoin mining hardware accelerator with optimized SHA-256 message digest and message scheduler datapaths.
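The leading-zero check described in paragraph [0025] can be sketched as follows; the function names are ours, not the accelerator's interface:

```python
def leading_zero_bits(digest: bytes) -> int:
    # Number of leading zero bits in a 256-bit big-endian hash value.
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length()

def meets_difficulty(digest: bytes, zero_bits: int) -> bool:
    # A hash with at least `zero_bits` leading zeros is numerically
    # smaller than 2**(256 - zero_bits), the threshold value.
    return leading_zero_bits(digest) >= zero_bits
```

Checking leading zeros and comparing against a threshold integer are equivalent views of the same test, which is why hardware can implement it as a simple comparison.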
The message digest datapath optimizations may include (i) WH-LookAhead and (ii) Pre-addition of 'D', each resulting in a possible 18% improvement in the critical path of the new 'E' computation, and (iii) 2-cycle Deferred 'A', possibly resulting in a 31.5% improvement in the critical path of new 'A' computations. These optimizations may result in a 15% combinational area and 35% combinational power improvement in the message digest logic. The optimizations in the message scheduler datapath may include 3-cycle and 6-cycle distributed message expansion techniques, possibly resulting in 37% and 43% improvements in critical paths, respectively.
[0028] The operations described herein are described with respect to Application Specific Integrated Circuit (ASIC) implementations for convenience. In other embodiments, any other logic device may be used, including, but not limited to, processors, SoCs, and FPGA platforms. In one embodiment, SHA-256 implementations have pipeline boundaries exactly at the end of each round computation. Since Bitcoin mining utilizes the final hash value at the end of 120 rounds, the logic in SHA-256 rounds can be re-distributed across pipeline stages to reduce the critical path. Optimizing the critical paths in the computation-intensive message digest and scheduler datapaths may result in extra timing slack, which can be used to reduce switching capacitance or scale the supply voltage. Both of these optimizations may reduce the overall energy utilized per hash. Furthermore, it should be noted that although the operations and embodiments herein are described with respect to Bitcoin mining, they are generally applicable to all hashing functions (e.g., SHA-256).
[0029] FIG.1 is a block diagram illustrating a computing system that implements a Bitcoin mining hardware accelerator according to one embodiment. The computing system 100 is formed with a processor 110 that includes a memory interface 112.
The computing system 100 may be any device or combination of devices, but the description of various embodiments described herein is directed to processing devices and programmable logic devices.
[0030] System 100 includes a memory interface 112 and memory 130. In one embodiment, memory interface 112 may be a bus protocol for communication from processor 110 to memory 130. Memory 130 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or other memory device. Memory 130 stores instructions and/or data represented by data signals that are to be executed by the processor 110. The processor 110 is coupled to the memory 130 via a processor bus 120. A system logic chip, such as a memory controller hub (MCH), may be coupled to the processor bus 120 and memory 130. An MCH can provide a high bandwidth memory path to memory 130 for instruction and data storage and for storage of graphics commands, data and textures. The MCH can be used to direct data signals between the processor 110, memory 130, and other components in the system 100 and to bridge the data signals between processor bus 120, memory 130, and system I/O, for example. The MCH may be coupled to memory 130 through a memory interface (e.g., memory interface 112). In some embodiments, the system logic chip can provide a graphics port for coupling to a graphics controller through an Accelerated Graphics Port (AGP) interconnect. The system 100 may also include an I/O controller hub (ICH). The ICH can provide direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 130, chipset, and processor 110. Some examples are the audio controller, firmware hub (flash BIOS), wireless transceiver, data storage, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller.
The data storage device can include a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.[0031] System 100 is representative of processing systems based on the PENTIUM III™, PENTIUM 4™, Xeon™, Itanium, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, system 100 executes a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software. [0032] Embodiments described herein are not limited to computer systems. Alternative embodiments of the present disclosure can be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.[0033] Processor 110 may include one or more execution units. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 100 may be an example of a‘hub’ system architecture. The computer system 100 includes a processor 110 to process data signals. 
The processor 110, as one illustrative example, includes a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 110 is coupled to a processor bus 120 that transmits data signals between the processor 110 and other components in the system 100. Other elements of system 100 may include a graphics accelerator, memory controller hub, I/O controller hub, wireless transceiver, Flash BIOS, network controller, audio controller, serial expansion port, I/O controller, etc.
[0034] In one embodiment, the processor 110 includes a Level 1 (L1) internal cache memory. Depending on the architecture, the processor 110 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches depending on the particular implementation and needs.
[0035] For another embodiment of a system, a Bitcoin mining hardware accelerator may be included on a system on a chip (SoC). One embodiment of a SoC includes a processor and a memory. The memory of the SoC may be a flash memory. The flash memory can be located on the same die as the processor and other system components. Additionally, other logic blocks, such as a memory controller or graphics controller, can also be located on a SoC.
[0036] System 100 includes a logic device (LD) 101 operatively coupled to the processor 110. LD 101 may be a programmable logic device (PLD) or a non-programmable logic device. In one embodiment, LD 101 may be a field-programmable gate array (FPGA). In other embodiments, LD 101 may be an Application Specific Integrated Circuit (ASIC), a complex programmable logic device, generic array logic, a programmable logic array, or another type of LD.
In one embodiment, processor 110 and LD 101 may be included on a single circuit board, each in their respective locations.
[0037] LD 101 is an integrated circuit used to build reconfigurable and/or non-reconfigurable digital circuits. The LD 101 can be an electronic component used in connection with other components or other integrated circuits, such as processor 110. In general, PLDs can have undefined functions at the time of manufacturing and can be programmed or reconfigured before use. The LD 101 can be a combination of a logic device and a memory device. The memory of the LD 101 can store a pattern that was given to the integrated circuit during programming. Data can be stored in the integrated circuit using various technologies, such as antifuses, Static Random Access Memory (SRAM), EPROM cells, EEPROM cells, flash memory, or the like. The LD 101 can use any type of logic device technology.
[0038] In one embodiment, LD 101 includes hardware accelerator 111 to perform the optimized digital currency mining operations described herein. In one embodiment, hardware accelerator 111 is a Bitcoin mining hardware accelerator, described in further detail with respect to FIGS.2-16.
[0039] FIG.2 is a block diagram illustrating a Bitcoin mining hardware accelerator according to one embodiment. In one embodiment, the Bitcoin mining process starts with a 1024-bit message consisting of a 32-bit version 201, a 256-bit hash 202 from the previous block, a 256-bit Merkle root 203 of the transaction, a 32-bit time stamp 204, a 32-bit target value 205, a 32-bit nonce 206, and a 384-bit padding 207. The 1024-bit message is compressed using two stages of 64-round SHA-256 to generate a 256-bit hash 208. This is padded with a 256-bit constant 209 and is compressed again to obtain the final 256-bit hash 210.
[0040] The process of mining may involve identifying a nonce for a given header, which generates a final hash that is less than a pre-defined target value.
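The message layout of paragraph [0039] can be sketched in software. The field values below are illustrative placeholders, and only the 80-byte header portion is packed; SHA-256 padding extends it to the 1024-bit message:

```python
import struct

# Illustrative field values; in practice these come from the block being mined.
version = 2
prev_hash = b"\x11" * 32      # 256-bit hash of the previous block
merkle_root = b"\x22" * 32    # 256-bit Merkle root of the transactions
timestamp = 1700000000        # 32-bit time stamp
bits = 0x1D00FFFF             # 32-bit compact-encoded target value
nonce = 0                     # 32-bit nonce, incremented during mining

# Pack version, hashes, time stamp, target, and nonce as little-endian fields.
header = struct.pack("<I32s32sIII", version, prev_hash, merkle_root,
                     timestamp, bits, nonce)
```

The packed fields total 640 bits; the remaining 384 bits of the 1024-bit message are the padding 207.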
This may be achieved by looking for a minimum number of leading zeros that would ensure the hash to be smaller than the target. The target, and hence the leading zero requirement, may change depending on the rate of new block creation to maintain the rate at approximately one block every ten minutes. Decreasing the target may decrease the probability of finding a valid hash and hence increase the overall search space to generate a new block for the chain. In one embodiment, for a given header, the Bitcoin mining hardware accelerator traverses the search space of 2^32 options to potentially find a valid nonce. If no valid nonce is found, the Merkle root may be changed by choosing a different set of pending transactions and starting over with the nonce search. The three stages of hashing may be implemented as fully unrolled 64 rounds of SHA-256 message digest and parallel message expansion logic. The computation-intensive SHA-256 hashing may be the major contributor to the energy consumption in a Bitcoin mining accelerator.
[0041] FIG.3 is a block diagram illustrating a SHA-256 message digest round datapath 300 according to one embodiment. Each round in the single SHA-256 message digest may combine eight 32-bit words, known as states Ai through Hi (301-308), along with a 32-bit message Wi 310 and a 32-bit round constant Ki 309 to generate two new 32-bit states Ai+1 (311) and Ei+1 (312). The new states Bi+1 through Di+1 may be equal to Ai through Ci, and Fi+1 through Hi+1 may be equal to Ei through Gi. The critical paths for Ai+1 and Ei+1 may be identical and may include four Carry Save Adders (CSA) followed by a Completion Adder (CA). This may equate to approximately 19 logic gate levels, as shown in FIG.3.
[0042] FIG.4 is a block diagram illustrating a SHA-256 message digest datapath with WH-LookAhead according to one embodiment.
In one embodiment:
T1i = Σ1(Ei) + Ch(Ei, Fi, Gi) + Hi + Ki + Wi
T2i = Σ0(Ai) + Maj(Ai, Bi, Ci)
Ei+1 = Di + T1i
Ai+1 = T1i + T2i
[0043] In one embodiment, the sum of (Hi + Ki + Wi) may be pre-computed to reduce addition in the critical path. H may be a shifted version of G (e.g., Hi = Gi-1). Therefore, with WH-LookAhead, in one embodiment:
T1i = Σ1(Ei) + Ch(Ei, Fi, Gi) + H'i, where H'i = Gi-1 + Ki + Wi
T2i = Σ0(Ai) + Maj(Ai, Bi, Ci)
Ei+1 = Di + T1i
Ai+1 = T1i + T2i
[0044] The precomputed (Hi + Ki + Wi) may be stored in the 32-bit register 402 dedicated for Hi. This optimization reduces the critical path for the computation of Ei+1 by one CSA, or approximately three logic gates. A similar addition for the next round may be performed in parallel using an additional adder to add (Gi + Ki + Wi), as shown in FIG.4.
[0045] FIG.5 is a block diagram illustrating a SHA-256 message digest datapath with 1-cycle deferred 'A' according to one embodiment. In one embodiment, for the computation of Ai+1:
T1i = Σ1(Ei) + Ch(Ei, Fi, Gi) + H'i
T2i = Σ0(Ai) + Maj(Ai, Bi, Ci)
Ei+1 = Di + T1i
Ai+1 = T1i + T2i = T2i - Di + Ei+1
[0046] The computation of Ai+1 may depend on T1i, which may also be used in the computation of Ei+1. If the computation of Ai+1 is deferred by one cycle, the computation of T1i can be removed from the critical path, as shown:
T1i = Σ1(Ei) + Ch(Ei, Fi, Gi) + H'i
T2i-1 = Σ0(Ai-1) + Maj(Ai-1, Bi-1, Ci-1)
Ei+1 = Ci-1 + T1i
Ai = T2i-1 - Di-1 + Ei
[0047] The subtraction of 'D' may be achieved by adding the complement (~D) 502 and then setting the carry-in of the completion adder to 1'b1. Deferring the computation of 'A' by one cycle may reduce the critical path for the computation of Ai (or Ai+1) by one CSA. The critical path for computation of a new state A or E may be reduced to 16 logic gates, a 15% reduction in the critical path compared to an alternative embodiment, as shown in FIG.5. Deferring the computation may increase the overall SHA-256 latency by one cycle.
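The WH-LookAhead transformation above can be checked in software. The sketch below models only the round recurrence, folds each round's (Ki + Wi) into a single input value, and uses our own function names; it is not a model of the hardware datapath:

```python
# SHA-256 bit functions (FIPS 180-4 definitions), 32-bit arithmetic.
M = 0xFFFFFFFF
def rotr(x, n): return ((x >> n) | (x << (32 - n))) & M
def S0(x): return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)   # Σ0
def S1(x): return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)   # Σ1
def ch(e, f, g): return (e & f) ^ (~e & g)
def maj(a, b, c): return (a & b) ^ (a & c) ^ (b & c)

def rounds_standard(state, kw):
    # kw[i] holds (Ki + Wi) mod 2^32; one round per entry.
    a, b, c, d, e, f, g, h = state
    for kwi in kw:
        t1 = (S1(e) + ch(e, f, g) + h + kwi) & M
        t2 = (S0(a) + maj(a, b, c)) & M
        h, g, f, e, d, c, b, a = g, f, e, (d + t1) & M, c, b, a, (t1 + t2) & M
    return (a, b, c, d, e, f, g, h)

def rounds_lookahead(state, kw):
    # WH-LookAhead: since Hi = Gi-1, (Hi + Ki + Wi) is formed one round early.
    a, b, c, d, e, f, g, h = state
    hp = (h + kw[0]) & M                       # initial precomputed H'
    for i, kwi in enumerate(kw):
        t1 = (S1(e) + ch(e, f, g) + hp) & M    # no Ki + Wi add on this path
        t2 = (S0(a) + maj(a, b, c)) & M
        if i + 1 < len(kw):
            hp = (g + kw[i + 1]) & M           # next round's H' from current G
        h, g, f, e, d, c, b, a = g, f, e, (d + t1) & M, c, b, a, (t1 + t2) & M
    return (a, b, c, d, e, f, g, h)
```

Running both variants from the same starting state yields identical final states, confirming that precomputing H' does not change the hash.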
This may result in a negligible 0.8% increase in latency in a fully unrolled 120-round design with no impact on the Bitcoin mining throughput.
[0048] FIG.6 is a block diagram illustrating a SHA-256 message digest datapath with pre-addition of 'D' according to one embodiment. In the computation of Ei+1, the state Di (or Ci-1) may be added to the intermediate term T1i, as shown:
T1i = Σ1(Ei) + Ch(Ei, Fi, Gi) + H'i
Ei+1 = Ci-1 + T1i
T2i-1 = Σ0(Ai-1) + Maj(Ai-1, Bi-1, Ci-1)
Ai = T2i-1 - Di-1 + Ei
[0049] Since Ci-1 is equal to Bi-2, the addition of state 'D' can be moved to the precomputation stage of H'i, as shown:
Ei+1 = Σ1(Ei) + Ch(Ei, Fi, Gi) + H''i, where H''i = Gi-1 + Ki + Wi + Bi-2
T2i-1 = Σ0(Ai-1) + Maj(Ai-1, Bi-1, Ci-1)
Ai = T2i-1 - Di-1 + Ei
[0050] This may remove a CSA from the critical path of the Ei+1 computation. The pre-addition of state 'D' may reduce the critical path of Ei+1 to 13 logic gates (e.g., a possible 31% improvement compared to an alternative design), as shown in FIG.6.
[0051] FIG.7 is a block diagram illustrating a SHA-256 message digest datapath with 2-cycle deferred 'A' according to one embodiment. The computation of Ai from the previous optimizations may make use of the addition of Ei and subtraction of Di-1, as shown:
Ei+1 = Σ1(Ei) + Ch(Ei, Fi, Gi) + H''i
T2i-1 = Σ0(Ai-1) + Maj(Ai-1, Bi-1, Ci-1)
Ai = T2i-1 - Di-1 + Ei
[0052] If the computation of Ai is deferred by an additional cycle, Ei - Di-1 may be precomputed to remove a CSA in the critical path of the new 'A' computation:
Ei+1 = Σ1(Ei) + Ch(Ei, Fi, Gi) + H''i, where H''i = Gi-1 + Ki + Wi + Ai-3
T2i-2 = Σ0(Ai-2) + Maj(Ai-2, Bi-2, Ci-2)
Ai = T2i-2 + D'i-2, where D'i-2 = Ci-3 + Fi-1
[0053] The 2-cycle delay in the 'A' computation may result in an overall critical path of 13 logic gates for the computation of new states 'A' and 'E', as shown in FIG.7.
[0054] FIG.8A is a block diagram illustrating a message scheduler datapath according to one embodiment.
In one embodiment, the 512-bit message input to SHA-256 is consumed by the message digest logic across the first 16 rounds in the form of 32-bit words. For the remaining 48 rounds, the message scheduler logic may combine the input message to generate a new 32-bit message word each round. In one embodiment, the datapath for a single round of message expansion logic is shown in FIG.8A. The critical path in the message expansion datapath may include a sigma-function 802, two CSAs 804 and 806, and a CA 808. This results in a critical path of 16 logic gates, as shown in FIG.8A.
[0055] FIG.8B is a block diagram illustrating a 3-cycle distributed message expansion according to one embodiment. In one embodiment, the new 32-bit message word generated in each round (or cycle) is not consumed by the message digest logic for the subsequent 15 rounds. As a result, the computation of a new message word may be distributed across multiple rounds (or cycles) to reduce the critical path. The 3-cycle distributed message expansion datapath is shown in FIG.8B. Each of the three additions in the message expansion logic is distributed across three rounds, thereby limiting the critical path of each round to a maximum of one sigma-function and a CA. This may be equivalent to 10 logic gates, a possible 37% improvement in the critical path compared to alternative implementations. Since the computation of Wt utilizes σ1(Wt-2), the completion of each new message computation may be delayed by three cycles.
[0056] FIG.9 is a block diagram illustrating a 6-cycle distributed message expansion according to one embodiment. The critical path in the 3-cycle distributed message expansion may include the completion adder. In one embodiment, the 32-bit complete addition can be distributed across two rounds to obtain a 6-cycle distributed message expansion datapath, as shown in FIG.9. The 32-bit addition in each round may be replaced by a 16-bit addition, reducing the critical path by at least one logic gate.
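The message expansion recurrence and its 3-cycle split can be sketched as follows. This models only the arithmetic distribution, not the pipeline registers; function names are ours:

```python
M = 0xFFFFFFFF
def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & M

def sigma0(x):  # σ0: ROTR7 ^ ROTR18 ^ SHR3
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

def sigma1(x):  # σ1: ROTR17 ^ ROTR19 ^ SHR10
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def expand_direct(first16):
    # Single-round expansion: Wt = σ1(Wt-2) + Wt-7 + σ0(Wt-15) + Wt-16.
    w = list(first16)
    for t in range(16, 64):
        w.append((sigma1(w[t - 2]) + w[t - 7] + sigma0(w[t - 15]) + w[t - 16]) & M)
    return w

def expand_distributed(first16):
    # The same sum accumulated in three steps, mirroring the 3-cycle
    # distributed datapath: the σ1 operand is the last to become
    # available, so each word completes three cycles after it starts.
    w = list(first16)
    for t in range(16, 64):
        acc = (w[t - 16] + sigma0(w[t - 15])) & M  # step 1: earliest operands
        acc = (acc + w[t - 7]) & M                 # step 2
        acc = (acc + sigma1(w[t - 2])) & M         # step 3: latest operand
        w.append(acc)
    return w
```

Both functions produce the same 64-word schedule; the distributed form simply splits each word's additions so no single round performs more than one addition after a sigma-function.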
The 6-cycle distributed message expansion may have a critical path of 9 logic gates, resulting in ~44% improvement in critical path compared to an alternative embodiment.[0057] Since the message digest datapath may limit the ability to increase operating frequency, the extra timing slack in the message scheduler logic can be converted to energy reduction by operating the 120 rounds of message expansion logic at a scaled voltage.[0058] FIG.10A is a block diagram illustrating a micro-architecture for a processor 1000 that implements Bitcoin mining hardware accelerator operations, according to one embodiment. Specifically, processor 1000 depicts an in-order architecture core and a register renaming logic, out-of-order issue/execution logic to be included in a processor according to at least one embodiment of the disclosure. The embodiments of the Bitcoin mining hardware accelerator operations described herein can be implemented in processor 1000.[0059] Processor 1000 includes a front end unit 1030 coupled to an execution engine unit 1050, and both are coupled to a memory unit 1070. The processor 1000 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, processor 1000 may include a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like. In one embodiment, processor 1000 may be a multi-core processor or may be part of a multi- processor system.[0060] The front end unit 1030 includes a branch prediction unit 1032 coupled to an instruction cache unit 1034, which is coupled to an instruction translation lookaside buffer (TLB) 1036, which is coupled to an instruction fetch unit 1038, which is coupled to a decode unit 1040. 
The decode unit 1040 (also known as a decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decoder 1040 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. The instruction cache unit 1034 is further coupled to the memory unit 1070. The decode unit 1040 is coupled to a rename/allocator unit 1052 in the execution engine unit 1050.
[0061] The execution engine unit 1050 includes the rename/allocator unit 1052 coupled to a retirement unit 1054 and a set of one or more scheduler unit(s) 1056. The scheduler unit(s) 1056 represents any number of different schedulers, including reservation stations (RS), central instruction window, etc. The scheduler unit(s) 1056 is coupled to the physical register file(s) unit(s) 1058. Each of the physical register file(s) units 1058 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc.
The physical register file(s) unit(s) 1058 is overlapped by the retirement unit 1054 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.).
[0062] Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective. The registers are not limited to any known particular type of circuit. Various types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. The retirement unit 1054 and the physical register file(s) unit(s) 1058 are coupled to the execution cluster(s) 1060. The execution cluster(s) 1060 includes a set of one or more execution units 1062 and a set of one or more memory access units 1064. The execution units 1062 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and operate on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).
[0063] While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s) 1056, physical register file(s) unit(s) 1058, and execution cluster(s) 1060 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1064). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.
[0064] The set of memory access units 1064 is coupled to the memory unit 1070, which may include a data prefetcher 1080, a data TLB unit 1072, a data cache unit (DCU) 1074, and a level 2 (L2) cache unit 1076, to name a few examples. In some embodiments, DCU 1074 is also known as a first level data cache (L1 cache). The DCU 1074 may handle multiple outstanding cache misses and continue to service incoming stores and loads. It also supports maintaining cache coherency. The data TLB unit 1072 is a cache used to improve virtual address translation speed by mapping virtual and physical address spaces. In one exemplary embodiment, the memory access units 1064 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1072 in the memory unit 1070.
The L2 cache unit 1076 may be coupled to one or more other levels of cache and eventually to a main memory.

[0065] In one embodiment, the data prefetcher 1080 speculatively loads/prefetches data to the DCU 1074 by automatically predicting which data a program is about to consume. Prefetching may refer to transferring data stored in one memory location (e.g., position) of a memory hierarchy (e.g., lower level caches or memory) to a higher-level memory location that is closer (e.g., yields lower access latency) to the processor before the data is actually demanded by the processor. More specifically, prefetching may refer to the early retrieval of data from one of the lower level caches/memory to a data cache and/or prefetch buffer before the processor issues a demand for the specific data being returned.

[0066] The processor 1000 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA).

[0067] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

[0068] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture.
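The predictive behavior of the data prefetcher 1080 in [0065] can be sketched, purely as an illustration, by a stride detector: once two consecutive demand accesses show the same stride, the next address is pulled in ahead of the demand. The single-stream, one-ahead design here is our simplifying assumption, not a description of the actual prefetcher:

```python
# Minimal stride-based prefetcher sketch: predicts the next address after
# observing a repeated stride between demand accesses.
class StridePrefetcher:
    def __init__(self):
        self.last_addr = None
        self.stride = None
        self.prefetched = set()   # addresses fetched before being demanded

    def on_demand_access(self, addr):
        if self.last_addr is not None:
            new_stride = addr - self.last_addr
            if new_stride == self.stride:
                # Stride confirmed twice: confidently prefetch one ahead.
                self.prefetched.add(addr + self.stride)
            self.stride = new_stride
        self.last_addr = addr

cache = StridePrefetcher()
for a in (100, 164, 228, 292):        # constant stride of 64
    hit = a in cache.prefetched       # was it prefetched before the demand?
    cache.on_demand_access(a)
```

By the fourth access the stream is covered: address 292 was prefetched when 228 was demanded, so the demand finds the data already resident.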
While the illustrated embodiment of the processor also includes separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

[0069] FIG. 10B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented by processor 1000 of FIG. 10A according to some embodiments of the disclosure. The solid lined boxes in FIG. 10B illustrate an in-order pipeline, while the solid lined boxes in combination with the dashed lined boxes illustrate a register renaming, out-of-order issue/execution pipeline. In FIG. 10B, a processor pipeline 1001 includes a fetch stage 1002, a length decode stage 1004, a decode stage 1006, an allocation stage 1008, a renaming stage 1010, a scheduling (also known as a dispatch or issue) stage 1012, a register read/memory read stage 1014, an execute stage 1016, a write back/memory write stage 1018, an exception handling stage 1022, and a commit stage 1024. In some embodiments, the ordering of stages 1002-1024 may be different than illustrated and is not limited to the specific ordering shown in FIG. 10B.

[0070] FIG. 11 illustrates a block diagram of the micro-architecture for a processor 1100 that includes logic circuits to perform Bitcoin mining hardware accelerator operations, according to one embodiment.
In some embodiments, Bitcoin mining hardware accelerator operation instructions in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment, the in-order front end 1101 is the part of the processor 1100 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. The embodiments of the Bitcoin mining hardware accelerator operations disclosed herein can be implemented in processor 1100.

[0071] The front end 1101 may include several units. In one embodiment, the instruction prefetcher 1126 fetches instructions from memory and feeds them to an instruction decoder 1128, which in turn decodes or interprets them. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called “micro-instructions” or “micro-operations” (also called micro-ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 1130 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 1134 for execution. When the trace cache 1130 encounters a complex instruction, the microcode ROM 1132 provides the uops needed to complete the operation.

[0072] Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 1128 accesses the microcode ROM 1132 to do the instruction. For one embodiment, an instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 1128.
In another embodiment, an instruction can be stored within the microcode ROM 1132 should a number of micro-ops be needed to accomplish the operation. The trace cache 1130 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the micro-code ROM 1132. After the microcode ROM 1132 finishes sequencing micro-ops for an instruction, the front end 1101 of the machine resumes fetching micro-ops from the trace cache 1130.

[0073] The out-of-order execution engine 1103 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 1102, slow/general floating point scheduler 1104, and simple floating point scheduler 1106. The uop schedulers 1102, 1104, 1106 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 1102 of one embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle.
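The readiness rule of [0073] can be illustrated, informally, in a few lines: a uop may issue only once all of its source registers have been produced and a dispatch port is free. The `Uop` structure, register names, and two-port limit below are illustrative assumptions:

```python
# Sketch of dependency-driven uop scheduling: a uop issues when every source
# register is ready and an execution port is available this "cycle".
class Uop:
    def __init__(self, name, srcs, dst):
        self.name, self.srcs, self.dst = name, srcs, dst

def schedule(uops, free_ports):
    ready_regs = {"r1", "r2"}          # registers already available
    order = []
    pending = list(uops)
    while pending:
        issued = [u for u in pending
                  if set(u.srcs) <= ready_regs][:free_ports]
        if not issued:
            raise RuntimeError("unsatisfiable dependency")
        for u in issued:               # one scheduling "cycle"
            order.append(u.name)
            ready_regs.add(u.dst)      # result becomes available
            pending.remove(u)
    return order

uops = [Uop("add", ["r1", "r2"], "r3"),
        Uop("mul", ["r3"], "r4"),      # depends on add's result
        Uop("ld",  ["r1"], "r5")]      # independent of add and mul
print(schedule(uops, free_ports=2))    # add and ld issue first, then mul
```

The independent `ld` issues alongside `add` in the first cycle, while `mul` must wait for `r3`, mirroring how the hardware schedulers track operand readiness rather than program order.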
The schedulers arbitrate for the dispatch ports to schedule uops for execution.

[0074] Register files 1108, 1110 sit between the schedulers 1102, 1104, 1106 and the execution units 1112, 1114, 1116, 1118, 1120, 1122, 1124 in the execution block 1111. There is a separate register file 1108, 1110 for integer and floating point operations, respectively. Each register file 1108, 1110 of one embodiment also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register file to new dependent uops. The integer register file 1108 and the floating point register file 1110 are also capable of communicating data with each other. For one embodiment, the integer register file 1108 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 1110 of one embodiment has 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.

[0075] The execution block 1111 contains the execution units 1112, 1114, 1116, 1118, 1120, 1122, 1124, where the instructions are actually executed. This section includes the register files 1108, 1110 that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 1100 of one embodiment includes a number of execution units: address generation unit (AGU) 1112, AGU 1114, fast ALU 1116, fast ALU 1118, slow ALU 1120, floating point ALU 1122, and floating point move unit 1124. For one embodiment, the floating point execution blocks 1122, 1124 execute floating point, MMX, SIMD, SSE, or other operations. The floating point ALU 1122 of one embodiment includes a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops.
For embodiments of the present disclosure, instructions involving a floating point value may be handled with the floating point hardware.

[0076] In one embodiment, the ALU operations go to the high-speed ALU execution units 1116, 1118. The fast ALUs 1116, 1118 of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 1120, as the slow ALU 1120 includes integer execution hardware for long-latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 1112, 1114. For one embodiment, the integer ALUs 1116, 1118, 1120 are described in the context of performing integer operations on 64-bit data operands. In alternative embodiments, the ALUs 1116, 1118, 1120 can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 1122, 1124 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 1122, 1124 can operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.

[0077] In one embodiment, the uop schedulers 1102, 1104, 1106 dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 1100, the processor 1100 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed, and the independent ones are allowed to complete.
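The replay idea in [0077] can be reduced to a toy scenario, offered only as an illustration: a dependent add is dispatched speculatively assuming its parent load hits the cache, and on a miss only that dependent is re-executed with the correct data. The function, values, and 32-bit masking here are illustrative assumptions:

```python
# Sketch of load-hit speculation with replay: the dependent add first runs
# on whatever value the load forwarded; on a cache miss, the add alone is
# replayed with the correct data once it arrives.
def run_load_add(cache_hit, forwarded_value, correct_value):
    # Speculative pass: consume the forwarded (possibly stale) value.
    add_result = (forwarded_value + 1) & 0xFFFFFFFF
    replayed = False
    if not cache_hit:
        # Replay pass: re-execute only the dependent add.
        add_result = (correct_value + 1) & 0xFFFFFFFF
        replayed = True
    return add_result, replayed

# Hit: the speculative result stands and nothing is replayed.
assert run_load_add(True, forwarded_value=41, correct_value=41) == (42, False)
# Miss: the stale forwarded value (here 0) is discarded and the add replays.
assert run_load_add(False, forwarded_value=0, correct_value=41) == (42, True)
```

Independent uops never enter this path at all, which is why only the dependent chain pays the replay cost.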
The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instruction sequences for text string comparison operations.

[0078] The processor 1100 also includes logic to implement Bitcoin mining hardware accelerator operations according to one embodiment. In one embodiment, the execution block 1111 of processor 1100 may include a microcontroller (MCU) to perform Bitcoin mining operations according to the description herein.

[0079] The term “registers” may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data.

[0080] For the discussions herein, the registers are understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as ‘mm’ registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions.
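As a pure-Python illustration of the packed-data idea in [0080] (not a real intrinsic or the claimed hardware), one 64-bit MMX-style register can be modeled as eight independent 8-bit lanes; a packed-byte add then operates on all lanes at once, each wrapping modulo 256:

```python
# Model of a packed unsigned-byte add over a 64-bit register value:
# eight 8-bit lanes added lane-by-lane, with per-lane wraparound and no
# carry propagation between lanes.
def packed_add_u8(a, b):
    out = 0
    for lane in range(8):
        shift = 8 * lane
        la = (a >> shift) & 0xFF
        lb = (b >> shift) & 0xFF
        out |= ((la + lb) & 0xFF) << shift   # per-lane modulo-256 add
    return out

x = 0x0102030405060708
y = 0x1010101010101010
print(hex(packed_add_u8(x, y)))   # each byte increased by 0x10
```

The key property is that an overflow in one lane (e.g., 0xFF + 0x01 wrapping to 0x00) never carries into its neighbor, which is exactly what distinguishes a packed add from a scalar 64-bit add.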
Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as “SSEx”) technology can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point data are either contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.

[0081] Embodiments may be implemented in many different system types. Referring now to FIG. 12, shown is a block diagram of a multiprocessor system 1200 in accordance with an implementation. As shown in FIG. 12, multiprocessor system 1200 is a point-to-point interconnect system, and includes a first processor 1270 and a second processor 1280 coupled via a point-to-point interconnect 1250. As shown in FIG. 12, each of processors 1270 and 1280 may be multicore processors, including first and second processor cores, although potentially many more cores may be present in the processors. The processors each may include hybrid write mode logics in accordance with an embodiment of the present disclosure. Bitcoin mining hardware accelerator operations discussed herein can be implemented in the processor 1270, processor 1280, or both.

[0082] While shown with two processors 1270, 1280, it is to be understood that the scope of the present disclosure is not so limited. In other implementations, one or more additional processors may be present in a given processor.

[0083] Processors 1270 and 1280 are shown including integrated memory controller units 1272 and 1282, respectively. Processor 1270 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1276 and 1278; similarly, second processor 1280 includes P-P interfaces 1286 and 1288.
Processors 1270, 1280 may exchange information via a point-to-point (P-P) interface 1250 using P-P interface circuits 1278, 1288. As shown in FIG. 12, IMCs 1272 and 1282 couple the processors to respective memories, namely a memory 1232 and a memory 1234, which may be portions of main memory locally attached to the respective processors.

[0084] Processors 1270, 1280 may each exchange information with a chipset 1290 via individual P-P interfaces 1252, 1254 using point-to-point interface circuits 1276, 1294, 1286, 1298. Chipset 1290 may also exchange information with a high-performance graphics circuit 1238 via a high-performance graphics interface 1239.

[0085] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[0086] Chipset 1290 may be coupled to a first bus 1216 via an interface 1292. In one embodiment, first bus 1216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

[0087] As shown in FIG. 12, various I/O devices 1214 may be coupled to first bus 1216, along with a bus bridge 1218 which couples first bus 1216 to a second bus 1220. In one embodiment, second bus 1220 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 1220 including, for example, a keyboard and/or mouse 1222, communication devices 1227, and a storage unit 1228 such as a disk drive or other mass storage device which may include instructions/code and data 1230, in one embodiment. Further, an audio I/O 1224 may be coupled to second bus 1220. Note that other architectures are possible.
For example, instead of the point-to-point architecture of FIG. 12, a system may implement a multi-drop bus or other such architecture.

[0088] Referring now to FIG. 13, shown is a block diagram of a third system 1300 in accordance with an embodiment of the present disclosure. Like elements in FIGS. 12 and 13 bear like reference numerals, and certain aspects of FIG. 12 have been omitted from FIG. 13 in order to avoid obscuring other aspects of FIG. 13.

[0089] FIG. 13 illustrates that the processors 1370, 1380 may include integrated memory and I/O control logic (“CL”) 1372 and 1382, respectively. For at least one embodiment, the CL 1372, 1382 may include integrated memory controller units such as described herein. In addition, CL 1372, 1382 may also include I/O control logic. FIG. 13 illustrates that the memories 1332, 1334 are coupled to the CL 1372, 1382, and that I/O devices 1314 are also coupled to the control logic 1372, 1382. Legacy I/O devices 1315 are coupled to the chipset 1390. Operations discussed herein can be implemented in the processor 1370, processor 1380, or both.

[0090] FIG. 14 is an exemplary system on a chip (SoC) 1400 that may include one or more of the cores 1402. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

[0091] FIG. 14 is a block diagram of a SoC 1400 in accordance with an embodiment of the present disclosure. Dashed lined boxes are features on more advanced SoCs.
In FIG. 14, an interconnect unit(s) 1402 is coupled to: an application processor 1417 which includes a set of one or more cores 1402A-N, cache unit(s) 1404A-N, and shared cache unit(s) 1406; a system agent unit 1410; a bus controller unit(s) 1416; an integrated memory controller unit(s) 1414; a set of one or more media processors 1420 which may include integrated graphics logic 1408, an image processor 1424 for providing still and/or video camera functionality, an audio processor 1426 for providing hardware audio acceleration, and a video processor 1428 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1430; a direct memory access (DMA) unit 1432; and a display unit 1440 for coupling to one or more external displays. Bitcoin mining hardware accelerator operations discussed herein can be implemented by SoC 1400.

[0092] Turning next to FIG. 15, an embodiment of a system on-chip (SoC) design in accordance with embodiments of the disclosure is depicted. As an illustrative example, SoC 1500 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. A UE may connect to a base station or node, which can correspond in nature to a mobile station (MS) in a GSM network. Bitcoin mining hardware accelerator operations discussed herein can be implemented by SoC 1500.

[0093] Here, SoC 1500 includes two cores, 1506 and 1507. Similar to the discussion above, cores 1506 and 1507 may conform to an Instruction Set Architecture, such as a processor having the Intel® Architecture Core™, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters.
Cores 1506 and 1507 are coupled to cache control 1508 that is associated with bus interface unit 1509 and L2 cache 1510 to communicate with other parts of system 1500. Interconnect 1511 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnects discussed above, which can implement one or more aspects of the described disclosure.

[0094] Interconnect 1511 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1530 to interface with a SIM card, a boot ROM 1535 to hold boot code for execution by cores 1506 and 1507 to initialize and boot SoC 1500, a SDRAM controller 1540 to interface with external memory (e.g., DRAM 1560), a flash controller 1545 to interface with non-volatile memory (e.g., Flash 1565), a peripheral control 1550 (e.g., Serial Peripheral Interface) to interface with peripherals, power control 1555 to control power, video codecs 1520 and a video interface 1525 to display and receive input (e.g., touch enabled input), a GPU 1515 to perform graphics related computations, etc. Any of these interfaces may incorporate aspects of the embodiments described herein.

[0095] In addition, the system illustrates peripherals for communication, such as a Bluetooth module 1570, 3G modem 1575, GPS 1580, and Wi-Fi 1585. Note, as stated above, a UE includes a radio for communication. As a result, these peripheral communication modules may not all be included. However, in a UE some form of a radio for external communication should be included.

[0096] FIG. 16 illustrates a diagrammatic representation of a machine in the example form of a computing system 1600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet.
The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The embodiments described herein can be implemented in computing system 1600.

[0097] The computing system 1600 includes a processing device 1602, main memory 1604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1618, which communicate with each other via a bus 1630.

[0098] Processing device 1602 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computer (RISC) microprocessor, very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets.
Processing device 1602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. In one embodiment, processing device 1602 may include one or more processor cores. The processing device 1602 is configured to execute the processing logic 1626 for performing the Bitcoin mining hardware accelerator operations discussed herein. In one embodiment, processing device 1602 can be part of a computing system. Alternatively, the computing system 1600 can include other components as described herein. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

[0099] The computing system 1600 may further include a network interface device 1622 communicably coupled to a network 1620. The computing system 1600 also may include a video display unit 1608 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1610 (e.g., a keyboard), a cursor control device 1614 (e.g., a mouse), a signal generation device 1616 (e.g., a speaker), or other peripheral devices. Furthermore, computing system 1600 may include a graphics processing unit 1622, a video processing unit 1628, and an audio processing unit 1632.
In another embodiment, the computing system 1600 may include a chipset (not illustrated), which refers to a group of integrated circuits, or chips, that are designed to work with the processing device 1602 and control communications between the processing device 1602 and external devices. For example, the chipset may be a set of chips on a motherboard that links the processing device 1602 to very high-speed devices, such as main memory 1604 and graphic controllers, as well as linking the processing device 1602 to lower-speed peripheral buses of peripherals, such as USB, PCI or ISA buses.

[00100] The data storage device 1618 may include a computer-readable storage medium 1624 on which is stored software 1626 embodying any one or more of the methodologies of functions described herein. The software 1626 may also reside, completely or at least partially, within the main memory 1604 as instructions 1626 and/or within the processing device 1602 as processing logic 1626 during execution thereof by the computing system 1600; the main memory 1604 and the processing device 1602 also constituting computer-readable storage media.

[00101] The computer-readable storage medium 1624 may also be used to store instructions 1626 utilizing the processing device 1602 and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1624 is shown in an example embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present embodiments.
The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.

[00102] The following examples pertain to further embodiments.

[00103] Example 1 is a System on Chip (SoC) comprising: a processor core; and a hardware accelerator coupled to the processor core, the hardware accelerator to mine digital currency, the hardware accelerator comprising: a first computational block comprising a message digest datapath, wherein the first computational block is to: precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and store a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath.

[00104] Example 2, the subject matter of Example 1, wherein the first computational block is further to: compute a complement of a content of a second shifted state register (Di-1); compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and store a result of the second summation in a state register (Ai).

[00105] Example 3, the subject matter of Example 2, wherein to precompute the first summation of the 32-bit message, the first computational block is further to add a content of a shifted register (Bi-2) to the first summation.

[00106] Example 4, the subject matter of Example 1, wherein the first computational block is further to: precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and store a result of the second summation in a state register (Ai).

[00107] Example 5, the subject matter of Example 1, wherein the second computational block is to distribute a computation of a new message word across three cycles.

[00108] Example 6, the subject matter of Example 1, wherein the second computational block
is to distribute a computation of a new message word across six cycles.

[00109] Example 7, the subject matter of Example 6, wherein the message scheduler datapath comprises nine logic gates.

[00110] Example 8, the subject matter of Example 1, wherein the digital currency is a Bitcoin.

[00111] Example 9 is a logic device to mine digital currency, comprising: a first computational block comprising a message digest datapath, wherein the first computational block is to: precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and store a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath.

[00112] Example 10, the subject matter of Example 9, wherein the first computational block is further to: compute a complement of a content of a second shifted state register (Di-1); compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and store a result of the second summation in a state register (Ai).

[00113] Example 11, the subject matter of Example 10, wherein to precompute the first summation of the 32-bit message, the first computational block is further to add a content of a shifted register (Bi-2) to the first summation.

[00114] Example 12, the subject matter of Example 9, wherein the first computational block is further to: precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and store a result of the second summation in a state register (Ai).

[00115] Example 13, the subject matter of Example 9, wherein the second computational block is to distribute a computation of a new message word across three cycles.

[00116] Example 14, the subject matter of Example 9, wherein the second computational block is to distribute a computation of a new message word across six cycles.

[00117]
Example 15, the subject matter of Example 9, wherein the digital currency is a Bitcoin.
[00118] Example 16 is a system, comprising: a circuit board; a processor disposed in a first location of the circuit board; an off-chip logic device operatively coupled to the processor, disposed in a second location of the circuit board, wherein the off-chip logic device comprises: a first computational block comprising a message digest datapath, wherein the first computational block is to: precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and store a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath.
[00119] Example 17, the subject matter of Example 16, wherein the first computational block is further to: add a content of a shifted register (Bi-2) to the first summation; compute a complement of a content of a second shifted state register (Di-1); compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and store a result of the second summation in a state register (Ai).
[00120] Example 18, the subject matter of Example 16, wherein the first computational block is further to: precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and store a result of the second summation in a state register (Ai).
[00121] Example 19, the subject matter of Example 16, wherein the second computational block is to distribute a computation of a new message word across three cycles.
[00122] Example 20, the subject matter of Example 16, wherein the message scheduler datapath comprises nine logic gates.
[00123] Example 21 is an apparatus comprising: a processor core; and a hardware accelerator coupled to the processor core, the hardware accelerator to mine digital currency, the hardware accelerator comprising: a first computational block comprising a message digest datapath, wherein the first computational block is to: precompute a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and store a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath.
[00124] Example 22, the subject matter of Example 21, wherein the first computational block is further to: compute a complement of a content of a second shifted state register (Di-1); compute a second summation of the complement, a content of a second state register (Ei), and a computed value; and store a result of the second summation in a state register (Ai).
[00125] Example 23, the subject matter of Example 22, wherein to precompute the first summation of the 32-bit message, the first computational block is further to add a content of a shifted register (Bi-2) to the first summation.
[00126] Example 24, the subject matter of Example 21, wherein the first computational block is further to: precompute a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and store a result of the second summation in a state register (Ai).
[00127] Example 25, the subject matter of Example 21, wherein the second computational block is to distribute a computation of a new message word across three cycles.
[00128] Example 26 is a
System on Chip (SoC) comprising: a processor core; and a hardware accelerator coupled to the processor core, the hardware accelerator comprising means for mining digital currency, the hardware accelerator comprising: a first computational block comprising a message digest datapath, wherein the first computational block comprises means for: precomputing a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and storing a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath.
[00129] Example 27, the subject matter of Example 26, wherein the first computational block further comprises means for: computing a complement of a content of a second shifted state register (Di-1); computing a second summation of the complement, a content of a second state register (Ei), and a computed value; and storing a result of the second summation in a state register (Ai).
[00130] Example 28, the subject matter of Example 27, wherein to precompute the first summation of the 32-bit message, the first computational block further comprises means for adding a content of a shifted register (Bi-2) to the first summation.
[00131] Example 29, the subject matter of Example 26, wherein the first computational block further comprises means for: precomputing a second summation of a complement of a content of a shifted state register (Ci-3), a shifted state register (Fi-1), and a computed value; and storing a result of the second summation in a state register (Ai).
[00132] Example 30, the subject matter of Example 26, wherein the second computational block comprises means for distributing a computation of a new message word across three cycles.
[00133] Example 31, the subject matter of Example 26, wherein the second computational block comprises means for distributing a computation of a new message word across six cycles.
[00134] Example 32, the subject
matter of Example 31, wherein the message scheduler datapath comprises nine logic gates.
[00135] Example 33, the subject matter of Example 26, wherein the digital currency is a Bitcoin.
[00136] Example 34 is a logic device to mine digital currency, comprising: a first computational block comprising a message digest datapath, wherein the first computational block comprises means for: precomputing a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a first shifted state register (Gi-1); and storing a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath.
[00137] Example 35, the subject matter of Example 34, wherein the first computational block further comprises means for: computing a complement of a content of a second shifted state register (Di-1); computing a second summation of the complement, a content of a second state register (Ei), and a computed value; and storing a result of the second summation in a state register (Ai).
[00138] Example 36, the subject matter of Example 35, wherein to precompute the first summation of the 32-bit message, the first computational block further comprises means for adding a content of a shifted register (Bi-2) to the first summation.
[00139] Example 37, the subject matter of Example 34, wherein the second computational block comprises means for distributing a computation of a new message word across three cycles.
[00140] Example 38 is a system, comprising: a circuit board; a processor disposed in a first location of the circuit board; an off-chip logic device operatively coupled to the processor, disposed in a second location of the circuit board, wherein the off-chip logic device comprises: a first computational block comprising a message digest datapath, wherein the first computational block comprises means for: precomputing a first summation of a 32-bit message (Wi), a 32-bit round constant (Ki), and a content of a
first shifted state register (Gi-1); and storing a result of the first summation in a state register (Hi); and a second computational block comprising a message scheduler datapath.
[00141] Example 39, the subject matter of Example 38, wherein the first computational block further comprises means for: adding a content of a shifted register (Bi-2) to the first summation; computing a complement of a content of a second shifted state register (Di-1); computing a second summation of the complement, a content of a second state register (Ei), and a computed value; and storing a result of the second summation in a state register (Ai).
[00142] Example 40, the subject matter of Example 38, wherein the second computational block comprises means for distributing a computation of a new message word across six cycles.
[00143] While embodiments of the present disclosure have been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present disclosure.
[00144] In the description herein, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation, etc., in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice embodiments of the present disclosure.
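The precomputation recited in Examples 9, 16, 21, 26, 34, and 38 can be illustrated in software. The following is a minimal Python sketch (all function and variable names are my own, not from the examples) of the idea of folding the round constant Ki and message word Wi into the h-side register one round ahead, so that the per-round critical path of the SHA-256 message digest loses two adders. A plain round loop is included so the two can be compared for equivalence; the helper functions are the standard SHA-256 ones.

```python
MASK = 0xFFFFFFFF

def rotr(x, n):
    """32-bit rotate right."""
    return ((x >> n) | (x << (32 - n))) & MASK

def ch(e, f, g):  return (e & f) ^ (~e & g & MASK)
def maj(a, b, c): return (a & b) ^ (a & c) ^ (b & c)
def big_sigma0(a): return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
def big_sigma1(e): return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)

def rounds_plain(state, w, k):
    """Reference: textbook SHA-256 round loop (64 rounds)."""
    a, b, c, d, e, f, g, h = state
    for t in range(64):
        t1 = (h + big_sigma1(e) + ch(e, f, g) + k[t] + w[t]) & MASK
        t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
        a, b, c, d, e, f, g, h = (t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g
    return (a, b, c, d, e, f, g, h)

def rounds_precomputed(state, w, k):
    """Same result, but the h register always holds h + K[t] + W[t]:
    the sum is precomputed while the previous round evaluates, so the
    round-critical t1 addition has two fewer operands."""
    a, b, c, d, e, f, g, h = state
    # Fold round 0's constants into h ahead of the loop.
    h = (h + k[0] + w[0]) & MASK
    for t in range(64):
        t1 = (h + big_sigma1(e) + ch(e, f, g)) & MASK
        t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
        # The value shifted into h is g plus the *next* round's K and W
        # (i.e. W[t+1] + K[t+1] + G, stored into H, as in Example 9).
        nxt = (g + k[t + 1] + w[t + 1]) & MASK if t + 1 < 64 else g
        a, b, c, d, e, f, g, h = (t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, nxt
    return (a, b, c, d, e, f, g, h)
```

Both loops produce identical final state for any inputs; only the placement of the two additions differs, which is what shortens the hardware datapath.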
In other instances, well known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic, and other specific operational details of computer systems have not been described in detail in order to avoid unnecessarily obscuring embodiments of the present disclosure.
[00145] The embodiments are described with reference to Bitcoin mining hardware accelerator operations in specific integrated circuits, such as in computing platforms or microprocessors. The embodiments may also be applicable to other types of integrated circuits and programmable logic devices. For example, the disclosed embodiments are not limited to desktop computer systems or portable computers, such as the Intel® Ultrabooks™ computers, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, systems on a chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. It is described that the system can be any kind of computer or embedded system. The disclosed embodiments may especially be used for low-end devices, like wearable devices (e.g., watches), electronic implants, sensory and control infrastructure devices, controllers, supervisory control and data acquisition (SCADA) systems, or the like.
Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the embodiments of methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a ‘green technology’ future balanced with performance considerations.
[00146] Although the embodiments herein are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments of the present disclosure can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of embodiments of the present disclosure are applicable to any processor or machine that performs data manipulations. However, embodiments of the present disclosure are not limited to processors or machines that perform 512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data operations and can be applied to any processor and machine in which manipulation or management of data is performed. In addition, the description herein provides examples, and the accompanying drawings show various examples for the purposes of illustration.
However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of embodiments of the present disclosure rather than to provide an exhaustive list of all possible implementations of embodiments of the present disclosure.
[00147] Although the below examples describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present disclosure can be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which when performed by a machine cause the machine to perform functions consistent with at least one embodiment of the disclosure. In one embodiment, functions associated with embodiments of the present disclosure are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present disclosure. Embodiments of the present disclosure may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present disclosure. Alternatively, operations of embodiments of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the operations, or by any combination of programmed computer components and fixed-function hardware components.
[00148] Instructions used to program logic to perform embodiments of the disclosure can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media.
Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc, Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
[00149] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium.
A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.[00150] A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. 
In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.
[00151] Use of the phrase ‘configured to,’ in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still ‘configured to’ perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate ‘configured to’ provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term ‘configured to’ does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.
[00152] Furthermore, use of the phrases ‘to,’ ‘capable of/to,’ and/or ‘operable to,’ in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.
[00153] A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state.
Often, the use of logic levels, logic values, or logical values is also referred to as 1’s and 0’s, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.
[00154] Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set. Note that any combination of values may be utilized to represent any number of states.
[00155] The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system.
For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory mediums that may receive information therefrom.
[00156] Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc, Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
[00157] Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[00158] In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
[00159] Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result.
The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. The blocks described herein can be hardware, software, firmware or a combination thereof.
[00160] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “encrypt,” “decrypt,” “perform,” “multiplications,” “key expansion,” “add,” “mix,” “reduce,” “merge,” or the like, refer to the actions and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage, transmission or display devices.
[00161] The words “example” or “exemplary” are used herein to mean serving as an example, instance or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion.
As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. Also, the terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
Methods and apparatuses for pushing data from a system agent to a cache memory.
CLAIMS
What is claimed is:
1. A method comprising: receiving a request to push data to a cache memory associated with a processor in a multi-processor system, wherein the data is to be pushed to the cache memory without a corresponding read request from the processor; storing the data in a push buffer in the processor; and transferring the data from the push buffer to the cache memory.
2. The method of claim 1 further comprising: snooping a cache request queue to determine whether a number of push buffer entries equals or exceeds a threshold level; generating a retry request corresponding to the request to push data if the number of push buffer entries equals or exceeds the threshold level; and determining whether data corresponding to the request to push data is stored in the cache memory if the number of push buffer entries does not equal or exceed the threshold level.
3. The method of claim 2 further comprising: determining whether the request to push data is a retried request to push data; and restoring a state of data corresponding to the request to push data if the request is retried.
4. The method of claim 1 further comprising: analyzing the request to push data to determine whether a device receiving the request is a target for the request; generating an acknowledgement if the device receiving the request is the target for the request; and allocating an entry in a push buffer for the data to be pushed if the device receiving the request is the target for the request.
5. The method of claim 4 further comprising snooping data bus transactions to identify data being pushed in response to the acknowledgement.
6. The method of claim 5 further comprising storing the data being pushed in the allocated entry of the push buffer.
7.
The method of claim 1 wherein transferring the data from the push buffer to the cache memory comprises: scheduling a write operation to cause the data to be written to an entry in the cache memory; requesting data arbitration for the entry in the cache memory; storing the data in the entry in cache memory; and deallocating the data from the push buffer.
8. The method of claim 7 wherein the entry in the cache memory comprises a complete cache line.
9. The method of claim 7 wherein the entry in the cache memory comprises a partial cache line.
10. The method of claim 1 wherein the request to push data is received from a direct memory access (DMA) device.
11. The method of claim 1 wherein the request to push data is received from a digital signal processor (DSP).
12. The method of claim 1 wherein the request to push data is received from a packet processor.
13. An apparatus comprising: a cache memory; an address bus interface to receive a push request from an address bus; a data bus interface to receive data to be pushed to a cache memory from a data bus; a bus queue coupled with the address bus interface to store push requests received from the address bus; a push buffer coupled with the data bus interface to store data to be pushed to the cache memory; and a cache request queue coupled with the push buffer, the bus queue and the cache memory to schedule a cache write operation to cause the data to be written to the cache memory.
14. The apparatus of claim 13 further comprising one or more inner level caches coupled with the bus queue that do not receive the data from the cache request queue.
15. The apparatus of claim 14 wherein the address bus interface snoops transactions involving the cache request queue.
16. The apparatus of claim 14 wherein the address bus interface snoops transactions involving the bus queue.
17. The apparatus of claim 14 wherein the address bus interface snoops transactions involving the inner level caches.
18.
The apparatus of claim 13 wherein the cache request queue operates to schedule a write operation to cause the data to be written to an entry in the cache memory, request data arbitration for the entry in the cache memory, store the data in the entry in cache memory, and deallocate the data from the push buffer. 19. The apparatus of claim 13 wherein the address bus interface operates to analyze the push request to determine whether the address bus interface corresponds to a target for the request and generate an acknowledgement if the device receiving the request is the target for the request. 20. A system comprising: a cache memory; an address bus interface to receive a push request from an address bus; a data bus interface to receive data to be pushed to a cache memory from a data bus; a bus queue coupled with the address bus interface to store push requests received from the address bus; a push buffer coupled with the data bus interface to store data to be pushed to the cache memory; a cache request queue coupled with the push buffer, the bus queue and the cache memory to schedule a cache write operation to cause the data to be written to the cache memory; and one or more substantially omnidirectional antennae coupled with the data bus. 21. The system of claim 20 further comprising one or more inner level caches coupled with the bus queue that do not receive the data from the cache request queue. 22. The system of claim 21 wherein the address bus interface snoops transactions involving the cache request queue. 23. The system of claim 21 wherein the address bus interface snoops transactions involving the bus queue. 24. The system of claim 21 wherein the address bus interface snoops transactions involving the inner level caches. 25. 
The system of claim 20 wherein the cache request queue operates to schedule a write operation to cause the data to be written to an entry in the cache memory, request data arbitration for the entry in the cache memory, store the data in the entry in cache memory, and deallocate the data from the push buffer. 26. The system of claim 20 wherein the address bus interface operates to analyze the push request to determine whether the address bus interface corresponds to a target for the request and generate an acknowledgement if the device receiving the request is the target for the request. 27. An apparatus comprising: a cache memory; an address bus interface to receive a push request from an address bus; a data bus interface to receive data to be pushed to a cache memory from a data bus; a bus queue coupled with the address bus interface to store push requests received from the address bus, wherein the address bus interface snoops transactions involving the bus queue; a push buffer coupled with the data bus interface to store data to be pushed to the cache memory; a cache request queue coupled with the push buffer, the bus queue and the cache memory to schedule a cache write operation to cause the data to be written to the cache memory, wherein the address bus interface snoops transactions involving the cache request queue; and one or more inner level caches coupled with the bus queue that do not receive the data from the cache request queue, wherein the address bus interface snoops transactions involving the inner level caches. 28. The apparatus of claim 27 wherein the cache request queue operates to schedule a write operation to cause the data to be written to an entry in the cache memory, request data arbitration for the entry in the cache memory, store the data in the entry in cache memory, and deallocate the data from the push buffer. 29. 
The apparatus of claim 27 wherein the address bus interface operates to analyze the push request to determine whether the address bus interface corresponds to a target for the request and generate an acknowledgement if the device receiving the request is the target for the request.
DIRECT PROCESSOR CACHE ACCESS WITHIN A SYSTEM HAVING A COHERENT MULTI-PROCESSOR PROTOCOL

TECHNICAL FIELD

[0001] Embodiments of the invention relate to multi-processor computer systems. More particularly, embodiments of the invention relate to allowing external bus agents to push data to a cache corresponding to a processor in a multi-processor computer system.

BACKGROUND

[0002] In current multi-processor systems, including Chip Multi-Processors, it is common for an input/output (I/O) device such as, for example, a network media access controller (MAC), a storage controller, or a display controller, to generate temporary data to be processed by a processor core. Using traditional memory-based data transfer techniques, the temporary data is written to memory and subsequently read from memory by the processor core. Thus, two memory accesses are required for a single data transfer.

[0003] Because traditional memory-based data transfer techniques require multiple memory accesses for a single data transfer, these data transfers may be bottlenecks to system performance. The performance penalty can be further compounded by the fact that these memory accesses are typically off-chip, which results in further memory access latencies as well as additional power dissipation. Thus, current data transfer techniques result in system inefficiencies with respect to performance and power.
BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.

Figure 1 is a block diagram of one embodiment of a computer system.

Figure 2 is a conceptual illustration of a push operation from an external agent.

Figure 3 is a conceptual illustration of a pipelined system bus architecture.

Figure 4 is a flow diagram of one embodiment of a direct cache access for pushing data from an external agent to a cache of a target processor.

Figure 5 is a control diagram of one embodiment of a direct cache access PUSH operation.

DETAILED DESCRIPTION

[0004] In the following description, numerous specific details are set forth. However, embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.

[0005] Described herein are embodiments of an architecture that supports direct cache access (DCA, or "push cache"), which allows a device to coherently push data to an internal cache of a target processor. In one embodiment the architecture includes a pipelined system bus, a coherent cache architecture and a DCA protocol. The architecture provides increased data transfer efficiencies as compared to the memory transfer operations described above.

[0006] More specifically, the architecture may utilize a pipelining bus feature and internal bus queuing structure to effectively invalidate internal caches, and effectively allocate internal data structures that accept push data requests. One embodiment of the mechanism may allow devices connected to a processor to directly move data into a cache associated with the processor.
In one embodiment a PUSH operation may be implemented with a streamlined handshaking procedure between a cache memory, a bus queue and/or an external (to the processor) bus agent.

[0007] The handshaking procedure may be implemented in hardware to provide high-performance direct cache access. In traditional data transfer operations an entire bus may be stalled for a write operation to move data from memory to a processor cache. Using the mechanism described herein, a non-processor bus agent may use a single write operation to move data to a processor cache without causing extra bus transactions and/or stalling the bus. This may decrease the latency associated with data transfer and may improve processor bus availability.

[0008] Figure 1 is a block diagram of one embodiment of a computer system. The computer system illustrated in Figure 1 is intended to represent a range of electronic systems including computer systems, network traffic processing systems, control systems, or any other multi-processor system. Alternative computer (or non-computer) systems can include more, fewer and/or different components. In the description of Figure 1, the electronic system is referred to as a computer system; however, the architecture of the computer system as well as the techniques and mechanisms described herein are applicable to many types of multi-processor systems.

[0009] In one embodiment, computer system 100 may include interconnect 110 to communicate information between components. Processor 120 may be coupled to interconnect 110 to process information. Further, processor 120 may include internal cache 122, which may represent any number of internal cache memories. In one embodiment, processor 120 may be coupled with external cache 125. Computer system 100 may further include processor 130 that may be coupled to interconnect 110 to process information. Processor 130 may include internal cache 132, which may represent any number of internal cache memories.
In one embodiment, processor 130 may be coupled with external cache 135.

[0010] While computer system 100 is illustrated with two processors, computer system 100 may include any number of processors and/or co-processors. Computer system 100 may also include random access memory controller 140 coupled with interconnect 110. Memory controller 140 may act as an interface between interconnect 110 and memory subsystem 145, which may include one or more types of memory. For example, memory subsystem 145 may include random access memory (RAM) or other dynamic storage device to store information and instructions to be executed by processor 120 and/or processor 130. Memory subsystem 145 also can be used to store temporary variables or other intermediate information during execution of instructions by processor 120 and/or processor 130. Memory subsystem 145 may further include read only memory (ROM) and/or other static storage device to store static information and instructions for processor 120 and/or processor 130.

[0011] Interconnect 110 may also be coupled with input/output (I/O) devices 150, which may include, for example, a display device, such as a cathode ray tube (CRT) controller or liquid crystal display (LCD) controller, to display information to a user; an alphanumeric input device, such as a keyboard or touch screen, to communicate information and command selections to processor 120; and/or a cursor control device, such as a mouse, a trackball, or cursor direction keys, to communicate direction information and command selections to processor 120 and to control cursor movement on a display device. Various I/O devices are known in the art.

[0012] Computer system 100 may further include network interface(s) 160 to provide access to one or more networks, such as a local area network, via wired and/or wireless interfaces. A wired network interface may include, for example, a network interface card configured to communicate using an Ethernet or optical cable.
A wireless network interface may include one or more antennae (e.g., a substantially omnidirectional antenna) to communicate according to one or more wireless communication protocols. Storage device 170 may be coupled to interconnect 110 to store information and instructions.

[0013] Instructions are provided to memory subsystem 145 from storage device 170, such as a magnetic disk, a read-only memory (ROM) integrated circuit, CD-ROM, or DVD, or via a remote connection (e.g., over a network via network interface 160) that is either wired or wireless. In alternative embodiments, hard-wired circuitry can be used in place of or in combination with software instructions. Thus, execution of sequences of instructions is not limited to any specific combination of hardware circuitry and software instructions.

[0014] An electronically accessible medium includes any mechanism that provides (i.e., stores and/or transmits) content (e.g., computer executable instructions) in a form readable by an electronic device (e.g., a computer, a personal digital assistant, a cellular telephone). For example, a machine-accessible medium includes read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals); etc.

[0015] Figure 2 is a conceptual illustration of a push operation from an external agent. The example of Figure 2 corresponds to an external (to the target processor) agent that may push data to processor 220 in a multi-processor system (processors 220, 222, 224, 226).
The agent may be, for example, a direct memory access (DMA) device, a digital signal processor (DSP), a packet processor, or any other system component external to the target processor.

[0016] The data that is pushed by agent 200 may correspond to a full cache line or the data may correspond to a partial cache line. In one embodiment, during push operation 210, agent 200 may push data to an internal cache of processor 220. Thus, the data may be available for a cache hit on a subsequent load to the corresponding address by processor 220.

[0017] In the example of Figure 2, push operation 210 is issued by agent 200, which is coupled to peripheral bus 230; peripheral bus 230 may also be coupled with other agents (e.g., agent 205). Push operation 210 may be passed from peripheral bus 230 to system interconnect 260 by bridge/agent 240. Agents may also be coupled with system interconnect 260 (e.g., agent 235). The target processor (processor 220) may receive push operation 210 from bridge/agent 240 over system interconnect 260. Any number of processors may be coupled with system interconnect 260. Memory controller 250 may also be coupled with system interconnect 260.

[0018] Figure 3 is a conceptual illustration of a pipelined system bus architecture. In one embodiment, the bus is a free-running non-stall bus. In one embodiment, the pipelined system bus includes separate address and data buses, both of which have one or more stages. In one embodiment, the address bus may operate using address request stage 310, address transfer stage 320 and address response stage 330. In one embodiment, one or more of the stages illustrated in Figure 3 may be further broken down into multiple sub-stages.

[0019] In one embodiment, snoop agents may include snoop stage 360 and snoop response stage 370. The address stages and the snoop stages may or may not be aligned based on, for example, the details of the bus protocol being used.
Snooping is known in the art and is not discussed in further detail herein. In one embodiment, the data bus may operate using data request stage 340 and data transfer stage 350.

[0020] In one embodiment the system may support a cache coherency protocol, for example, MSI, MESI, MOESI, etc. In one embodiment, the following cache line states may be used.

Table 1: Cache Line States for Target Processor

[0021] In one embodiment, PUSH requests and PUSH operations are performed at the cache line level; however, other granularities may be supported, for example, partial cache lines, bytes, multiple cache lines, etc. In one embodiment, initiation of a PUSH request may be identified by a write line operation with a PUSH attribute. The PUSH attribute may be, for example, a flag or a sequence of bits or other signal that indicates that the write line operation is intended to push data to a cache memory. If the PUSH operation is used to push data that does not conform to a cache line, different operations may be used to initiate the PUSH request.

[0022] In one embodiment, the agent initiating the PUSH operation may provide a target agent identifier that may be embedded in an address request using, for example, lower address bits. The target agent identifier may also be provided in a different manner, for example, through a field in an instruction or by a dedicated signal path. In one embodiment, a bus interface of a target agent may include logic to determine whether the host agent is the target of a PUSH operation. The logic may include, for example, comparison circuitry to compare the lower address bits with an identifier of the host agent.

[0023] In one embodiment, the target agent may include one or more buffers to store an address and data corresponding to a PUSH request. The target agent may have one or more queues and/or control logic to schedule transfer of data from the buffers to the target agent cache memory.
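The identifier comparison described in paragraph [0022] can be sketched in software. This is a minimal illustration, not the patented circuit: the 4-bit identifier field and the 64-byte cache line size are assumed values chosen for the example.

```python
# Hedged sketch of embedding and decoding a PUSH target identifier in the
# lower address bits of a cache-line-aligned address request.
# Field widths are illustrative assumptions, not taken from the specification.

CACHE_LINE_BITS = 6   # 64-byte lines -> low 6 bits are unused line offset
TARGET_ID_BITS = 4    # up to 16 target agents (assumed width)

def embed_target_id(line_address: int, target_id: int) -> int:
    """Pack a target agent identifier into the unused low bits of a
    cache-line-aligned address, as an initiating agent might do."""
    assert line_address % (1 << CACHE_LINE_BITS) == 0, "address must be line aligned"
    assert 0 <= target_id < (1 << TARGET_ID_BITS)
    return line_address | target_id

def is_push_target(request_address: int, host_agent_id: int) -> bool:
    """Comparison logic in a target agent's bus interface: extract the
    embedded identifier and compare it with the host agent's own id."""
    embedded_id = request_address & ((1 << TARGET_ID_BITS) - 1)
    return embedded_id == host_agent_id

addr = embed_target_id(0x1000, target_id=3)
assert is_push_target(addr, 3)
assert not is_push_target(addr, 5)
```

In hardware this comparison would be a handful of gates on the address bus interface; the point of the sketch is only to show that the target identifier rides in bits the line-aligned address does not use.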
Various embodiments of the buffers, queues and control logic are described in greater detail below. Data may be pushed to a cache memory of a target agent by an external agent without processing by the core logic of the target agent. For example, a direct memory access (DMA) device or a digital signal processor (DSP) may use the PUSH operation to push data to a processor cache without requiring the processor core to coordinate the data transfer.

[0024] Figure 4 is a flow diagram of one embodiment of a direct cache access for pushing data from an external agent to a cache of a target processor. The agent having data to be pushed to the target device issues a PUSH request, 400. The PUSH request may be indicated by a specific instruction (e.g., write line) that may have a predetermined bit or bit sequence. In one embodiment the PUSH request may be initiated at a cache line granularity. In one embodiment, the initiating agent may specify the target of the PUSH operation by specifying a target identifier during the address request stage of the PUSH operation.

[0025] In one embodiment a processor or other potential target agent may snoop internal caches and/or bus queues, 405. The snooping functionality may allow the processor to determine whether that processor is the target of a PUSH request. Various snooping techniques are known in the art. In one embodiment, the processor snoops the address bus to determine whether the lower address bits correspond to the processor.

[0026] In one embodiment, if the target processor push buffer is full, 410, a PUSH request may result in a retry request, 412. In one embodiment, if a request is not retried, the potential target agent may determine whether it is the target of the PUSH request, 415, which may be indicated by a snoop hit.
A snoop hit may be determined by comparing an agent identifier with a target agent identifier that may be embedded in the PUSH request.

[0027] In one embodiment, if the target agent experiences a snoop hit, 415, the cache line corresponding to the cache line to be pushed is invalidated, 417. If the target agent experiences a snoop miss, 415, a predetermined miss response is performed, 419. The miss response can be any type of cache line miss response known in the art and may be dependent upon the cache coherency protocol being used.

[0028] After either the line invalidation, 417, or the miss response, 419, the target agent may determine whether the current PUSH request is retried, 420. If the PUSH request is retried, 420, the target agent determines whether the line was dirty, 425. If the line was dirty, 425, the cache line state may be updated to dirty, 430, to restore the cache line to its original state.

[0029] If the PUSH request is not retried, 420, the target agent may determine whether it is the target of the PUSH request, 435. If the target agent is the target of the PUSH request, 435, the target agent may acknowledge the PUSH request and allocate a slot in a PUSH buffer, 440. In one embodiment, the allocation of the PUSH buffer, 440, completes the address phase of the PUSH operation and subsequent functionality is part of a data phase of the PUSH operation. That is, in one embodiment, procedures performed through allocation of the PUSH buffer, 440, may be performed in association with the address bus using the address bus stages described above. Procedures performed subsequent to allocation of the PUSH buffer, 440, may be performed in association with the data bus using the data bus stages described above.

[0030] In one embodiment, the target agent may monitor data transactions for transaction identifiers, 445, that correspond to the PUSH request causing the allocation of the PUSH buffer, 440.
When a match is identified, 450, the data may be stored in the PUSH buffer, 455.

[0031] In one embodiment, in response to the data being stored in the PUSH buffer, 455, bus control logic (or other control logic in the target agent) may schedule a data write to the cache of the target agent, 460. In one embodiment, the bus control logic may enter a write request corresponding to the data in a cache request queue. Other techniques for scheduling the data write operation may also be used.

[0032] In one embodiment, control logic in the target agent may request data arbitration for the cache memory, 465, to allow the data to be written to the cache. The data may be written to the cache, 470. In response to the data being written to the cache, the PUSH buffer entry corresponding to the data may be deallocated, 475. If the cache line was previously in a dirty state (e.g., M or O), the cache line may be updated to its original state. If the cache line was previously in a clean state (e.g., E or S), the cache line may be left invalid.

[0033] Figure 5 is a control diagram of one embodiment of a direct cache access PUSH operation. In one embodiment, target agent 590 may include multiple levels of internal caches. Figure 5 illustrates only one of many processor architectures including internal cache memories. In the example of Figure 5, the directly accessible cache is an outer level cache with ownership capability and the inner level cache(s) is/are write-through cache(s). In one embodiment a PUSH operation may invalidate all corresponding cache lines stored in the inner level cache(s). In one embodiment, the bus queue may be a data structure that tracks in-flight snoop requests and bus transactions.

[0034] In one embodiment, a PUSH request may be received by address bus interface 500 and data for the PUSH operation may be received by data bus interface 510. Data bus interface 510 may forward data from a PUSH operation to PUSH buffer 540.
The data may be transferred from the PUSH buffer 540 to cache request queue 550 and then to directly accessible cache 560 as described above.

[0035] In one embodiment, in response to a PUSH request, address bus interface 500 may snoop transactions between various functional components. For example, address bus interface 500 may snoop entries to cache request queue 550, bus queue 520 and/or inner level cache(s) 530. In one embodiment, invalidation and/or confirmation messages may be passed between bus queue 520 and cache request queue 550.

[0036] In one embodiment, within a multi-processor system, each processor core may have an associated local cache memory structure. The processor core may access the associated local cache memory structure for code fetches and data reads and writes. The cache utilization may be affected by program cacheability and the cache hit rate of the program that is being executed.

[0037] For a processor core that supports the PUSH operation, the external bus agent may initiate a cache write operation from outside of the processor. Both the processor core and the external bus agent may compete for cache bandwidth. In one embodiment, a horizontal processing model may be used in which multiple processors may perform equivalent tasks and data may be pushed to any processor. Allocation of traffic associated with PUSH operations may improve performance by avoiding unnecessary PUSH request retries.

[0038] Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention.
The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.[0039] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
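The address-phase and data-phase handling of the PUSH flow described above (Figure 4) can be modeled in software. This is a hedged sketch under simplifying assumptions: the buffer depth, the return values, and the final cache line state ("M" for freshly pushed data) are illustrative choices, not details taken from the specification.

```python
# Hedged software model of the Figure 4 PUSH flow. Numeric comments
# (410, 415, 440, ...) refer to the flow-diagram step numbers in the text.
from collections import deque

class TargetAgent:
    def __init__(self, agent_id, push_buffer_depth=4):
        self.agent_id = agent_id
        self.push_buffer = deque()          # allocated PUSH entries (addresses)
        self.depth = push_buffer_depth      # assumed depth
        self.cache = {}                     # address -> (state, data)

    def handle_push_request(self, target_id, address):
        # 410/412: a full push buffer forces a retry of the request
        if len(self.push_buffer) >= self.depth:
            return "retry"
        # 415: snoop hit only if this agent is the target of the request
        if target_id != self.agent_id:
            return "miss"
        # 417: invalidate the corresponding line; 440: allocate a buffer slot
        self.cache[address] = ("I", None)
        self.push_buffer.append(address)
        return "ack"

    def accept_push_data(self, address, data):
        # 450/455: data matching an allocated entry is stored, then
        # 460-475: written to the cache and the buffer entry deallocated.
        if address in self.push_buffer:
            self.cache[address] = ("M", data)   # final state "M" is an assumption
            self.push_buffer.remove(address)
            return True
        return False

agent = TargetAgent(agent_id=1)
assert agent.handle_push_request(target_id=1, address=0x40) == "ack"
assert agent.accept_push_data(0x40, b"payload")
assert agent.cache[0x40] == ("M", b"payload")
assert agent.handle_push_request(target_id=2, address=0x80) == "miss"
```

The sketch collapses the address-bus and data-bus stages into two method calls; in the hardware described above these phases are pipelined and the retry/dirty-restore path (steps 420-430) involves real coherence state rather than a dictionary entry.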
The invention relates to multi-pitch leads. In some examples, a system comprises a die having multiple electrical connectors extending from a surface of the die and a lead coupled to the multiple electrical connectors. The lead comprises a first conductive member (1010); a first non-solder metal plating (1014) stacked on the first conductive member; an electroplated layer (1020) stacked on the first non-solder metal plating; a second non-solder metal plating (1008) stacked on the electroplated layer; and a second conductive member (1000) stacked on the second non-solder metal plating, the second conductive member being thinner than the first conductive member. The system also comprises a molding to at least partially encapsulate the die and the lead.
1. A system comprising: a die having a plurality of electrical connectors extending from a surface of the die; a lead coupled to the plurality of electrical connectors, the lead comprising: a first conductive member; a first non-solder metal plating layer stacked on the first conductive member; an electroplated layer stacked on the first non-solder metal plating layer; a second non-solder metal plating layer stacked on the electroplated layer; and a second conductive member stacked on the second non-solder metal plating layer, the second conductive member being thinner than the first conductive member; and a molding that at least partially encapsulates the die and the lead. 2. The system of claim 1, wherein the second conductive member has a larger volume than the first conductive member. 3. The system of claim 1, wherein the second conductive member is oriented orthogonally to the first conductive member. 4. The system of claim 1, wherein, in a common plane with the first conductive member, the spacing between the first conductive member and the next conductive member is between 10 micrometers and 200 micrometers. 5. The system of claim 1, wherein the second conductive member extends in multiple directions in a common plane. 6. The system of claim 1, wherein the first non-solder metal plating layer and the second non-solder metal plating layer are non-copper metal plating layers. 7. The system of claim 1, wherein the die includes a power transistor, and wherein the first conductive member is coupled to one of a source lead or a drain lead of the power transistor. 8. The system of claim 1, wherein the non-solder metal is selected from the group consisting of nickel, nickel palladium, nickel palladium gold, nickel tungsten, tin, tin gold, gold, and silver. 9. The system of claim 1, wherein the electroplated layer comprises electroplated copper. 10. A system comprising: a die having electrical connectors extending from a surface of the die, the electrical connectors being
arranged in multiple rows; a first set of leads arranged in multiple rows, the multiple rows of leads being coupled to the multiple rows of electrical connectors; a second set of leads having a first lead and a second lead, the first lead being coupled through a non-solder metal to alternating leads in the plurality of rows of leads, and the second lead being coupled through the non-solder metal to the other alternating leads in the plurality of rows of leads, the first lead and the second lead being thicker than the first set of leads; and a molding that at least partially encapsulates the first set of leads and the second set of leads. 11. The system of claim 10, wherein each row of the plurality of rows in the first set of leads is coupled to a different row of the plurality of rows of electrical connectors. 12. The system of claim 10, wherein the non-solder metal is positioned on the first lead and the second lead in alignment with the plurality of rows of leads. 13. The system of claim 10, further comprising an additional non-solder metal coupled to the non-solder metal. 14. The system of claim 10, wherein the first lead and the second lead are exposed on a first surface of the molding. 15. The system of claim 10, wherein the first lead has a length that extends across all of the rows of leads in the first set of leads. 16. A system comprising: a die including multiple rows of electrical connectors; a first plurality of leads, each of the first plurality of leads being coupled to a different row of electrical connectors; and a second plurality of leads positioned in a different plane from the first plurality of leads, wherein a first lead of the second plurality of leads is coupled, through a plurality of non-solder metals, to a plurality of non-continuous leads in the first plurality of leads, and a second lead of the second plurality of leads is coupled to another plurality of non-continuous leads in the first
plurality of leads through a plurality of non-solder metals, wherein the first lead and the second lead of the second plurality of leads are thicker than the leads of the first plurality of leads. 17. The system of claim 16, wherein the spacing between the leads of the first plurality of leads is in a range between 10 microns and 200 microns. 18. The system of claim 16, wherein the plurality of non-solder metals are the same. 19. A method comprising: coupling a first set of leads to a plurality of electrical connectors extending from a surface of a die, the first set of leads having leads arranged in multiple rows; selectively plating a second set of leads with a non-solder metal to produce a metal plate, the second set of leads having multiple parts and being thicker than the first set of leads; coupling the metal plates of the second set of leads to the rows using non-solder metal; and applying a molding to at least partially encapsulate the first set of leads and the second set of leads. 20. The method of claim 19, further comprising applying an insulating coating to the second set of leads, the metal plate remaining exposed after the insulating coating is applied. 21. The method of claim 20, further comprising: placing the first set of leads and the second set of leads in an electroplating bath; applying current to the second set of leads to create a non-solder metal connection between the metal plate and the leads of the first set of leads; and stripping the insulating coating from the second set of leads. 22. A method comprising: providing a first set of leads; forming a first non-solder metal plate on the first set of leads; providing a second set of leads, wherein the spacing between the leads of the second set of leads is smaller than the spacing between the leads of the first set of leads, the leads of the second set of leads are shorter than the leads of the first set of leads, and the leads of the second set of leads are thicker; forming a second non-solder metal plate on the second set of leads; coating the first
set of leads and the second set of leads with a non-conductive material, such that the first non-solder metal plate and the second non-solder metal plate remain exposed after the coating is completed; forming a non-solder metal connection between the first non-solder metal plate and the second non-solder metal plate using an electroplating technique; and removing the coating from the first set of leads and the second set of leads. 23. The method of claim 22, wherein one of the leads of the second set of leads has a length that extends across the first set of leads. 24. A method comprising: providing a package including a die coupled to a first plurality of leads, the first plurality of leads being exposed on a surface of the package; providing a second plurality of leads, the leads of the second plurality of leads being thicker than the leads of the first plurality of leads, and the first plurality of leads having a finer lead spacing than the second plurality of leads; forming a metal plate on the second plurality of leads; coating the second plurality of leads with a non-conductive material; coupling the metal plate of the second plurality of leads to the first plurality of leads using an electroplating process; removing the coating from the second plurality of leads; and encapsulating the second plurality of leads with a molding. 25. The method of claim 24, wherein a lead of the second plurality of leads has a length that extends across the first plurality of leads.
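The layer stack recited in claim 1 above can be modeled as a small data structure, which makes the claimed constraint (the second conductive member is thinner than the first) mechanically checkable. All thickness values below are illustrative assumptions, not dimensions from the patent.

```python
# Hedged sketch: a data model of the claim-1 lead stack, bottom to top.
# Layer names follow the claim; thicknesses are made-up example values.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    thickness_um: float   # micrometers

def lead_stack(first_um: float, second_um: float) -> list:
    """Build the claim-1 stack, bottom to top."""
    return [
        Layer("first conductive member", first_um),
        Layer("first non-solder metal plating", 2.0),
        Layer("electroplated layer (e.g., copper)", 10.0),
        Layer("second non-solder metal plating", 2.0),
        Layer("second conductive member", second_um),
    ]

def satisfies_claim_1(stack: list) -> bool:
    """Claim 1 requires the second conductive member to be thinner
    than the first conductive member."""
    first = next(l for l in stack if l.name == "first conductive member")
    second = next(l for l in stack if l.name == "second conductive member")
    return second.thickness_um < first.thickness_um

assert satisfies_claim_1(lead_stack(first_um=100.0, second_um=50.0))
assert not satisfies_claim_1(lead_stack(first_um=50.0, second_um=100.0))
```

The model captures only the stacking order and the single thickness relation of claim 1; the orientation, volume, and spacing limitations of the dependent claims would need additional fields.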
Multi-pitch leads

Summary of the invention

In some examples, a system includes a die and leads, the die having a plurality of electrical connectors extending from a surface of the die, the leads being coupled to the plurality of electrical connectors. The lead includes: a first conductive member; a first non-solder metal plating layer stacked on the first conductive member; an electroplating layer stacked on the first non-solder metal plating layer; a second non-solder metal plating layer stacked on the electroplating layer; and a second conductive member stacked on the second non-solder metal plating layer, the second conductive member being thinner than the first conductive member. The system also includes a molding to at least partially encapsulate the die and leads.

In some examples, a method includes coupling a first set of leads to a plurality of electrical connectors extending from the surface of the die, wherein the first set of leads has leads arranged in multiple rows. The method also includes selectively plating a second set of leads with a non-solder metal to produce a metal plate, the second set of leads having multiple portions and being thicker than the first set of leads. The method also includes coupling the metal plates of the second set of leads to the plurality of rows using non-solder metal.
The method also includes applying a molding to at least partially encapsulate the first set of leads and the second set of leads.

Description of the drawings

In order to describe each example in detail, reference will now be made to the accompanying drawings, in which:

Figure 1 depicts a perspective view of a die with multiple electrical connectors according to various examples.
Figure 2 depicts a perspective view of a set of leads of a lead frame according to various examples.
Figure 3 depicts a perspective view of a set of leads of a lead frame coupled to a die having a plurality of electrical connectors according to various examples.
Figure 4A depicts a perspective view of a package with multiple leads according to various examples.
Figure 4B depicts a perspective view of a package with multiple leads according to various examples.
Figure 5A depicts a perspective view of a set of leads according to various examples.
Figure 5B depicts a perspective view of a set of leads with multiple plates according to various examples.
Figure 5C depicts a front view of a set of leads with multiple plates according to various examples.
Figure 6 depicts a first perspective view of a group of leads having a plurality of plates and a second perspective view of the group of leads having an insulating coating according to various examples.
Figure 7A depicts a perspective view of a set of leads with multiple plates and a package with multiple leads according to various examples.
Figure 7B depicts a front view of a set of leads with multiple plates and a package with multiple leads according to various examples.
Figure 7C depicts a side view of a set of leads with multiple plates and a package with multiple leads according to various examples.
Figure 7D depicts plating of a set of leads with multiple plates and a package with multiple leads according to various examples.
Figure 7E depicts electroplating of a set of leads with multiple plates and a package with multiple leads according to various examples.
Figure 8A depicts a front view of a set of leads with a plurality of plates coupled to a package having a plurality of leads according to various examples.
Figure 8B depicts a perspective view of a set of leads with a plurality of plates coupled to a package having a plurality of leads according to various examples.
Figure 8C depicts a side view of a set of leads with a plurality of plates coupled to a package having a plurality of leads according to various examples.
Figure 9A depicts a front view of a package assembly with an exposed set of leads according to various examples.
Figure 9B depicts a front view of a package assembly with an exposed set of leads according to various examples.
Figure 9C depicts a perspective view of a package assembly with an exposed set of leads according to various examples.
Figure 9D depicts a side view of a package assembly with an exposed set of leads according to various examples.
Figure 9E depicts another perspective view of the package assembly of Figure 9A according to various examples.
Figure 10A depicts a perspective view of a set of leads of a lead frame according to various examples.
Figure 10B depicts a perspective view of a metal plate on a set of leads of a lead frame according to various examples.
Figure 10C depicts a perspective view of a set of leads of a lead frame according to various examples.
Figure 10D depicts a perspective view of a metal plate on a set of leads of a lead frame according to various examples.
Figure 10E depicts a perspective view of a metal plate on a set of leads of a lead frame according to various examples, where the leads have an insulating coating.
Figure 10F depicts a perspective view of a metal plate on a lead in a set of leads of a lead frame according to various examples, where the lead has an insulating coating.
Figure 10G depicts a perspective view of a set of leads of one lead frame aligned with a set of leads of another lead frame, the sets of leads having an insulating coating, according to various examples.
Figure 10H depicts a side view of a set of leads of one lead frame aligned with a set of leads of another lead frame, the sets of leads having an insulating coating, according to various examples.
Figure 10I depicts a side view of a set of leads of one lead frame coupled to a set of leads of another lead frame, the sets of leads having an insulating coating, according to various examples.
Figure 10J depicts a perspective view of a set of leads of one lead frame coupled to a set of leads of another lead frame, the sets of leads having an insulating coating, according to various examples.
Figure 10K depicts a perspective view of a set of leads of one lead frame coupled to a set of leads of another lead frame according to various examples.
Figure 11A depicts a perspective view of a die, a first set of leads of one lead frame, and a second set of leads of another lead frame, according to various examples.
Figure 11B depicts a perspective view of a package housing a die coupled to a first set of leads of one lead frame and a second set of leads of another lead frame according to various examples.
Figure 11C depicts a perspective view of a package with multiple exposed leads according to various examples.
Figure 11D depicts a perspective view of a package with multiple exposed leads according to various examples.
Figure 12 depicts a flowchart of a manufacturing method according to various examples.
Figure 13 depicts a flowchart of another manufacturing method according to various examples.
Figure 14 depicts a flowchart of another manufacturing method according to various examples.

Detailed description

The present disclosure proposes various technically advantageous solutions to technical problems arising from the use of certain types of packaged devices, such as power field effect transistors (FETs). In particular, these devices are sometimes packaged in a manner that undesirably promotes deterioration of the electrical connection between the device and a printed circuit board (PCB) (or other device), thereby shortening the life of the device and increasing cost. This problem is usually caused by the design of the lead frame used to package the device. Such a lead frame has leads arranged in rows, and these leads are exposed on the surface of the package so that the leads can be electrically coupled to the PCB. These rows are half-etched in an alternating manner so that the pitch between the rows is increased (e.g., doubled) relative to the pitch that would otherwise exist, thereby precluding rows with unacceptably fine pitches. Solder is then used to electrically couple the unetched portion of each row to the PCB or other device.

The above configuration is problematic at least because the current passing between the packaged device and the PCB flows through a relatively narrow part of each lead, that is, through the unetched part of each lead.
This narrow path concentrates the current, which has a deleterious effect on the solder joints to which the unetched portions of the leads are coupled. For example, the heat generated by this concentrated current flow can damage the solder joints. After an undesirably short time, the joints (and possibly the leads themselves) may be damaged, and the packaged device can no longer serve its intended purpose.

The present disclosure describes various examples of improved package configurations for the aforementioned devices (e.g., power FETs). The improved package configuration omits the aforementioned lead frame with half-etched leads and instead uses a first lead frame with non-etched leads. A non-solder material (for example, any metal other than solder) is used to electrically couple the leads of the first lead frame to the leads of a second lead frame. In an example, the leads of the second lead frame are substantially thicker than the leads of the first lead frame. In an example, the number of leads of the second lead frame is less than the number of leads of the first lead frame. In an example, the leads of the second lead frame are positioned substantially orthogonal to the leads of the first lead frame (for example, within 10 degrees of orthogonal). Each lead in the second lead frame is coupled to alternating leads in the first lead frame, such that the leads in the second lead frame are not coupled to any common lead in the first lead frame. Some or all of one or both lead frames are then encapsulated in a molding material (e.g., epoxy) to form a complete device. At least the leads of the second lead frame are exposed on the surface of the package to facilitate electrical connection with a PCB or other devices (for example, using solder). Because the half-etching is omitted, the current is not concentrated as described above, thus reducing harmful effects on the structure of the packaged device.
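The current-crowding argument can be illustrated with a deliberately simplified back-of-envelope calculation. The sketch below is not part of the disclosure: the lead dimensions are loosely based on the example leads described later (2 mm long, 300 microns wide), while the 100-micron thickness and the 10 A current are assumed purely for illustration, using the textbook relations R = ρL/A and P = I²R.

```python
# Illustrative sketch (not from the disclosure): Joule heating in a lead
# scales as P = I^2 * R with R = rho * L / A, so halving the conducting
# cross-section, as half-etching roughly does, roughly doubles the heat
# dissipated at the joint for the same current.

RHO_CU = 1.68e-8  # resistivity of copper, ohm*m (handbook value)

def lead_resistance(length_m, area_m2, rho=RHO_CU):
    """Resistance of a uniform conductor: R = rho * L / A."""
    return rho * length_m / area_m2

def joule_heating(current_a, resistance_ohm):
    """Power dissipated as heat: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# Hypothetical lead: 2 mm long and 300 um wide; the 100 um thickness and
# the 10 A current are assumptions for illustration only.
length = 2e-3
full_area = 300e-6 * 100e-6          # full cross-section
half_etched_area = full_area / 2     # half-etching leaves ~half the metal

p_full = joule_heating(10.0, lead_resistance(length, full_area))
p_half = joule_heating(10.0, lead_resistance(length, half_etched_area))
print(f"full lead: {p_full * 1e3:.1f} mW, half-etched: {p_half * 1e3:.1f} mW")
```

With these assumed numbers, the half-etched conduction path dissipates twice the heat of the full-thickness path at the same current, which is the effect the non-etched first lead frame is intended to avoid.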
In addition, since in some examples the solder is only used to couple the relatively large leads of the second lead frame to the PCB, the current is distributed over a wider lead surface area than would be the case with the leads of the first lead frame, thereby reducing the harmful effects on the solder joints and maintaining the structural integrity of the packaged device. An exemplary package configuration will now be described in detail with reference to the accompanying drawings.

FIG. 1 depicts a perspective view of a die 100 according to various examples. In some examples, the die 100 includes a power transistor (such as a power FET). In other examples, the die 100 includes other types of devices. The die 100 includes a plurality of electrical connectors that are coupled to active portions of the die 100, such as the source and drain terminals of a power FET. For example, the electrical connector 108 is coupled to the source terminal of the power FET, and the electrical connector 110 is coupled to the drain terminal of the power FET. As shown, the electrical connectors 108 and the electrical connectors 110 are arranged in alternating rows. In this example, the electrical connector 104 is coupled to the source terminal of the power FET. The electrical connector 102 is coupled to, for example, an area of the die 100 that generates data and control signals. In some examples, the electrical connectors include copper. In some examples, the electrical connectors are cylindrical. In some examples, the electrical connectors are cubic. Other compositions and shapes are conceivable and included within the scope of the present disclosure. In some examples, the spacing between the rows containing electrical connectors 108 and the rows containing electrical connectors 110 is about 50 microns.

FIG. 2 depicts a perspective view of the first set of leads of the lead frame 200 according to various examples.
As shown in the figure, the lead frame 200 includes leads 208 and leads 210 arranged in an alternating configuration. The lead frame 200 further includes a lead 204 and a lead 202. A dam bar 201 connects the leads 202, 204, 208, and 210 together. In an example, the lead frame 200 includes bare copper. In an example, the lead frame 200 includes plated metal, such as an electroplated copper lead frame. Because the various parts of the lead frame 200 are in electrical contact, electroplating is a feasible plating technique. As shown, in some examples the leads 208 and 210 have a rectangular shape with rounded edges. In some examples, the leads 208 and 210 have a rectangular shape with non-rounded edges, or with a mix of rounded and non-rounded edges. In some examples, the lead 202 has a fan-out shape as depicted. In some examples, the length of a lead 208 is similar to the length of the row containing the electrical connectors 108 (i.e., within a reasonable tolerance as judged by one of ordinary skill in the art, such as +/- 5 mm). In some examples, the length of a lead 210 is similar to the length of the row containing the electrical connectors 110. In some examples, the spacing between a lead 208 and a lead 210 is about 500 microns. In an example, the leads 208 and 210 are approximately 2 millimeters long and approximately 300 microns wide.

FIG. 3 depicts the leads of FIG. 2 coupled to the die 100 of FIG. 1 according to various examples. As shown, each lead 208 is coupled to a corresponding row containing electrical connectors 108. Similarly, each lead 210 is coupled to a corresponding row containing electrical connectors 110. The lead 204 is coupled to the electrical connector 104, and the lead 202 is coupled to the electrical connector 102. In the example, solder is used to establish the aforementioned connections, but other conductive substances (for example, metals, alloys) are conceivable and included in the scope of the present disclosure. In FIG. 3 and subsequent drawings, the dam bar 201 is not shown, for clarity and ease of explanation.

FIG. 4A depicts a perspective view of a package 400 according to various examples. The package 400 includes the assembly depicted in FIG. 3 at least partially encapsulated in a molding 401 (e.g., epoxy). In an example, the leads 202, 204, 208, and 210 of FIG. 3 are exposed on the surface 402 of the package 400. In the example, the leads 202, 204, 208, and 210 are not half-etched; in other words, the exposed portions of the leads have a uniform thickness. Therefore, in such an example, the exposed surfaces of the leads may be evenly flush with the surface 402 of the package 400. Similarly, in such an example, if the leads are raised above the surface 402 of the package 400, the leads have a uniform thickness (e.g., 50 microns) above the surface 402. The scope of the present disclosure is not necessarily limited to any specific thickness of leads or any specific degree of uniformity between leads. For example, a lead having a "uniform" thickness may have a thickness that is not exactly the same throughout but is similar enough to achieve the purposes described in this disclosure. FIG. 4B depicts a perspective view of another example of the package 400. In this view, the package 400 and its leads have dimensions different from those shown in FIG. 4A.

Figure 5A depicts a perspective view of a set of leads 500 according to various examples. The set 500 includes leads 502, 504, 506, and 508. In an example, the leads in the set 500 are substantially thicker than the leads of the lead frame 200 (e.g., 200 microns). In the example, the number of leads in the set 500 is less than the number of leads of the lead frame 200. In an example, the leads in the set 500 include unplated copper.

FIG. 5B depicts a perspective view of the leads in the set 500 according to an example. As shown, the leads depicted in FIG. 5B have a conductive plating 510 applied to them.
In an example, a plating layer 510 is applied to the lead 502 to facilitate coupling of the lead 502 to the lead 202 on the package 400. In an example, a plating layer 510 is applied to the lead 504 to facilitate coupling to the leads 210 on the package 400. In an example, a plating layer 510 is applied to the lead 506 to facilitate coupling to the leads 208 on the package 400. In an example, a plating layer 510 is applied to the lead 508 to facilitate coupling to the lead 204 on the package 400. To facilitate coupling of the lead 504 to only the leads 210 and of the lead 506 to only the leads 208, the plating 510 is applied to the leads 504 and 506 in a staggered manner, as shown. Figure 5C depicts a front view of the set of leads 500. In an example, the plating layer 510 (and likewise the other plating layers described herein) may be composed of the following materials: nickel; nickel-palladium; nickel-palladium-gold; nickel-tungsten; tin; tin-gold; gold; and silver.

Figure 6 depicts a perspective view of the set of leads 500 undergoing a coating process. Specifically, the set of leads 500 is sprayed with a mixture of mercaptopropyltrimethoxysilane (MPTS) and methanol or ethanol, or is immersed in the mixture. The application of MPTS to the copper surfaces of the set of leads 500 results in the leads having a non-conductive coating 512. Although the scope of the present disclosure is not limited to any particular coating technique or resulting coating composition, in the example the coating 512 is copper sulfide. MPTS does not react with the plating layer 510, so no coating is applied to the plating layer 510. The coating 512 can be reinforced by thermal curing (for example, holding at 50 to 60 degrees Celsius for 2 to 10 minutes). The reinforced coating 512 can better withstand chemical and thermal damage during subsequent electroplating.

FIG. 7A depicts a perspective view of the set of leads 500 (with coating) in the process of being coupled to the package 400.
The leads 502 of the set 500 are aligned with the leads 202 of the package 400 but are not yet coupled. The lead 504 is aligned with the leads 210 but not yet coupled. The lead 506 is aligned with the leads 208 but not yet coupled. The lead 508 is aligned with the lead 204 but not yet coupled.

FIG. 7B depicts a front view of the coupling process between the set of leads 500 and the package 400. In an example, at least some leads of the set of leads 500 are positioned substantially orthogonal to the leads of the lead frame 200 (e.g., within 10 degrees of orthogonal). In an example, the lead 504 is long enough that it spans the entire array of leads 208, 210. In an example, the lead 506 is long enough that it spans the entire array of leads 208, 210. In an example, each lead in the set 500 has a larger surface area and volume than the corresponding lead depicted in FIG. 2.

Figure 7C depicts a side view of the aforementioned coupling process. As shown, each instance of the plating layer 510 is aligned with the corresponding lead on the package 400 but has not yet been coupled. In some examples, the distance between the plating layer 510 and each lead of the package 400 is between 4 micrometers and 60 micrometers.

FIGS. 7D and 7E depict a process in which the gap between the plating layer 510 and each lead of the package 400 is bridged. Specifically, in FIG. 7D, the set of leads 500 and the package 400 are placed in an electroplating bath (such as a copper or nickel electroplating bath). A current is then applied to the leads of the set 500. Although not explicitly shown in the drawings, the leads in the set 500 form a common electrical path using, for example, conductive dam bars. As depicted in FIG. 7E, the current and the electroplating bath cause a conductive plating layer 700 (e.g., a copper plating layer) to form between the plating layer 510 and the corresponding lead on the package 400.

Figure 8A depicts a front view of the assembly of Figure 7E, but with the non-conductive coating 512 peeled off. Any suitable solvent, such as acetone or pyrrolidone, may be used to peel the coating 512. Figure 8B depicts a perspective view of the assembly of Figure 8A. Figure 8C depicts a side view of the assembly of Figure 8A.

Figure 9A depicts a front view of the package assembly 900. The package assembly 900 includes the package 400 (not explicitly shown in FIG. 9A, but depicted in FIGS. 9C and 9D) and a molded part (e.g., epoxy resin) 902 that encapsulates those components depicted in FIGS. 8A to 8C that are not included in the package 400. As shown in the figure, the leads 500 are exposed on the surface of the molded part 902. In the example, the leads 500 are evenly flush with the surface of the molded part 902. Figure 9B depicts the package assembly 900 of Figure 9A, but with plating applied on the leads 500 (for example, using any suitable metal). FIGS. 9C, 9D, and 9E provide a front perspective view, a side view, and a rear perspective view of the package assembly 900, respectively.

In addition to the technical advantages already described, the foregoing examples provide flexibility in coupling fine-pitch devices (e.g., dies) to coarser-pitch devices (e.g., PCBs). More specifically, fine-pitch leads (such as the leads 208 and 210) are electrically coupled to the die 100. The leads 208 and 210 are coupled to their respective coarse-pitch leads 504 and 506, which in turn are coupled to devices (such as PCBs) that are well suited for coarse-pitch electrical connections.

In many such examples, a package containing the die 100 and the fine-pitch leads 208 and 210 may already be manufactured and commercially available.
In this case, as described above, the leads 504 and 506 are coupled to the leads 208 and 210, and the complete assembly is then ready to be coupled to a device such as a PCB. However, in some cases it may be desirable to design a fine-pitch lead frame and a coarse-pitch lead frame and couple them together before adding any moldings. An example is now provided with respect to FIGS. 10A to 11D.

FIG. 10A depicts a perspective view of the leads in a first set of leads 1000 of a lead frame according to various examples. (As mentioned above, for clarity, the dam bars and related mechanical connections normally present in a lead frame are omitted in these figures.) The set of leads 1000 includes leads 1002, 1004, and 1006. The leads in the set 1000 are fine pitch (for example, the pitch is between 10 microns and 200 microns). The specific configuration of the leads in the set 1000 depends on the configuration of the electrical connections on the die that will be coupled to the leads, and may vary. FIG. 10A and the following figures assume a lead structure similar to the lead structure described above. Similar to the leads 208, 210 described above, the leads 1002, 1004 are arranged in an alternating configuration. The leads 1006 may be arranged in a fan-out configuration to meet the electrical connection spacing requirements of the PCB to be coupled with the leads 1006. In an example, each of the leads 1002, 1004 has a width of about 50 microns and a length of about 2 millimeters. In an example, the leads 1002, 1004, and 1006 are formed from thin (e.g., 10 to 100 microns) foil (such as copper foil) using chemical etching, laser cutting, plasma cutting, or other suitable processes.

FIG. 10B depicts a perspective view of a metal plate 1008 on the leads of the set of leads 1000 according to various examples. The plate 1008 may include any suitable conductive material other than copper and solder, such as nickel, nickel-palladium, nickel-palladium-gold, tin, nickel-tungsten, etc. Copper is avoided so that the plate can be selectively distinguished from the base metal of the lead frame (because the rest of the lead frame is later chemically coated while the contact areas remain uncoated), and solder is avoided to prevent the aforementioned solder-related challenges.

Figure 10C depicts a perspective view of a set of leads 1010 according to various examples. The set 1010 includes leads 1012, 1013, and 1015. Each of the leads 1012, 1013 is thicker than each of the leads 1002, 1004. Similarly, each of the leads 1015 is thicker than each of the leads 1006. In an example, the thickness of the leads 1012, 1013, and 1015 is between 50 micrometers and 250 micrometers. In the example, the leads 1012, 1013, 1015 are made from copper foil using chemical etching, laser cutting, plasma cutting, mechanical stamping, or any other suitable process. In an example, each of the leads 1012, 1013 has a length of approximately 3 millimeters and a width of approximately 2 millimeters. In an example, each of the leads 1015 has a length of approximately 0.45 millimeters and a width of approximately 0.3 millimeters.

FIG. 10D depicts a perspective view of the metal plating layer 1014 deposited on the leads 1012, 1013, 1015 according to various examples. The metal plating material may include any non-copper and non-solder material, such as nickel, nickel-palladium, nickel-palladium-gold, tin, nickel-tungsten, and the like. In the example, the metal plating layer 1014 is deposited in a configuration in which the metal plating layer 1014 is aligned with the metal plating layer 1008 when the leads 1012, 1013, and 1015 are mated with the leads 1002, 1004, and 1006, respectively.

Figures 10E and 10F depict perspective views of the set of leads 1000 and the set of leads 1010, respectively. The set of leads 1000 and the set of leads 1010 are coated with MPTS using dipping or spraying techniques.
MPTS reacts with and coats the copper surfaces, but since the metal plating layers do not include copper, the metal plates are not coated with MPTS. In the set of leads 1000, reference numeral 1016 denotes the area coated by MPTS. In the set of leads 1010, reference numeral 1018 denotes the area coated by MPTS. In an example, the MPTS may be cured, for example, for 2 to 10 minutes at 50 to 60 degrees Celsius. As described below, the cured coating can better withstand the chemical and thermal effects of subsequent electroplating.

FIG. 10G depicts a perspective view of the set of leads 1000 and the set of leads 1010 aligned with each other, and FIG. 10H depicts a side view thereof. As depicted in FIG. 10H, the set of leads 1000 and the set of leads 1010 are aligned such that the metal platings on each set of leads are aligned with each other. In FIGS. 10G and 10H, the metal plates are not in contact with each other. Instead, spacers and clamps or other suitable equipment are used to maintain alignment and close proximity (for example, a distance of 15 microns between the metal plates). The set of leads 1000 and the set of leads 1010 are then placed in an electroplating bath and a current is applied. FIG. 10I depicts a side view of the set of leads 1000 and the set of leads 1010 as the electroplating process causes a plating layer 1020 to grow between the metal plating layer 1008 and the metal plating layer 1014, bridging the gap between them. Figure 10J depicts a perspective view of the assembly shown in Figure 10I. As shown in the perspective view of FIG. 10K, a solvent such as acetone or pyrrolidone can be used to strip the MPTS coating.

FIG. 11A depicts a perspective view of a die 1022 with electrical connectors 1024 extending from the die 1022. Figure 11A also depicts the assembly shown in Figure 10K. The electrical connectors 1024 are aligned with the corresponding leads 1002, 1004, 1006. FIG. 11B is a perspective view of the assembly of FIG. 11A, but with the electrical connectors 1024 electrically coupled to the leads 1002, 1004, 1006 (e.g., using solder). FIG. 11B also depicts the assembly encapsulated in a molded part 1026 (e.g., epoxy), which is depicted as translucent in this figure to facilitate viewing of the interior of the molded part 1026. FIGS. 11C and 11D depict alternative perspective views of the completed package, in which the set of leads 1010 is exposed on one surface of the molded part 1026 (i.e., exposed at one surface of the package) and the set of leads 1000 is exposed on a different surface of the molded part 1026 (i.e., exposed at a different surface of the package). In FIG. 11D, the assembly is upside down, so that the top surface in FIG. 11C is the bottom surface in FIG. 11D.

The term "lead" as used herein is not limited to any specific example or embodiment. For example, referring to FIG. 10I, "lead" may refer to a stack that includes: a conductive member 1010; a non-solder and/or non-copper metal plating layer 1014 stacked on the conductive member 1010; an electroplating layer 1020 stacked on the non-solder and/or non-copper metal plating layer 1014; a non-solder and/or non-copper metal plating layer 1008 stacked on the electroplating layer 1020; and a conductive member 1000 stacked on the non-solder and/or non-copper metal plating layer 1008. As shown in FIG. 10G, the conductive member 1000 may extend in multiple directions in a common plane, for example, to achieve a fan-out configuration. However, this need not be the case, because the term "lead" may refer to the conductive member stack depicted on the left side of FIG. 10I or the conductive member stack depicted on the right side of FIG. 10I.

FIG. 12 depicts a flowchart of a manufacturing method 1200 according to various examples. The method 1200 can be used to manufacture one or more of the aforementioned devices, or at least a part of one or more of the aforementioned devices.
The method 1200 begins by coupling a first set of leads to a plurality of electrical connectors extending from the surface of the die, where the first set of leads has leads arranged in multiple rows (step 1202). The method 1200 then includes selectively plating a second set of leads with a non-solder metal to produce a metal plate, the second set of leads having multiple portions and being thicker than the first set of leads (step 1204). The method 1200 then includes coupling the metal plates of the second set of leads to the multiple rows using non-solder metal (step 1206). The method 1200 additionally includes applying a molding to at least partially encapsulate the first set of leads and the second set of leads (step 1208). The steps of the method 1200 may be performed in any suitable order, and the method may be modified to add, modify, or remove one or more steps.

FIG. 13 depicts a flowchart of a manufacturing method 1300 according to various examples. The method 1300 may be used to manufacture one or more of the aforementioned devices, or at least a part of one or more of the aforementioned devices. The method 1300 begins by providing a first set of leads (step 1302). The method 1300 includes forming a first non-solder metal plate on the first set of leads (step 1304). The method 1300 includes providing a second set of leads, wherein the spacing between the leads in the second set of leads is not as fine as the spacing between the leads in the first set of leads, and wherein the leads in the second set of leads are thicker than the leads in the first set of leads (step 1306). The method 1300 next includes forming a second non-solder metal plate on the second set of leads (step 1308). The method 1300 further includes coating the first set of leads and the second set of leads with a non-conductive material, the first non-solder metal plate and the second non-solder metal plate remaining exposed after the coating is completed (step 1310).
The method 1300 also includes forming a non-solder metal connection between the first non-solder metal plate and the second non-solder metal plate using electroplating techniques (step 1312). The method 1300 includes removing the coating from the first set of leads and the second set of leads (step 1314). The steps of the method 1300 may be performed in any suitable order, and the method may be modified by adding, modifying, or removing one or more steps.

FIG. 14 depicts a flowchart of a manufacturing method 1400 according to various examples. The method 1400 may be used to manufacture one or more of the aforementioned devices, or at least a part of one or more of the aforementioned devices. The method 1400 begins by providing a chip package that includes a die coupled to a first plurality of leads, the first plurality of leads being exposed on the surface of the chip package (step 1402). The method 1400 then includes providing a second plurality of leads, the leads of the second plurality of leads being thicker than the leads of the first plurality of leads, and the first plurality of leads having a finer pitch than the second plurality of leads (step 1404). The method 1400 then includes forming a metal plate on the second plurality of leads (step 1406) and coating the second plurality of leads with a non-conductive material (step 1408). The method 1400 then includes coupling the metal plate of the second plurality of leads to the first plurality of leads using an electroplating process (step 1410). The method 1400 then includes removing the coating from the second plurality of leads (step 1412). The method 1400 includes encapsulating the second plurality of leads with a molding (step 1414). The steps of the method 1400 may be performed in any suitable order, and the method 1400 may be modified to add, modify, or remove one or more steps.

The various examples disclosed herein provide advantages in addition to the advantages described above.
For example, the different lead pitches described above enable the use of dies with finer features than before in a power-device environment. In addition, devices with multiple sets of leads can draw on a wider variety of design rules, lead thicknesses, materials, and suppliers in the supply chain. These advantages can in turn yield others; for example, the increased flexibility in design rules can reduce manufacturing costs relative to the costs that would otherwise be incurred. The various examples disclosed may provide other advantages not explicitly described herein.

In the foregoing discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and therefore should be interpreted to mean "including, but not limited to...". Likewise, the term "coupled" is intended to mean an indirect or direct connection. Thus, if a first device is coupled to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections. Similarly, a device coupled between a first component or location and a second component or location may be coupled through a direct connection or through an indirect connection via other devices and connections. An element or feature that is "configured" to perform a task or function may be configured by the manufacturer at the time of manufacturing (for example, by programming or structural design) to perform the function, and/or may be configurable (or reconfigurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through the construction and/or layout of hardware components and interconnections of the device, or a combination thereof.
In addition, in the foregoing discussion, the use of the phrase "ground" or similar terms is intended to include chassis ground, earth ground, floating ground, virtual ground, digital ground, common ground, and/or any other form of ground connection applicable to, or suitable for, the teachings of the present disclosure. Unless otherwise stated, "about", "approximately", or "substantially" preceding a value means +/- 10% of the stated value.

The above discussion is intended to illustrate the principles and various embodiments of the present disclosure. Numerous changes and modifications will become apparent to those skilled in the art once the above disclosure is fully understood. It is intended that the following claims be interpreted to embrace all such changes and modifications.
PROBLEM TO BE SOLVED: To provide binary compatibility between DSPs of different generations.

SOLUTION: An embodiment of sub-pipeline translation providing binary compatibility between current-generation and future-generation DSPs is disclosed. When a fetch packet is retrieved from memory ("instruction memory"), an operation mode (base instruction set or migrant instruction set) is assigned to the whole fetch packet in accordance with the execution mode ("execution mode") in force at the time the request for the fetch packet is submitted to the instruction memory. The fetch packet from the instruction memory is parsed into execute packets and sorted (dispatched) by execution unit in the data path ("shared data path") that is shared by both execution modes (base and migrant).
A sub-pipeline translation structure for providing binary compatibility between a base architecture and a migrant architecture of a VLIW architecture, comprising: a VLIW architecture composed of the base architecture and the migrant architecture and having a base execution mode and a migrant execution mode; a fetch packet retrieved from memory and having an operation mode that depends on the execution mode at the time the request for the fetch packet is made to the memory; a data path, shared by both the base and migrant architectures, that parses the base-architecture-mode and migrant-architecture-mode fetch packets into execute packets and dispatches the base execute packets to the appropriate base architecture decode of the execution hardware; a migrant architecture control circuit that dispatches execute packet instructions having the migrant execution mode to the migrant architecture decode; execution hardware that executes the execute packet instructions in its execution units and that has a base architecture decode and a migrant architecture decode for decoding the base architecture instructions and the migrant architecture instructions, respectively, prior to execution, depending on the execution mode of the fetch packet of the instruction being decoded; and a multiplexer having at least two inputs and one machine word output, one input being the output of the migrant architecture decode and the other input being the output of the base architecture decode, the multiplexer selecting between them depending on the operation mode of the fetch packet, the machine word controlling the units of the execution hardware.

A method of providing binary compatibility between a base architecture and a migrant architecture of a VLIW architecture, comprising: executing a base execution mode and a migrant execution mode of the VLIW architecture, respectively; providing a fetch packet retrieved from the memory, the fetch packet having an operation mode that depends on the execution mode at the time the request for the fetch packet is made to the memory; parsing the base-architecture-mode and migrant-architecture-mode fetch packets into execute packets and dispatching the base execute packets, on a data path shared by both the base and migrant architectures, to the appropriate base architecture decode of the execution hardware; dispatching execute packet instructions having the migrant execution mode to the migrant architecture decode in a migrant architecture control circuit; executing the execute packet instructions in the execution units of the execution hardware, the execution hardware having a base architecture decode and a migrant architecture decode that decode the base architecture instructions and the migrant architecture instructions, respectively, before execution, depending on the execution mode of the fetch packet of the decoded instruction; selecting, in a multiplexer with one machine word output, between the output of the migrant architecture decode and the output of the base architecture decode, depending on the operation mode of the fetch packet; and controlling the units of the execution hardware with the machine word.
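The multiplexer selection recited in the claims above (together with the optional no-op third input disclosed later in this specification) can be sketched behaviorally. This is a minimal illustrative software model, not the hardware; it assumes that sub-pipeline control forces no-op machine words on the padding cycles between migrant issues, and every name in it is hypothetical:

```python
# Behavioral sketch of the final machine-word multiplexer: pick between the
# base decode output, the migrant decode output, and a NOP machine word,
# depending on the packet's operation mode and the sub-pipeline sequencer.
# All identifiers are illustrative, not taken from the disclosed hardware.

NOP_WORD = "mw_nop"

def select_machine_word(mode, issue_slot, base_word, migrant_word):
    """mode: 'base' or 'migrant'. issue_slot counts cycles within a
    sub-pipelined group; only slot 0 issues a real migrant instruction,
    the remaining (S - 1) slots are padding forced to NOP."""
    if mode == "base":
        return base_word
    if issue_slot == 0:          # migrant instruction issues this cycle
        return migrant_word
    return NOP_WORD              # padding cycle from sub-pipeline control

assert select_machine_word("base", 3, "mw_add", "mw_mpy") == "mw_add"
assert select_machine_word("migrant", 0, "mw_add", "mw_mpy") == "mw_mpy"
assert select_machine_word("migrant", 2, "mw_add", "mw_mpy") == "mw_nop"
```

The selected machine word would then drive the global register file and the execution units, as the claims recite.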
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates generally to instruction set architectures, and more specifically to sub-pipelined and pipelined instruction execution in VLIW processors.

2. Description of the Related Art

Conventionally, a VLIW (Very Long Instruction Word) processor is defined by the following group of attributes: the ability to specify multiple independent operations in each instruction (MultiOp instructions). The VLIW architecture is a horizontal machine, where each wide instruction word, or MultiOp, consists of several operations, or Ops. All Ops within a MultiOp issue on the same execution schedule. The program assumes particular, possibly non-unit, latencies for operations and is, in fact, correct only when these assumptions hold. The static compile-time schedule must take into account operation latencies and the availability of resources. This requires that the hardware exactly match the assumptions built into the program regarding the number of functional units and the operation latencies. Despite the large number of pipelined operations issued per cycle, there is no interlock hardware.

The natural attraction of these architectures is the ability to exploit large amounts of instruction-level parallelism (ILP) using relatively simple and inexpensive control hardware. A number of VLIW products [4, 5, 3] have been built that can issue six or more operations per cycle, whereas it has not been demonstrated that a superscalar product with these levels of ILP can be constructed [18, 2, 14, 8, 7, 6]. In addition, the full exposure of the available hardware resources and the correct operation latencies to the compiler enables highly optimized scheduling. From these very properties, however, arose the view that the VLIW processor is of limited interest as a product.
The rigorous assumptions about the hardware that are built into the program are considered to prevent object code compatibility between processors that are built at different times, using different technologies, and therefore with different latencies. Even with a single processor, the need for the compiler to schedule to a fixed latency at compile time is a problem for load-like operations, whose latency can vary significantly depending on whether a cache hit or miss occurs. Because of these problems, VLIW products rarely adhere to the ideal of no interlock hardware at all. Instead, when a VLIW architecture is implemented, processor interlocks and stalls are common when loads take longer than expected.

The conventional belief is that dynamic scheduling is not applicable to VLIW processors. The first step toward understanding how to implement dynamic scheduling in a VLIW processor is to recognize the conceptual difference between a VLIW processor and a VLIW architecture. A VLIW processor is defined by a particular set of resources (functional units, buses, etc.) and particular execution latencies for the various operations. When a program for a VLIW processor is compiled and scheduled with these resources and latencies in mind, it can be executed on this processor without any special control logic for managing instruction-level parallelism. Conversely, a VLIW processor that has no special control logic can only correctly execute programs compiled with the correct resource and latency assumptions. Because VLIW processors have conventionally been built without any special control logic, it has been concluded that VLIW processors must be designed this way.

A different view of VLIW is as an architecture, i.e., as a contractual interface between the programs written for that architecture and the processors that implement this architecture.
The usual view is that this contract concerns the instruction format and the interpretation of the bits that make up an instruction. But the contract goes further, and it is that further aspect of the contract that matters most in this application. First, through its MultiOp capability, the VLIW architecture identifies operations that are guaranteed to be independent of one another (and that can therefore issue simultaneously without any checking by the issue hardware). Second, the assertions regarding operation latencies specify how the program should be interpreted in order to correctly understand the dependences between operations. For sequential architectures, most latencies are assumed by the programmer to be one cycle. Therefore, the input operands of an operation appear to the programmer to be determined by all operations issued (and, in most cases, completed) prior to the operation in question. This is not strictly true, since there are sequential architectures, such as SPARC, in which some instructions (branches with delay slots) have non-unit latency.

In a program for a VLIW architecture, when operations have non-unit latencies, the input operands of an operation are not determined by all operations issued before the operation in question, but only by those operations that are assumed to be complete before the operation in question issues. Operations that issued earlier but are not yet assumed to be complete impose no flow dependence on the operation in question. A program has unit assumed latencies (UAL) if the semantics of the program are correctly understood by assuming that all the operations in one instruction complete before the next instruction issues.
A program has non-unit assumed latencies (NUAL) if at least one operation has an assumed latency L greater than one; that is, the semantics of the program are correctly understood by assuming that exactly the next (L-1) instructions are issued before this operation completes. An architecture is UAL (NUAL) if the type of program it is supposed to execute is UAL (NUAL). In this specification, the term NUAL program is used interchangeably with the term latency-aware program.

The VLIW (Very Long Instruction Word) processor is seen as an attractive way to achieve instruction-level parallelism because it can issue many operations per cycle using relatively simple control logic. While the VLIW architecture offers the advantages of design simplicity and high issue rate, the main obstacle to adopting VLIW and other new ILP architectures is their incompatibility with the existing software base. The lack of object code compatibility in the VLIW architecture between processors with different hardware latencies and varying degrees of parallelism is an important limitation to adopting them as a general-purpose computing paradigm: an installed binary software base cannot be carried across a family of generations. The economic implications of this problem are enormous, and efficient solutions are needed for the success of the VLIW architecture. Two types of solutions to this problem have been reported in the literature: hardware and software. The hardware schemes can achieve compatibility, but at the expense of hardware complexity, which can affect cycle time. A typical software scheme is to statically recompile VLIW programs from object files. This approach yields a number of executables, which presents difficulties for commercial copy protection and system management.
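The UAL/NUAL latency contract described above can be illustrated with a small model: under NUAL semantics, an operation's result is only guaranteed visible to instructions issued at least L cycles later. This is an illustrative sketch, not part of the disclosed architecture, and all names in it are hypothetical:

```python
# Sketch of the assumed-latency contract. A producer issued at cycle t with
# assumed latency L is only guaranteed visible to consumers issued at cycle
# t + L or later; earlier consumers incur no flow dependence on it.

def result_visible(producer_issue, assumed_latency, consumer_issue):
    """True if a consumer issued at consumer_issue may rely on the result of
    a producer issued at producer_issue with the given assumed latency."""
    return consumer_issue >= producer_issue + assumed_latency

# UAL: every latency is 1, so any later instruction sees the result.
assert result_visible(0, 1, 1)

# NUAL: a load with assumed latency 4 is not visible to the next 3 instructions.
assert not result_visible(0, 4, 2)
assert result_visible(0, 4, 4)
```

A compile-time scheduler for a NUAL architecture must respect exactly this visibility rule, which is why latency changes between hardware generations break the schedule.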
For example, if a first-generation machine has a certain latency for each functional unit and a second-generation VLIW machine has a different latency for the same functional unit, the flow dependences between operations may differ, and the old VLIW schedule cannot be guaranteed to run correctly on the second-generation machine. The same kind of problem arises when the second-generation machine has additional functional units. Even though the latencies remain the same, code scheduled for the new machine will not execute correctly on the old machine, because the scheduler will try to exploit, for example, an additional multiplier, and there is no easy way to adapt such a schedule to the older machine. This is the case of backward incompatibility between generations. In this case, when machines of different generations share binaries, compatibility requires either a mechanism for adjusting the schedule or a different set of binaries for each generation.

IBM describes a hardware feature for an ILP machine called DAISY (Dynamically Architected Instruction Set from Yorktown). DAISY is specifically intended to emulate existing architectures, so that all existing software for the old architecture (including operating system kernel code) operates unchanged on the VLIW architecture. Each time a new fragment of code is executed for the first time, a virtual machine monitor (software) kept in fixed memory translates the code into VLIW primitives, parallelizes it, and stores it in a part of main memory invisible to the old architecture. When the same fragment is executed thereafter, no translation is required (unless the translation has been discarded). The limitation of this hardware scheme is that the scope of the schedule is limited to the window of Ops seen at run time, so the ILP that can be exploited is less than what a compiler can develop.
Such a method may also cause an increase in cycle time, which is one reason many consider the VLIW paradigm superior to superscalar as a machine paradigm for future generations.

An instruction set architecture is a description of a computer architecture at the level a programmer can observe; the programmer's model of the machine is a similar term. In an exposed-pipeline architecture, the delays associated with pipelined execution of instructions are visible in the instruction set architecture and can be used to improve computational bandwidth.

Another approach to solving the compatibility problem involves moving software to a new machine architecture. In such applications, the native instruction set architecture of the processor is called the base architecture, while any instruction set architecture other than the base architecture that is supported by that processor (for example, an older architecture that a VLIW mimics) is called a migrant architecture.

Code compatibility between current and future generations of exposed-pipeline VLIW DSPs is an example of the compatibility contemplated by the present invention. For example, the TI C6000 DSP and the TI 64-bit C6000 DSP extension are current and future architectures. The TI 64-bit C6000 DSP architecture features modifications to the ISA-visible pipeline and other architectural features for higher operating frequencies. These changes compromise the requirements for binary compatibility, and the present invention constitutes a robust code-migration path, as will now be described.

An embodiment of sub-pipeline translation for providing binary compatibility between a current-generation DSP and a future-generation DSP will now be disclosed.
When a fetch packet is retrieved from memory, an operation mode (base instruction set or migrant instruction set) is assigned to the entire fetch packet according to the execution mode at the time the request for the fetch packet is made to the instruction memory. Fetch packets from the instruction memory are parsed into execute packets and sorted (dispatched) by execution unit in the data path shared by both execution modes (base and migrant). In this case, the two execution modes have separate control logic, because the syntax of the fetch packet and the designation of the execution unit differ between the migrant and base architectures. Instructions from the dispatch data path are decoded by either the base architecture decode logic or the migrant architecture decode logic, depending on the execution mode bound to the parent fetch packet of the instruction being decoded. The code processed by the migrant and base decode pipelines generates machine words that control the register file and the execution hardware functional units. These machine words are selected using a multiplexer. The final machine word selection from the multiplexer depends on the operation mode bound to the fetch packet that generated the machine word and on the sequential logic for sub-pipelined execution. The selected machine word controls the global register file, which supplies the operands for all hardware execution units and can accept the results of all hardware execution units.

DETAILED DESCRIPTION

Sub-pipelined execution is a hardware-efficient method for executing code from a migrant exposed-pipeline architecture. In this method, the base architecture is designed with instruction latencies that are a fixed multiple (denoted by S) of the instruction latencies of the desired migrant architecture.
Using this relationship between the migrant and base architectures, code from the migrant architecture can be executed on the base architecture by delaying the issue of each migrant instruction by (S-1) clock cycles. In addition to providing a sub-pipelined execution mode, this provides the facility to switch between the base and migrant instruction sets with low overhead.

Given the code shown in FIG. 1, the base architecture provides four times the latency of the migrant architecture instructions for all instructions, as shown in the table below.

[Table 1]

The code can be rescheduled on the base architecture, as shown in FIG. 2. As shown, code operating in sub-pipelined execution mode does not match the performance attainable with the base architecture instruction set, because of the NOPs required to make up for the latency differences between the base and migrant code. Depending on performance requirements and other characteristics of the code, two automated solutions can be used to enhance the performance of code from the migrant architecture: first, the code from the migrant architecture can be re-linked against a library written in the base architecture; second, an off-line binary translation scheme can be used to convert the migrant architecture code to base architecture code.

In the case of migrant code with a base library, the performance gain relative to sub-pipelined execution is proportional to the fraction of execution time spent in the base library and to the performance of the library routines in the base ISA. The amount of time spent in libraries varies greatly with the application, but with the 32-bit C6000 DSP instruction set it is not uncommon for more than 50% of an application's execution time to be spent in libraries.

Off-line binary translation takes binary code from the migrant architecture, disassembles it, and translates it to the base architecture. This translation process can be implemented in several ways.
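As a rough illustration of the sub-pipelined execution mode described above, the following sketch expands a migrant schedule by delaying each issue by (S-1) cycles, with S = 4 as in the FIG. 1 example. The instruction names are illustrative only, and this software model is a sketch of the timing relationship, not of the hardware:

```python
# Sketch of sub-pipelined issue with scale factor S = 4: each migrant
# execute packet is followed by (S - 1) NOP cycles, so that base latencies
# (S times the migrant latencies) line up with the schedule the migrant
# compiler assumed.

S = 4  # base latency / migrant latency, fixed by the architecture design

def subpipeline(migrant_schedule):
    """Expand a migrant schedule (one execute packet per cycle) into a
    base-architecture schedule by inserting S-1 NOP cycles per packet."""
    base_schedule = []
    for packet in migrant_schedule:
        base_schedule.append(packet)
        base_schedule.extend(["NOP"] * (S - 1))
    return base_schedule

code = ["LDW", "MPY", "ADD"]
expanded = subpipeline(code)
assert len(expanded) == len(code) * S
assert expanded[:5] == ["LDW", "NOP", "NOP", "NOP", "MPY"]
```

The padding NOPs are exactly the inefficiency that FIG. 2 illustrates and that re-linking against a base library or off-line translation is meant to recover.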
Using the sub-pipelined execution mode simplifies the translation: code that is judged difficult to translate, or that cannot be translated, is simply executed in sub-pipelined mode. This allows the off-line translation to be deployed incrementally; as the judgment of what is difficult changes over time, the proportion of code executed in sub-pipelined mode changes accordingly.

The solutions from IBM, HP, and North Carolina State University mentioned earlier in "Prior Art and Challenges", which support compatibility between several generations of VLIW processors, can support a migrant architecture on a VLIW-based architecture, but they are not suitable solutions for DSPs. In particular, the dynamic translation schemes described in the IBM and NCSU studies do not provide the run-time predictability needed for real-time DSP applications. The method described in the HP study is complex and has a deterministic run time, but at the cost of considerable hardware: issue-delay instruction buffers, delay register files, and copy-back units.

Studies have also been published relating to the translation of CISC to superscalar RISC. A recent version of this same work uses static translation combined with a software simulation of the migrant architecture running on the base architecture. In the present invention, all the functions of the software simulation of the migrant architecture are replaced by hardware execution in sub-pipelined mode.

FIG. 3 shows a sub-pipeline translation embodiment according to the preferred embodiment of the present invention. In this invention, code is retrieved from memory. The instruction memory can consist of a directly addressed RAM or a cache. The code in the instruction memory appears to the program to be in either the migrant instruction set or the base instruction set. In one configuration, the code in the instruction memory can be pre-decoded to facilitate subsequent processing of the instructions.
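The per-packet mode assignment summarized in the overview above (the operation mode is latched for the whole fetch packet at the moment the request is issued to the instruction memory) can be sketched as follows. This is a minimal illustrative model, not the actual control logic, and all classes and fields in it are hypothetical:

```python
# Sketch: the operation mode (base or migrant) is captured per fetch packet
# when the request is issued to instruction memory, not at decode time, so
# a mode switch takes effect cleanly on a packet boundary.

BASE, MIGRANT = "base", "migrant"

class FetchUnit:
    def __init__(self):
        self.execution_mode = BASE  # current execution mode

    def request(self, address, memory):
        # Tag the whole packet with the mode in force when the request is made.
        return {"mode": self.execution_mode, "packet": memory[address]}

imem = {0: ["ADD", "MPY"], 1: ["LDW", "STW"]}
fu = FetchUnit()
p0 = fu.request(0, imem)
fu.execution_mode = MIGRANT     # mode switch between the two requests
p1 = fu.request(1, imem)
assert p0["mode"] == BASE and p1["mode"] == MIGRANT
```

Because the mode travels with the packet, packets fetched before and after a mode switch can coexist in the pipeline, which is what allows the quick mode switching described below.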
A group of instructions retrieved simultaneously from memory is called a fetch packet. The fetch packet is assigned an operation mode (base instruction set or migrant instruction set) according to the execution mode at the time the request for the fetch packet is issued to the instruction memory. As a result, the mode can be switched quickly.

Fetch packets from the instruction memory are parsed into execute packets and sorted (dispatched) by execution unit in the data path shared by both execution modes (base and migrant). The two execution modes have separate control logic, because the syntax of the fetch packet and the designation of the execution unit differ between the migrant and base architectures.

Instructions from the dispatch data path are decoded by the base architecture decode logic or by the migrant architecture decode logic, depending on the execution mode bound to the parent fetch packet of the instruction being decoded. In the case of an exposed-pipeline VLIW instruction set, the decode logic for the base and migrant architectures primarily translates the opcodes into the control signals necessary to execute the specified instructions in the execution hardware functional units. Because of the relationship defined in the present invention between the latencies of the migrant and base instruction sets (a base operation has a fixed multiple of the latency of a migrant operation) and the exposed-pipeline character of such instruction sets, no instruction decode scheme requiring knowledge of pipeline depth, instruction grouping, or instruction dependences is needed. This results in less hardware and less complexity in the instruction decode logic.

The code processed in the migrant and base decode pipelines generates the machine words that control the register file and the execution hardware functional units.
These machine words are selected using a multiplexer, which can also select a third, no-op instruction. The selection of the final machine word from among the three possibilities depends on the operation mode bound to the fetch packet that generated the machine word and on the sequential logic for sub-pipelined execution.

In a first preferred embodiment of the present invention, the selected machine word controls a global register file that supplies the operands for all hardware execution units and can receive the results of all hardware execution units. Two clock cycles later in the hardware pipeline, the selected machine word controls the local register files that supply operands to either the local execution hardware functional unit or an adjacent hardware execution functional unit. Finally, the selected machine word controls the various forms of execution hardware that evaluate functions on the operands and produce the results.

With regard to the above description, the following items are further disclosed.

(1) A sub-pipeline translation structure that provides binary compatibility between a base architecture and a migrant architecture of a VLIW architecture, comprising: a VLIW architecture composed of the base architecture and the migrant architecture and having a base execution mode and a migrant execution mode; a fetch packet that is retrieved from memory and has an operation mode that depends on the execution mode at the time the request for the fetch packet is made to the memory; a data path, shared by both the base and migrant architectures, that parses the base-architecture-mode and migrant-architecture-mode fetch packets into execute packets and dispatches the base execute packets to the appropriate base architecture decode of the execution hardware; a migrant architecture control circuit that dispatches execute packet instructions having the migrant execution mode to the migrant architecture decode; execution hardware that executes the execute packet instructions in its execution units and has a base architecture decode and a migrant architecture decode for decoding the base architecture instructions and the migrant architecture instructions, respectively, prior to execution, depending on the execution mode of the fetch packet of the instruction being decoded; and a multiplexer having at least two inputs and one machine word output, one input being the output of the migrant architecture decode and the other input being the output of the base architecture decode, the multiplexer selecting between them depending on the operation mode of the fetch packet, the machine word controlling the units of the execution hardware.

(2) The sub-pipeline translation structure of item 1, further including a third input to the multiplexer, the third input being a no-op instruction.

(3) The sub-pipeline translation structure of item 1, wherein the machine word controls a global register file, the global register file supplying operands to all hardware execution units and being able to accept the results of all hardware execution units.

(4) The sub-pipeline translation structure of item 3, wherein, after the machine word controls the global register file, the machine word controls a local register file that supplies operands to either a local execution hardware functional unit or an adjacent hardware execution functional unit.

(5) The sub-pipeline translation structure of item 4, wherein the machine word controls various forms of execution hardware that evaluate functions on operands, the machine word producing the results of the hardware execution units after controlling the local register file.

(6) The sub-pipeline translation structure of item 1, wherein the base and migrant architecture decode units translate the opcodes into the control signals required to execute the instructions specified for the functional units of the execution hardware.

(7) The sub-pipeline translation structure of item 1, wherein the migrant architecture control circuit further issues no-op instructions to preserve the semantics of the instructions in the migrant architecture.

(8) A method of providing binary compatibility between a base architecture and a migrant architecture of a VLIW architecture, comprising: executing a base execution mode and a migrant execution mode of the VLIW architecture, respectively; providing a fetch packet retrieved from memory, the fetch packet having an operation mode that depends on the execution mode at the time the request for the fetch packet is made to the memory; parsing the base-architecture-mode and migrant-architecture-mode fetch packets into execute packets and dispatching the base execute packets, on a data path shared by both the base and migrant architectures, to the appropriate base architecture decode of the execution hardware; dispatching execute packet instructions having the migrant execution mode to the migrant architecture decode in a migrant architecture control circuit; executing the execute packet instructions in the execution units of the execution hardware, the execution hardware having a base architecture decode and a migrant architecture decode that decode the base architecture instructions and the migrant architecture instructions, respectively, before execution, depending on the execution mode of the fetch packet of the decoded instruction; selecting, in a multiplexer with one machine word output, between the output of the migrant architecture decode and the output of the base architecture decode, depending on the operation mode of the fetch packet; and controlling the units of the execution hardware using the machine word.

(9) The method of item 8, further comprising selecting among the output of the migrant architecture decode, the output of the base architecture decode, and a no-op instruction.

(10) The method of item 8, further comprising controlling a register using the machine word.

(11) The method of item 8, further comprising controlling a global register file using the machine word, the global register file supplying operands for all hardware execution units and being able to accept the results of all hardware execution units.

(12) The method of item 11, further comprising, after the step of controlling the global register file, controlling a local register file that supplies operands to either a local execution hardware functional unit or an adjacent hardware execution unit.

(13) The method of item 12, further comprising controlling, after controlling the local register file, various forms of execution hardware that evaluate functions on operands, thereby producing the results of the hardware execution units.

(14) The method of item 8, further comprising, in the base and migrant architecture decode units, translating the opcodes into the control signals required to execute the instructions specified for the execution hardware functional units.
(15) The method according to item (8), further comprising issuing a no-op instruction from the migrant architecture control circuit to protect the semantics of the instructions in the migrant architecture. (16) An embodiment of a sub-pipeline conversion that provides binary compatibility between current-generation and future-generation DSPs has been disclosed. When a fetch packet is fetched from memory ("instruction memory"), the entire fetch packet is assigned an operation mode (base instruction set or migrant instruction set) according to the execution mode ("execution mode") in force at the time the request for the fetch packet was issued to the instruction memory. Fetch packets from the instruction memory are parsed into execute packets and dispatched to the execution units in the data path shared by both execution modes, base and migrant (the "shared data path"). Because the syntax of the fetch packet and the encoding for the execution units differ between the migrant and base architectures, the two execution modes have different control logic ("base architecture control", "migrant architecture control"). Instructions from the dispatch data path are decoded by either the base architecture decode logic or the migrant architecture decode logic, depending on the execution mode that constrains the fetch packet that is the parent of the decoded instruction. The code processed by the migrant and base decoding pipelines generates machine words ("machine words") that control the register file and the execution hardware functional units. These machine words are selected using a multiplexer. The final machine-word selection from the multiplexer depends on the operation mode that constrains the fetch packet that generated the machine word and on the sequential logic for sub-pipeline execution ("sub-pipeline control").
The selected machine word controls the global register file, which supplies the operands for all hardware execution units and accepts the results of all hardware execution units.

Brief description of the drawings
Figure 1 shows an example of migrant code for a sub-pipeline-execution migrant architecture whose instruction latency is 1/4 of the instruction latency of the base architecture.
Figure 2 illustrates the re-scheduling of the migrant code operated in sub-pipeline execution mode after conversion to base architecture code, showing the inefficiency of the conversion.
Figure 3 is a block diagram showing an embodiment of the sub-pipeline conversion according to the preferred embodiment of the present invention.

Explanation of symbols
1 Sub-pipeline conversion structure
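As an illustration of the machine-word selection described above, the sketch below models the final multiplexer: each fetch packet carries the execution mode that was in force when it was requested, and that mode (together with sub-pipeline control, which may force a no-op to protect migrant semantics) selects between the base decode output and the migrant decode output. All function and string names here are hypothetical, and the real selection is combinational hardware, not software.

```python
# Hypothetical software model of the decode-output multiplexer; names are
# illustrative only and do not come from the patent text.

NOP = "nop_machine_word"

def base_decode(insn):
    # Stand-in for the base architecture decode pipeline.
    return f"base:{insn}"

def migrant_decode(insn):
    # Stand-in for the migrant architecture decode pipeline.
    return f"migrant:{insn}"

def select_machine_word(insn, fetch_packet_mode, subpipeline_stall=False):
    """The choice depends on the mode that constrained the parent fetch
    packet and on sub-pipeline control, which may insert a no-op."""
    if subpipeline_stall:
        return NOP
    if fetch_packet_mode == "migrant":
        return migrant_decode(insn)
    return base_decode(insn)
```

The selected machine word would then drive the global register file and the execution hardware functional units, as the description states.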
A network switch includes a data bus, a register, an endpoint controller and a direct memory access controller. The endpoint controller is configured to receive a descriptor generated by a device driver of a host system, store the descriptor in the register, and transfer data between a root complex controller of the host system and the data bus. The descriptor identifies an address of a buffer in a memory of the host system. The direct memory access controller is configured to receive the address of the buffer from the endpoint controller or the register and, based on the address and an indication generated by the device driver, independently control transfer of the data between the memory of the host system and a network device connected to the network switch. The direct memory access controller is a receive direct memory access controller or a transmit direct memory access controller.
1. A network switch, comprising:
a data bus;
a register;
an endpoint controller configured to receive a descriptor generated by a device driver of a host system, store the descriptor in the register, and transfer data between a root complex controller of the host system and the data bus, wherein the descriptor identifies an address of a buffer in a memory of the host system; and
a direct memory access controller configured to receive the address of the buffer from the endpoint controller or the register and, based on the address and an indication generated by the device driver, independently control the transfer of the data between the memory of the host system and a network device connected to the network switch, wherein the direct memory access controller is a receive direct memory access controller or a transmit direct memory access controller.
2. The network switch according to claim 1, wherein the endpoint controller is a Peripheral Component Interconnect Express device that transmits the data according to the Peripheral Component Interconnect Express protocol.
3. The network switch according to claim 1, wherein the indication is a flag, an interrupt, or a signal, and the flag is stored in the memory.
4. The network switch according to claim 1, further comprising:
a media access control device configured to transfer the data to or from the direct memory access controller; and
an Ethernet switch configured to transfer the data between the media access control device and the network device connected to the network switch.
5. The network switch according to claim 1, wherein the network device is a sensor, an actuator, a Peripheral Component Interconnect Express device, or an endpoint device.
6. The network switch according to claim 1, further comprising a media access control device, wherein, while independently controlling the transfer of the data, the direct memory access controller is configured to transfer the data between the data bus and the media access control device without interaction with a host controller of the host system.
7. The network switch according to claim 1, wherein the direct memory access controller is configured to obtain control of the buffer of the memory from the device driver before the transfer of the data and, after transferring the data, generate an interrupt to return the control of the buffer to the device driver.
8. The network switch according to claim 1, further comprising another controller configured to receive a rule stored in the memory and, based on the rule, check a frame received from the network device at the network switch, and discard the frame or forward the frame to the device driver, an application controller of the host system, or a denial of service controller of the host system.
9. A data transfer system, comprising:
the network switch according to claim 1;
the memory;
a host controller that implements the device driver; and
the root complex controller, configured to provide the host controller and the direct memory access controller with access to the memory.
10. The data transfer system according to claim 9, wherein the device driver is configured to transfer control of the buffer to the direct memory access controller, and the direct memory access controller is configured to return the control of the buffer to the device driver.
11. The data transfer system according to claim 9, wherein the root complex controller is configured to control the transfer of control information between the device driver and the memory.
12.
The data transfer system according to claim 9, wherein the root complex controller and the endpoint controller are Peripheral Component Interconnect Express devices that operate according to the Peripheral Component Interconnect Express protocol.
13. The data transfer system according to claim 9, further comprising a denial of service controller configured to receive a frame from the network switch, determine whether the frame may be associated with an attack, change the rule stored in the memory, and send the changed rule to the network switch to drop another frame or a connection with the network device.
14. A method of operating a network switch, the method comprising:
receiving, at an endpoint controller of the network switch, a descriptor generated by a device driver of a host system, wherein the descriptor identifies an address of a buffer in a memory of the host system;
storing the descriptor in a register;
transferring data between a root complex controller of the host system and a data bus of the network switch;
receiving the address of the buffer from the endpoint controller or the register at a direct memory access controller, wherein the direct memory access controller is a receive direct memory access controller or a transmit direct memory access controller; and
based on the address and an indication generated by the device driver, independently controlling the transfer of the data between the memory of the host system and a network device connected to the network switch.
15. The method according to claim 14, further comprising transmitting the data according to the Peripheral Component Interconnect Express protocol via the endpoint controller.
16. The method according to claim 14, further comprising:
transferring the data to or from the direct memory access controller via a media access control device; and
transferring the data between the media access control device and the network device connected to the network switch via an Ethernet switch.
17. The method according to claim 14, further comprising, while independently controlling the transfer of the data, transferring the data between the data bus and a media access control device via the direct memory access controller without interaction with a host controller of the host system.
18. The method according to claim 14, further comprising:
before the transfer of the data, obtaining control of the buffer of the memory from the device driver at the direct memory access controller; and
after transferring the data, generating an interrupt to return control of the buffer to the device driver.
19. The method according to claim 14, further comprising:
receiving the rule stored in the memory; and
based on the rule, checking a frame received from the network device at the network switch, and discarding the frame or forwarding the frame to the device driver, an application controller of the host system, or a denial of service controller of the host system.
20. The method according to claim 14, further comprising:
receiving a frame from the network switch;
determining whether the frame may be associated with an attack;
changing the rule stored in the memory; and
sending the changed rule to the network switch to drop another frame or a connection with the network device.
Network switch with endpoint and direct memory access controllers for in-vehicle data transfer

Cross-references to related applications
This application claims priority to U.S. Patent Application No. 16/697,361, filed on November 27, 2019, which claims the benefit of U.S. Provisional Application No. 62/772,506, filed on November 28, 2018. The entire disclosures of the applications cited above are incorporated herein by reference.

Technical field
The present disclosure relates to data transfer between devices in a vehicle, and more specifically to an automotive Ethernet switch device for transmitting sensor data to a host controller in the vehicle.

Background technique
The background description provided herein is for the purpose of generally presenting the context of the present disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

Automotive applications such as autonomous vehicles continue to increase the demand for high-bandwidth data services. Autonomous vehicles include fully and partially autonomous vehicles. This includes the transmission of video, audio, LIDAR, RADAR, proximity, and/or other sensor data. For example, sensors in the vehicle can be configured to monitor the environment outside the vehicle and provide data back to the host system for processing. The data is processed by the host system and used to perform actions within the vehicle (for example, autonomous operations such as braking, steering, and acceleration).
Additionally or alternatively, the data may be routed to network devices and/or components inside and/or outside the vehicle.

Summary of the invention
A network switch is provided, which includes a data bus, a register, an endpoint controller, and a direct memory access controller. The endpoint controller is configured to receive a descriptor generated by a device driver of a host system, store the descriptor in the register, and transfer data between a root complex controller of the host system and the data bus. The descriptor identifies the address of a buffer in the memory of the host system. The direct memory access controller is configured to receive the address of the buffer from the endpoint controller or the register and, based on the address and an indication generated by the device driver, independently control the transfer of the data between the memory of the host system and a network device connected to the network switch. The direct memory access controller is a receive direct memory access controller or a transmit direct memory access controller.

Among other features, the endpoint controller is a Peripheral Component Interconnect Express (PCIe) device that transmits data according to the PCIe protocol. In other features, the indication is a flag, an interrupt, or a signal, and the flag is stored in the memory.

In other features, the network switch further comprises: a media access control device configured to transfer data to and from the direct memory access controller; and an Ethernet switch configured to transfer data between the media access control device and the network device connected to the network switch.
Among other features, the network device is a sensor, an actuator, a Peripheral Component Interconnect Express device, or an endpoint device.

In other features, the network switch also includes a media access control device, wherein, while independently controlling the transfer of data, the direct memory access controller is configured to transfer data between the data bus and the media access control device without interaction with the host controller of the host system.

In other features, the direct memory access controller is configured to obtain control of the buffer of the memory from the device driver before the transfer of data and, after transferring the data, generate an interrupt to return control of the buffer to the device driver.

In other features, the network switch further includes another controller configured to receive a rule stored in the memory and, based on the rule, check a frame received from the network device at the network switch, and discard the frame or forward it to the device driver, an application controller of the host system, or a denial of service controller of the host system.

In other features, a data transfer system is provided, the data transfer system comprising: the network switch; a memory; a host controller that implements a device driver; and a root complex controller configured to provide the host controller and the direct memory access controller with access to the memory.

In other features, the device driver is configured to transfer control of the buffer to the direct memory access controller, and the direct memory access controller is configured to return control of the buffer to the device driver. In other features, the root complex controller is configured to control the transfer of control information between the device driver and the memory.
Among other features, the root complex controller and the endpoint controller are Peripheral Component Interconnect Express devices that operate according to the PCIe protocol.

Among other features, the data transfer system further includes a denial of service controller configured to receive frames from the network switch, determine whether a frame may be associated with an attack, change the rules stored in the memory, and send the changed rules to the network switch to drop another frame or a connection with a network device.

In other features, a method of operating a network switch is provided, the method comprising: receiving, at an endpoint controller of the network switch, a descriptor generated by a device driver of a host system, wherein the descriptor identifies the address of a buffer in a memory of the host system; storing the descriptor in a register; transferring data between a root complex controller of the host system and a data bus of the network switch; receiving the address of the buffer from the endpoint controller or the register at a direct memory access controller; and, based on the address and an indication generated by the device driver, independently controlling the transfer of data between the memory of the host system and a network device connected to the network switch.

In other features, the method further includes transmitting data according to the PCIe protocol via the endpoint controller.
In other features, the method further includes: transferring data to or from the direct memory access controller via a media access control device; and transferring data between the media access control device and a network device connected to the network switch via an Ethernet switch.

In other features, the method further includes, while independently controlling the transfer of data, transferring data between the data bus and the media access control device via the direct memory access controller without interacting with the host controller of the host system.

In other features, the method further includes: before the transfer of the data, obtaining control of the buffer of the memory from the device driver at the direct memory access controller; and, after transferring the data, generating an interrupt to return control of the buffer to the device driver.

In other features, the method further includes: receiving the rules stored in the memory; and, based on the rules, checking a frame received from the network device at the network switch, and discarding the frame or forwarding it to the device driver, the application controller of the host system, or the host system's denial of service controller.

In other features, the method further includes: receiving a frame from the network switch; determining whether the frame may be associated with an attack; changing the rules stored in the memory; and sending the changed rules to the network switch to drop another frame or a connection with network equipment.

From the detailed description, claims, and drawings, further areas of applicability of the present disclosure will become apparent. The detailed description and specific examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure.

Description of the drawings
FIG. 1 is a functional block diagram of a data transfer system of a vehicle including a host system and one or more network switches according to the present disclosure.
FIG. 2 is a functional block diagram of the host system and a network switch of the network switches of FIG. 1.
FIG. 3 is a functional block diagram of the network switch of FIG. 2.
FIG. 4 illustrates a data transfer method according to the present disclosure.
FIG. 5 is a functional block diagram of an example partially or fully autonomous vehicle implementation of the network switch and host system according to the present disclosure.
FIG. 6 is a functional block diagram of a peripheral component interconnection of the host system of FIG. 2 and the network switch implemented in a vehicle according to the present disclosure, wherein the host system includes an application controller and a denial of service detection controller.
FIG. 7 illustrates an attack prevention method performed by a network switch according to the present disclosure.
FIG. 8 illustrates a denial of service method executed by the denial of service detection controller of the host system according to the present disclosure.

In the drawings, reference numerals may be used repeatedly to identify similar and/or identical elements.

Detailed description
The vehicle may include multiple sensors for monitoring the status of vehicle components and the internal and external environment of the vehicle. The host system of the vehicle may also include a plurality of controllers that receive data from the sensors and perform various operations in response to the received sensor data. In some applications, data is shared with nearby vehicles, remote stations, and/or network devices in the vehicle.
Some example controllers are engine controllers, transmission controllers, heating, ventilation, and air conditioning (HVAC) controllers, partially or fully autonomous vehicle controllers, infotainment controllers, lighting controllers, and so on.

The examples set forth herein include a data transfer system that includes a host system and one or more network switches for routing data between the host system and other network devices inside and/or outside the vehicle. In various embodiments, each of the network switches is configured as an endpoint device and includes an endpoint controller for communicating with the root complex controller of the host system. Therefore, each of the network switches is regarded by the host system as a single endpoint device (e.g., a Peripheral Component Interconnect Express (PCIe) endpoint). As such, each of the network switches appears as a single device, which can be controlled by the host system using, for example, the PCIe protocol and a PCIe link.

The network switch also includes a direct memory access (DMA) controller that controls the data transfer between registers in the network switch and buffers in the host memory of the host system. One or more device drivers of the host system follow an initialization process that includes pre-configuring the host memory and the network switch to allow the network switch to access the host memory. This includes pre-allocating buffers and descriptors in host memory. Some of the descriptors are pre-configured during the initialization process. Once the initialization process is complete, the one or more device drivers give the network switch access control of the host memory. The network switch can then, using the pre-allocated buffers and pre-configured descriptors, control data transfer to and from the host memory independently of the host controller.
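The buffer-ownership handoff described above can be modeled as follows: the device driver pre-allocates a buffer and descriptor, hands ownership to the switch's DMA controller, and receives ownership back via a completion interrupt. All class and field names here are illustrative assumptions, not the patent's actual structures.

```python
# Hypothetical model of driver/DMA buffer ownership; names are illustrative.

class Descriptor:
    def __init__(self, buffer_addr, size):
        self.buffer_addr = buffer_addr
        self.size = size
        self.owner = "driver"      # the driver owns the buffer initially

class DmaController:
    def __init__(self):
        self.interrupts = []       # completion interrupts raised so far

    def transfer(self, desc):
        # The DMA engine may only touch a buffer it currently owns.
        assert desc.owner == "dma"
        # ... data moves between host memory and the switch here ...
        desc.owner = "driver"      # return control on completion
        self.interrupts.append(("done", desc.buffer_addr))

def driver_kickoff(dma, desc):
    # The device driver hands ownership to the switch and triggers the DMA.
    desc.owner = "dma"
    dma.transfer(desc)

dma = DmaController()
desc = Descriptor(buffer_addr=0x1000, size=256)
driver_kickoff(dma, desc)
```

After `driver_kickoff` completes, the descriptor is owned by the driver again and a completion interrupt has been recorded, mirroring the interrupt-based return of control in the description.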
The endpoint controller allows the one or more device drivers to control the internal operation of the network switch, including the operation of the DMA controllers, the media access control (MAC) receivers and transmitters, and/or the ternary content addressable memory (TCAM) controller.

In one example, the root complex and the endpoint controller are PCIe devices that communicate over a PCIe link, which is a point-to-point connection. The PCIe link between the root complex controller and the endpoint controller includes two lanes running at PCIe Gen3 and can collectively transfer up to 5-10 gigabits per second (Gbps) of data from a single Ethernet port.

Network switches include intelligent features such as Internet Protocol (IP) routing and attack prevention. In one embodiment, each of the network switches includes a TCAM controller, and the TCAM controller implements IP routing and attack prevention. In another embodiment, one or more device drivers of the host system and the TCAM controller cooperate to provide attack prevention. The host system includes denial of service (DoS) firmware, and the TCAM controller includes IP routing firmware. The IP routing firmware determines the source and destination addresses of ports, queues, registers, host buffers, DMA engines, etc., for each frame, and routes the frames accordingly. The DoS firmware monitors incoming frames and, based on predetermined rules, determines whether to allow routing of a frame as instructed by the IP routing firmware, reroute the frame for further analysis, and/or discard the frame. The ports, queues, registers, and DMA engines are located in a specific one of the network switches. In one example, the IP routing and attack prevention firmware is dynamically configured and/or controlled by one or more device drivers and/or host controllers of the host system.

FIG. 1 shows a data transfer system 100 of a vehicle 102 for transferring data (e.g., sensor data) between a host controller and network devices (e.g., sensors and other network devices). The data transfer system 100 includes a host system 104 and one or more network switches (one network switch 106 is shown) that communicate with each other via a link 107, such as a PCIe link. The host system 104 includes one or more host controllers 108, a host memory 110, and a root complex controller 112. The host controllers 108 include a device driver 114 implemented at one of the host controllers 108. For example, a host controller 108 is implemented as a central processing unit and controls the operation of the vehicle 102 in response to sensor data and/or other received data. The device driver 114 configures the host memory 110 and the network switch 106 for data transfer between the network switch 106 and the host memory 110 independent of the host controllers 108. The host memory 110 may include solid-state memory and/or other memory for storing received data and/or data to be sent from the network switch 106 to downstream network devices, such as vehicle status data. The root complex controller 112 controls the transfer of data and control information (i) between the host controllers 108 and the host memory 110, (ii) between the host controllers 108 and the network switch 106, and (iii) between the host memory 110 and the network switch 106.

Each of the network switches includes an endpoint controller 120, a control bus 121, a data bus 122, a receive (RX) DMA controller 124, a transmit (TX) DMA controller 126, a MAC receiver 128, a MAC transmitter 130, and an Ethernet switch 132. The endpoint controller 120 controls the transfer of data and control information to and from the root complex controller 112 via the link 107 and to and from the DMA controllers 124 and 126 via the buses 121 and 122.
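The cooperation between the IP routing firmware and the DoS firmware described above can be sketched as follows: the DoS rules decide whether each incoming frame is routed as instructed, rerouted for further analysis, or discarded. The rule format, field names, and actions are illustrative assumptions, not the actual firmware interfaces.

```python
# Hypothetical model of DoS rule checking in front of IP routing.

def route(frame):
    # Stand-in for the TCAM IP-routing lookup (port/queue/DMA selection).
    return {"action": "forward", "port": frame["dst_port"]}

def apply_dos_rules(frame, rules):
    # First matching rule wins; no match means the frame is allowed.
    for rule in rules:
        if rule["match"](frame):
            return rule["action"]          # "drop" or "reroute"
    return "allow"

def handle_frame(frame, rules):
    verdict = apply_dos_rules(frame, rules)
    if verdict == "allow":
        return route(frame)
    if verdict == "reroute":
        return {"action": "reroute", "port": "analysis_queue"}
    return {"action": "drop"}

# Example rule (hypothetical): drop frames from a blocked source address.
rules = [{"match": lambda f: f["src"] == "10.0.0.66", "action": "drop"}]
ok_frame = {"src": "10.0.0.5", "dst_port": 3}
bad_frame = {"src": "10.0.0.66", "dst_port": 3}
```

Because the description says the rules are stored in host memory and changed dynamically, the `rules` list here stands in for that mutable, driver-managed rule set.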
The control bus 121 is connected to the registers 133. Control information is stored in the registers 133 and applied before data is transferred. The registers 133 are implemented in the Ethernet switch 132. The endpoint controller 120 allows the network switch 106 to operate as an endpoint device relative to the host system 104, for example by communicating on a PCIe link as a PCIe endpoint device, which supports full-duplex communication between the host system 104 and the network switch 106 and control of the entire network switch 106.

In an embodiment, the root complex controller 112, the link 107, and the endpoint controller 120 are implemented as PCIe components of a PCIe system operating according to the PCIe protocol. The root complex controller 112 is implemented as the PCIe root complex of a PCIe switch fabric that connects the host controllers 108 and the host memory 110 to the network switch. In an embodiment, the link 107 is implemented as a PCIe link. The endpoint controller 120 is implemented as a PCIe endpoint.

The control bus 121 is used to transfer control information, including descriptor information. When a descriptor is ready in the host system 104, the device driver 114 triggers the network switch 106 through the control bus. The network switch 106 then activates one of the DMA controllers 124, 126 to obtain the descriptor and the corresponding application data. Examples of descriptor information include source and destination addresses, source and destination identifiers (IDs), and frame size and type. The data bus 122 is used to transfer data to and from the host memory 110. The DMA controllers 124 and 126 control data transfer to and from the host memory 110 based on the descriptor information received from the device driver 114. The network switch 106 includes any suitable number of RX DMA controllers 124 and any suitable number of TX DMA controllers 126.
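A minimal sketch of the descriptor fields listed above (source/destination addresses and IDs, frame size and type) follows; the dataclass layout and the sanity check are illustrative assumptions, not the device's actual descriptor wire format.

```python
# Hypothetical descriptor layout; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class DmaDescriptor:
    src_addr: int     # source address
    dst_addr: int     # destination address (e.g., a host buffer address)
    src_id: int       # source identifier
    dst_id: int       # destination identifier
    frame_size: int   # size of the stored frame, in bytes
    frame_type: int   # type of the stored frame

def descriptor_ready(desc: DmaDescriptor) -> bool:
    """Trivial sanity check a DMA engine might perform before a transfer."""
    return desc.frame_size > 0 and desc.dst_addr != 0

example = DmaDescriptor(src_addr=0x2000, dst_addr=0x8000,
                        src_id=1, dst_id=2, frame_size=64, frame_type=0)
```

In the described system such a descriptor is what the DMA controller fetches over the control bus before moving the corresponding application data over the data bus.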
In the example shown, the network switch 106 includes ten receive DMA controllers 124 and ten transmit DMA controllers 126. By having multiple DMA controllers 124, 126 control data transfer, high bandwidth is achieved between the host system 104 and the network switch 106 via the link 107, which in the embodiment provides one or more PCIe links. Data received in the host memory 110 and provided by the network switch 106 via the receive DMA controllers 124 is processed by the host controllers 108. The transmit DMA controllers 126 may then be used to transmit the resulting processed data to the network switch 106. These data transfers include multi-layer transfer, processing, and exchange of data within the host system 104 (for example, between the application layer, the presentation layer, the session layer, the transport layer, and the network layer of the host system 104).

The MAC receiver 128 provides a control abstraction of the physical layer, so that the complexity of physical link control is invisible to the logical link control and upper layers of the corresponding network stack. The physical layer is at least partially implemented by the Ethernet switch 132. The MAC receiver 128 converts received frames into frames for delivery to the RX DMA controllers 124. In some applications, this conversion includes removing the sync word preamble, padding, and/or frame check sequence from the received frame. The MAC receiver 128 includes a filter 133 that distributes incoming frames to the receive DMA controllers 124. The MAC transmitter 130 converts frames into an appropriate format for transmission in the physical layer.
In some applications, this conversion includes adding a synchronization word preamble, padding, and a frame check sequence to identify transmission errors.

The Ethernet switch 132 controls data transfer between (i) the MAC receiver 128 and the MAC transmitter 130, and (ii) the sensors 140, the actuators 142, and the other network devices 144. Examples of the sensors 140 include one or more RADAR sensors, LIDAR sensors, proximity sensors, cameras, temperature sensors, pressure sensors, voltage sensors, current sensors, flow sensors, and the like. Examples of the actuators 142 include engines, motors, pumps, and valves. Examples of the other network devices 144 include transceivers, telematics controllers, infotainment controllers, global positioning system (GPS) controllers, navigation controllers, lighting controllers, brake controllers, steering controllers, acceleration controllers, and the like.

Compared with typical network interface cards (NICs) and traditional PCIe switches, the network switch 106 is constructed and operates differently; its architecture and functions provide flexibility and adaptability for the different applications implemented by the host controllers 108. A NIC provides an interface between the host system and the network via a single Ethernet port. For example, a NIC can serve as an interface between a PCIe link connected to the PCIe root complex and a local area network (LAN). Traditional PCIe switches are not PCIe endpoint devices, but are used to exchange frames between a PCIe link and multiple PCIe endpoint devices. The network switch 106 is capable of transferring and converting frames like a NIC, and further includes an integrated endpoint controller 120 that allows the host controllers 108 to treat the network switch 106 as an endpoint device. In some implementations, the network switch 106 is connected to one or more PCIe endpoint devices.
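The MAC framing conversion described above (preamble, padding, and frame check sequence added on transmit, stripped and verified on receive) can be sketched as follows. The sketch assumes standard Ethernet conventions (7-byte preamble plus start delimiter, 46-byte minimum payload, CRC-32 check value) and is deliberately simplified: with no MAC header carrying a length, the receive side does not strip padding.

```python
# Simplified illustration of MAC framing; not the device's actual datapath.
import zlib

PREAMBLE = b"\x55" * 7 + b"\xd5"   # 7 preamble bytes + start frame delimiter
MIN_PAYLOAD = 46                    # minimum Ethernet payload, in bytes

def mac_transmit(payload: bytes) -> bytes:
    padded = payload.ljust(MIN_PAYLOAD, b"\x00")     # pad short payloads
    fcs = zlib.crc32(padded).to_bytes(4, "little")   # frame check sequence
    return PREAMBLE + padded + fcs

def mac_receive(wire_frame: bytes) -> bytes:
    body = wire_frame[len(PREAMBLE):-4]              # strip preamble and FCS
    fcs = wire_frame[-4:]
    if zlib.crc32(body).to_bytes(4, "little") != fcs:
        raise ValueError("frame check sequence mismatch")
    return body
```

A corrupted bit anywhere in the padded body changes the CRC-32 value, which is how the frame check sequence identifies transmission errors.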
As a component integrated into the network switch 106, the endpoint controller 120 allows the device driver 114 to configure and have full access to the elements of the network switch 106, including the DMA controllers 124, 126, the MAC receiver 128, the MAC transmitter 130, and the Ethernet switch 132. The device driver 114 can configure the host system 104 and the network switch 106 to bind certain services of the DMA controllers 124, 126 to one of the host controllers 108. This allows the distribution of receive and transmit load across multiple host controllers. In an embodiment, the network switch 106 is configured to support transmission control protocol communication between network devices.

FIG. 2 shows, in more detail, the host system 104 and the network switch 106, which transfer data and control information over the link 107. The host system 104 includes the host controllers 108, the host memory 110, and the root complex controller 112. In one embodiment, only a single device driver is included. In another embodiment, two or more of the host controllers 108 include corresponding device drivers. Each of the device drivers (designated 114') may be similarly configured and/or operated.

The host memory 110 includes a receive buffer 200, a transmit buffer 202, receive descriptors 204, and transmit descriptors 206. The receive buffer 200 receives data from the network switch 106. The transmit buffer 202 stores data sent from the network switch to the actuators 142 and/or the network devices 144 of FIG. 1 via the Ethernet switch. The receive descriptors 204 store control information related to the data stored in the receive buffer 200. The transmit descriptors 206 store control information related to the data stored in the transmit buffer 202. The control information in the descriptors 204 and 206 includes source and destination addresses, source and destination IDs, and the type and size of the stored frames.
In an embodiment, each of the descriptors 204 and 206 identifies one of the host controllers 108, one of the DMA controllers, one of the registers 133 of the Ethernet switch 132, a port of the Ethernet switch 132, and the ID of one of the sensors 140, the actuators 142, and the network devices 144.

The network switch 106 includes the endpoint controller 120, a control bus 121, a data bus 122, the RX DMA controllers 124, the TX DMA controllers 126, the MAC receiver 128, the MAC transmitter 130, and the Ethernet switch 132. The control bus 121 is connected to the registers 133. Control information is stored in the registers 133 and applied before data is transferred. Received data may be stored in the Ethernet switch 132, such as in the TCAM 230, before being sent to the host memory 110 or to devices external to and connected to the Ethernet switch 132 (such as the sensors 140, the actuators 142, and the network devices 144). The descriptors 204, 206 may be associated with the registers 133 and/or other buffers/memory in the network switch 106. Register accesses are initiated by the device driver 114 and require relatively little bandwidth. The receive DMA controllers 124 fetch receive descriptors in a manner similar to fetching receive data from the receive buffer 200. The transmit DMA controllers 126 fetch transmit descriptors in a manner similar to fetching transmit data from the transmit buffer 202. A large amount of bandwidth is associated with these tasks, which are initiated by the network switch 106.

The root complex controller 112 provides the ability to map the registers 133 into the address space of the host memory 110. This enables the device driver 114' to initialize and maintain the MAC receiver 128, the MAC transmitter 130, and the DMA controllers 124, 126 via memory-mapped register accesses. The DMA controllers 124, 126 use the descriptors 204, 206 to interoperate with the device driver 114'.
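The control information that a descriptor carries (source and destination addresses, source and destination IDs, frame type, and frame size) might be laid out as in the sketch below. The field widths and ordering are assumptions chosen for illustration; the embodiment does not specify a binary format.

```python
# Hypothetical binary layout for a receive/transmit descriptor holding the
# control information listed above. "<QQHHHH" = two 64-bit addresses followed
# by four 16-bit fields, little-endian.
import struct

DESC_FMT = "<QQHHHH"  # src addr, dst addr, src ID, dst ID, frame type, size

def pack_descriptor(src_addr, dst_addr, src_id, dst_id, frame_type, size):
    """Serialize descriptor fields into the assumed fixed-size layout."""
    return struct.pack(DESC_FMT, src_addr, dst_addr, src_id, dst_id,
                       frame_type, size)

def unpack_descriptor(raw):
    """Recover the named fields from a packed descriptor."""
    keys = ("src_addr", "dst_addr", "src_id", "dst_id", "frame_type", "size")
    return dict(zip(keys, struct.unpack(DESC_FMT, raw)))
```

A fixed-size layout like this is what lets a DMA controller fetch descriptors from host memory the same way it fetches data, as described above.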
This includes sharing information stored as part of the descriptors 204, 206. The root complex controller 112 and the endpoint controller 120 provide management interfaces that dynamically provide access to the elements of the network switch 106 during runtime. When the root complex controller 112 and the endpoint controller 120 are implemented as PCIe devices, the network switch 106 appears to the host system as a PCIe Ethernet device. The device driver 114' can fully access the DMA controllers 124, 126, the MAC receiver 128, the MAC transmitter 130, the registers 133, and the TCAM controller 232 of the TCAM 230.

In an embodiment, the Ethernet switch 132 includes the registers 133, a TCAM 230, and a TCAM controller 232. The TCAM controller 232, based on the control information in the registers 133 and the TCAM rules (for example, the TCAM rules 604 of FIG. 6), controls frame transmission between (i) devices external to and connected to the Ethernet switch 132 and (ii) the MAC receiver 128 and the MAC transmitter 130. In an embodiment, the Ethernet switch 132 transmits and/or receives data at rates of up to 5-10 Gbps.

FIG. 3 shows the network switch 106, which includes the endpoint controller 120, the control bus 121, the data bus 122, the RX DMA controllers 124, the TX DMA controllers 126, the MAC receiver 128, the MAC transmitter 130, the Ethernet switch 132, and the registers 133. The Ethernet switch 132 includes the registers 133, the TCAM 230, the TCAM controller 232, ingress ports 300, egress ports 302, interface ports 304, and an ingress first-in-first-out (FIFO) buffer 310. The TCAM controller 232 controls frame transfer between (i) the ports 300 and 302 and (ii) the interface ports 304 based on the control information in the registers 133. In an embodiment, the TCAM controller 232 is directly connected to the registers 133 or accesses the registers via the control bus 121.
The TCAM controller 232 accesses the control information stored in the registers 133. The ports 304 are connected to devices external to and connected to the Ethernet switch 132. In the embodiment, the ports 300 and 302 are unidirectional ports, some of the ports 304 are unidirectional, and the other ports 304 are bidirectional. The unidirectional ports are used to transmit sensor data from the sensors to the host system 104. The bidirectional ports are used for bidirectional transmission of data and control information between, for example, the host system 104 (including the host controllers 108 and the host memory 110) and the network devices downstream of the Ethernet switch 132.

The buffer 310 is sized to hold the bytes to be compared in order to check whether received data complies with the TCAM rules (e.g., the TCAM rules 604 of FIG. 6). When a frame is received from a device outside the network switch, the frame is scanned and filtered. When a TCAM rule is assigned to the corresponding port of the Ethernet switch 132, the buffer 310 is enabled. A frame is checked as it is received, not after it has been received. By checking the frame as it is received, the processing time associated with checking the frame is minimized.

In one embodiment, the interface ports 304 are implemented as physical layer (PHY) circuits. In another embodiment, the interface ports 304 are connected to PHY circuits external to the Ethernet switch 132. The PHY circuits are connected to the devices 140, 142, and 144. In another embodiment, some of the interface ports 304 are implemented as serial interfaces and connected to corresponding sensors. Example PHY circuits are shown in FIG. 5. In an example embodiment, some of the interface ports 304 are serializer/deserializer (SERDES) interfaces and reduced gigabit media independent interfaces (RGMII).

FIG. 4 shows a data transfer method.
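The on-the-fly check described above, in which the ingress FIFO buffers only enough bytes to compare against the TCAM rules, can be sketched as follows. The value/mask rule format is the standard ternary-CAM idiom; the function names and pass/drop convention are assumptions for illustration.

```python
# Sketch of checking a frame as it is received: each rule is a (value, mask)
# pair, and only window_len bytes are buffered before a pass/drop decision
# is made, rather than waiting for the whole frame.
def tcam_match(rule_value: bytes, rule_mask: bytes, window: bytes) -> bool:
    """Ternary match: bit positions where the mask is 0 are 'don't care'."""
    return all((w & m) == (v & m)
               for v, m, w in zip(rule_value, rule_mask, window))

def check_on_arrival(stream, rules, window_len):
    """Decide as soon as window_len bytes have arrived in the FIFO."""
    fifo = bytearray()
    for byte in stream:                   # bytes arrive one at a time
        fifo.append(byte)
        if len(fifo) == window_len:
            if any(tcam_match(v, m, bytes(fifo)) for v, m in rules):
                return "drop"             # rule hit: frame filtered out
            return "pass"
    return "pass"                         # frame shorter than the window
```

Deciding at `window_len` bytes rather than at end-of-frame is what minimizes the per-frame processing time, as the description notes.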
Although the following operations are primarily described with respect to the implementations of FIGS. 1 to 3, the operations can be easily modified to apply to other implementations of the present disclosure. The operations may be performed iteratively. Although the following operations are primarily described in association with the use of a single device driver, the operations can be modified and implemented in association with multiple device drivers.

At 400, one of the host controllers 108 loads the device driver 114 from, for example, the host memory 110 and executes the device driver 114. In one embodiment, the device driver 114 is an Ethernet device driver.

At 402, the device driver 114 allocates the buffers 200, 202 to the DMA controllers 124, 126, and configures the receive descriptors 204 while leaving the transmit descriptors 206 empty. The receive descriptors are pre-classified, configured for each receive buffer, and assigned to one of the host controllers 108, which in the embodiment is notified of changes via an interrupt generated by the device driver 114. Each of the DMA controllers 124, 126 is assigned one or more of the buffers 200, 202. The receive buffers 200 are allocated to the receive DMA controllers 124 and the transmit buffers 202 are allocated to the transmit DMA controllers 126. In an embodiment, the buffers 200, 202 are shared by the DMA controllers 124, 126. Two or more of the receive buffers 200 are shared by two or more of the receive DMA controllers 124. Similarly, two or more of the transmit buffers 202 are shared by two or more of the transmit DMA controllers 126. The receive descriptors 204 are generated and configured as described above to include source and destination addresses, source and destination IDs, and/or other control information available at configuration time.
The source and destination addresses include the addresses of the receive buffers 200, the receive DMA controllers 124, the registers 133, the ports of the Ethernet switch 132, and/or the final destination devices outside the switch 106. The source and destination IDs include the identifiers of the receive buffers 200, the receive DMA controllers 124, the registers 133, the final destination devices outside the switch 106, the ports of the Ethernet switch 132, and/or intermediate devices such as the root complex controller 112 and the endpoint controller 120.

In the embodiment, although described below as being set after it is determined that data is to be transmitted, the transmit descriptors 206 are at least partially set in advance. The device driver 114 allocates one or more of the transmit descriptors 206 to one or more of the transmit DMA controllers 126. The device driver 114 generates the transmit descriptors 206 to include the addresses and IDs of the transmit buffers 202 and/or the transmit DMA controllers 126. At this time, the transmit descriptors 206 do not include the addresses and/or IDs of the final destination devices. If no control information is available, a transmit descriptor remains empty.

The following operations 406, 408, 410, 412 are performed when a frame is sent. Operations 420, 422, 424 are performed when a frame is received. The host system 104 and the network switch 106 may perform operations 420, 422, and 424 while performing operations 406, 408, 410, and 412.

At 404, the device driver 114 determines whether a frame is to be sent. As an example, one of the host controllers 108 generates an interrupt, sets a flag in memory, or otherwise signals the device driver 114 that a frame is to be sent. In an embodiment, the interrupt is generated by the network switch 106, and the network switch 106 signals the device driver 114 that a frame is to be transmitted.
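The allocation step of operation 402, in which receive buffers go to receive DMA controllers, transmit buffers go to transmit DMA controllers, and transmit descriptors are only partially pre-filled (no final destination yet), can be sketched as below. The round-robin assignment and dictionary structures are simplifying assumptions; the description only requires that buffers may be shared among controllers.

```python
# Sketch of operation 402: distribute buffers over DMA controllers and
# pre-fill transmit descriptors without a final destination.
def allocate(rx_buffers, tx_buffers, rx_dma, tx_dma):
    # Round-robin buffers over controllers; with more buffers than
    # controllers, two buffers end up sharing one controller.
    rx_map = {buf: rx_dma[i % len(rx_dma)] for i, buf in enumerate(rx_buffers)}
    tx_map = {buf: tx_dma[i % len(tx_dma)] for i, buf in enumerate(tx_buffers)}
    # Transmit descriptors: buffer/controller known, destination left empty
    # until it is determined that data is to be transmitted.
    tx_desc = [{"buffer": buf, "dma": ctrl, "dest": None}
               for buf, ctrl in tx_map.items()]
    return rx_map, tx_map, tx_desc
```

At operation 406 the `dest` fields would be filled in, completing the descriptors before control is handed to the transmit DMA controllers.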
This can happen similarly when the reception or transmission of a frame is completed. As another example, this may occur when the descriptors 204, 206 are repopulated (i.e., new and/or updated control information is stored as a descriptor). If the device driver 114 is controlling the operation of one or more of the host controllers 108, the device driver 114 knows when a frame is to be sent.

At 406, the device driver 114 configures the transmit descriptors 206. This includes generating the transmit descriptors 206 and storing them in the host memory 110 if not already done. The transmit descriptors 206 are assigned to a corresponding one of the host controllers 108, which in the embodiment is notified of changes via an interrupt generated by the device driver 114. The transmit descriptors 206 are configured to include the address and/or ID of the device to which the corresponding frame is to be sent.

At 408, the device driver 114 transfers control of the transmit buffers 202 and the transmit descriptors 206 to the transmit DMA controllers 126. This includes the device driver 114 performing at least one of the following: signaling the transmit DMA controllers 126 to indicate that control is transferred; setting a control flag, accessible to and monitored by the transmit DMA controllers 126, in the host memory 110 and/or the registers 133; or generating an interrupt detected by the transmit DMA controllers 126.

The following operations 410 and 412 are performed independently of the host controllers 108 and/or the software interaction implemented by the host controllers 108 in the host system 104, and therefore the central processing cycles of the host controllers 108 are not used to perform these operations. At 410, the transmit DMA controllers 126 control the transmission of the frame.
This includes signaling the endpoint controller 120 to instruct the root complex controller 112 to access the data stored in the transmit buffers 202 according to the control information in the corresponding one of the transmit descriptors 206. The transmit DMA controllers 126 access the data stored in the transmit buffers 202 and transmit the data to a device outside the network switch 106 via the Ethernet switch 132.

Some of the control information and status information is stored in the registers 133. The status information includes whether data is being received or sent, and whether a transfer is about to be executed, currently being executed, or completed. Control and status information is accessed by the DMA controllers 124, 126 via the control bus 121. The endpoint controller 120 operates as a pass-through device for data transfers to and from the DMA controllers 124 and 126. Received data is stored in the receive DMA controllers 124 before being transmitted to the host system 104, and transmitted data is stored in the transmit DMA controllers 126 before being transmitted via the external ports of the Ethernet switch 132. The DMA controllers 124, 126 include buffers and/or memories for temporarily storing data. The buffers and/or memories of the DMA controllers 124 and 126 can store much more data than the registers 133. This allows the DMA controllers 124, 126 to store the data being transferred and the corresponding descriptors. The transfer of data and descriptors is completed via the data bus 122.

At 412, the transmit DMA controllers 126 generate one or more interrupts indicating that control is transferred back to the device driver 114. As a result, control of the transmit buffers 202 and the transmit descriptors 206 is returned to the device driver 114.

At 420, the device driver 114 transfers control of the receive buffers 200 and the receive descriptors 204 to the receive DMA controllers 124.
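The ownership handoff in operations 408 through 412, where the driver passes control of the transmit buffer and descriptor to a DMA controller and receives it back via an interrupt, can be modeled as a tiny state machine. The class and attribute names below are illustrative assumptions, not driver code from the embodiment.

```python
# Minimal model of buffer/descriptor ownership: "driver" owns the channel
# until handoff (e.g., via a control flag); the DMA controller returns
# ownership and raises a completion interrupt when the frame has been sent.
class TxChannel:
    def __init__(self):
        self.owner = "driver"
        self.interrupts = []

    def driver_handoff(self):
        """Operation 408: transfer control to the transmit DMA controller."""
        assert self.owner == "driver"
        self.owner = "dma"            # e.g., a control flag set in a register

    def dma_transmit(self, frame):
        """Operations 410-412: DMA sends the frame, then returns control."""
        assert self.owner == "dma"
        # ... frame moves out through the Ethernet switch, no CPU cycles ...
        self.owner = "driver"         # control returned to the device driver
        self.interrupts.append("tx_done")
        return len(frame)
```

The point of the model is that while `owner == "dma"`, the host controller's processing cycles are not involved, matching the description of operations 410 and 412.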
This includes at least one of the following: signaling the receive DMA controllers 124 to indicate that control is transferred; setting a control flag, accessible to and monitored by the receive DMA controllers 124, in the host memory 110 and/or the registers 133; or generating an interrupt detected by the receive DMA controllers 124.

The following operations 422, 424 are performed independently of the host controllers 108 and/or the software interaction implemented by the host controllers 108 in the host system 104, and therefore the central processing cycles of the host controllers 108 are not used to perform these operations. At 422, the receive DMA controllers 124 control the transmission of the frame. This includes receiving and/or accessing data from the Ethernet switch 132 and storing the data in the receive buffers 200. The receive DMA controllers 124 signal the endpoint controller 120 to direct the data transfer to the receive buffers 200, and the endpoint controller 120 in turn instructs the root complex controller 112 to store the data in the receive buffers 200. These operations are performed according to the control information in the corresponding one of the receive descriptors 204.

At 424, the receive DMA controllers 124 generate one or more interrupts indicating that control is transferred back to the device driver 114. As a result, control of the receive buffers 200 and the receive descriptors 204 is returned to the device driver 114.

FIG. 5 shows an example implementation of a network switch 502 and a host system 504 in a partially or fully autonomous vehicle 500. The network switch 502 and the host system 504 are implemented similarly to the aforementioned network switches and host systems (including the examples shown in FIGS. 1 to 3). The network switch 502 includes a PCIe endpoint controller 506 and an Ethernet switch 508.
The host system 504 includes a PCIe root complex controller 510, a host controller 512, and media interfaces, such as camera serial interfaces (CSI) 514, 516, which are provided as examples. The Ethernet switch 508 is shown connected to PHY circuits 520, 522, 524. The PHY circuits 520, 522, and 524 are connected to one or more cameras 526, one or more RADAR sensors 528, and one or more LIDAR sensors 530, respectively. In an embodiment, the PHY circuits 520, 522, and 524 are implemented in the network switch 502 and/or the Ethernet switch 508. In the example shown, the host controller 512 is connected to one or more cameras 532 and one or more cameras 534 via the CSIs 514 and 516, respectively. The connections between the Ethernet switch 508 and the camera 526 and the sensors 528, 530 are Ethernet connections. Similarly, the connections between the host controller 512 and the cameras 532, 534 are Ethernet connections.

FIG. 6 shows a PCIe implementation of the host system 104 and the network switch 106 of FIG. 2 implemented in a vehicle 600, where the host system 104 includes the host controllers 108, the host memory 110, the PCIe root complex controller 112, an application controller 601, and a DoS detection controller 602. The host controllers 108 include the device driver 114. The application controller 601 executes one or more software applications and is connected to and/or communicates with the PCIe root complex controller 112 via a first channel. The application controller 601 and the PCIe root complex controller 112 have access to a first set of receive and transmit descriptors stored in the host memory 110. The DoS detection controller 602 is connected to and/or communicates with the PCIe root complex controller 112 via a second channel. In an embodiment, the first set of receive and transmit descriptors is configured by the application controller 601 and used by a DMA controller, such as one of the network switch, to transfer data (e.g., application data) as described above.
The DoS detection controller 602 and the PCIe root complex controller 112 have access to a second set of receive and transmit descriptors stored in the host memory 110. The second set of receive and transmit descriptors excludes the first set of receive and transmit descriptors. In an embodiment, the second set of receive and transmit descriptors is configured by the DoS detection controller 602 and used by, for example, a DMA controller of the network switch to transfer data (e.g., control information) as described above. In one embodiment, the first and second channels refer to the receive and transmit descriptors included in the first and second sets of receive and transmit descriptors, respectively.

The DoS detection controller 602 sets and adjusts the rules 604 and applies the configuration to the registers 133 to adjust the conditions under which frames and/or connections are dropped. Changes to the configuration are made via register accesses. The controllers 601 and 602 may be implemented in the same host controller as the device driver 114, or may be implemented in other host controllers and/or elsewhere in the host system 104. In one embodiment, the device driver 114 replaces the DoS detection controller 602 and performs the operations described herein with respect to the DoS detection controller 602. In another embodiment, the device driver 114 provides an interface to the DoS detection controller 602 for evaluating frames and a static Internet Protocol (IP) routing table stored in the TCAM 230, for dynamically configuring IP routing and DoS attack prevention features. By having the DoS detection controller 602 and/or the device driver 114 evaluate the frames and the static IP routing table stored in the TCAM as described above, the described data transfer system has enhanced robustness, because the evaluation is completed via a dedicated control bus (for example, the control bus 121 of FIG. 2).
The control bus 121 is minimally affected by data traffic received from network devices external to the network switch 106 at the ports of the network switch 106. Because the control bus 121 is used mainly for the transmission of control information between the endpoint controller 120 and the DMA controllers 124 and 126, and because the control bus 121 is isolated from the external ports of the Ethernet switch 132, data traffic between the Ethernet switch 132 and external network devices does not affect the transmission of control information on the control bus 121.

The host memory 110 includes the buffers 200 and 202 and the descriptors 204 and 206. The network switch 106 communicates with the host system 104 via the link 107 and includes the endpoint controller 120, the registers 133, and the TCAM 230. Some ports of the network switch 106 (for example, some ports of the Ethernet switch within the network switch 106) are connected to network devices outside the vehicle and are protected from attacks.

The TCAM 230 stores the rules 604, or a version thereof. Based on the rules 604, the TCAM 230 (i) maintains or drops connections with devices outside the network switch 106 and/or the vehicle 600, and (ii) controls the passing and dropping of frames. The rules 604 provide the conditions based on which frames and/or connections are dropped. The host memory 110 stores a version of the rules 604, which is accessed and modified by the DoS detection controller 602.

FIG. 7 shows an attack prevention method implemented mainly by the TCAM 230. At 700, the DoS detection controller 602 accesses the registers 133 according to an application (e.g., a client application) implemented by the application controller 601. At 702, the TCAM controller 232 receives a frame from a source device external to the network switch.
The frame is received from one or more source devices in a network outside the vehicle, and/or one or more frames are received from source devices in a network inside the vehicle.

At 704, as frames are received, the TCAM controller 232 checks one or more of the received frames. The TCAM controller 232 selects at least some of the received frames to check. The inspection is guided by the rules 604. At 705, the TCAM controller 232 determines the probability of an attack based on the result of the inspection. If the probability is greater than a predetermined level, operation 706 is performed; otherwise, operation 702 is performed. In one embodiment, the TCAM controller 232 calculates an IP checksum offload value. This is done to check the integrity of the frame and determine whether the frame has errors. The IP checksum offload value is used to determine whether the frame is damaged. For example, the header of the frame is modified to include the IP checksum offload value.

At 706, based on the result of the check, the TCAM controller 232 forwards one or more of the received frames to the application controller 601, discards one or more of the received frames, and/or forwards one or more of the received frames to the DoS detection controller 602. In one embodiment, operation 708 is performed after operation 706.

At 708, the TCAM controller 232 accesses the updated rules from the DoS detection controller 602, which are stored in the registers 133. The DoS detection controller 602 stores the rules in the registers 133. At 710, when the connection has been maintained, the TCAM controller 232 proceeds to operation 712; when the connection has been dropped, the TCAM controller 232 performs operation 714. In an embodiment, operations 700, 702, 704, 706, 708, 710, 712, 714, 716, 718, 720 are performed for each source device outside the network switch 106, whether in the network inside the vehicle or in the network outside the vehicle.
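The IP checksum used at operation 705 to test frame integrity is, in standard IP, the ones'-complement sum over 16-bit header words (RFC 1071): a header including its checksum field sums to a verification value of zero, and anything else marks the frame as damaged. A sketch:

```python
# Standard IP header checksum (RFC 1071): ones'-complement sum of 16-bit
# big-endian words, with carries folded back in. Computed over a header
# whose checksum field is zero, it yields the checksum; computed over a
# header that already contains a correct checksum, it yields zero.
def ip_checksum(header: bytes) -> int:
    if len(header) % 2:
        header += b"\x00"                  # pad odd-length input
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total >> 16:                     # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

A frame whose header fails this zero test would be treated as having errors, feeding the drop/forward decision at operation 706.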
The TCAM controller can perform multiple iterations of these operations in parallel. Therefore, and as an example, operation 712 may be performed for a first device while operation 714 is performed for a second device. The TCAM controller 232 creates a log and/or sets an alarm as described below for the first device, and at the same time counts the period of time that has elapsed since the connection to the second device was dropped and determines whether that connection needs to be re-established.

At 712, the TCAM controller 232 creates log entries and/or sets alarms. The log is stored in the TCAM 230 and maintains a record of the received frames, the sources of the frames, the addresses of the sources, and the times and dates at which the frames were received. Logs are kept for frames that are associated with a possible attack and/or for which the determined probability of attack is greater than a predetermined level. In an embodiment, an alarm is generated to indicate that a frame has been received and the probability level of the frame being associated with an attack. For example, the alarm includes a video signal indicated on a display of the vehicle, an audio alarm, an alarm signal sent to a mobile device in the vehicle, an alarm signal sent to a network device outside the vehicle (for example, a central monitoring station), and/or a warning signal sent to a diagnostic controller outside the vehicle.

At 714, the TCAM controller 232 starts a timer. In one embodiment, the timer is started when a frame that is associated with an attack, and/or has a high probability of being associated with an attack, is received or checked. In the illustrated embodiment, the timer is started when the connection associated with the frame is dropped. At 716, the TCAM controller 232 determines whether a predetermined period of time has passed since the timer was started.
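The timer-driven decision at operations 714 through 720 can be sketched as a pure function of elapsed time and the source's current standing. The inputs (a single attack-probability score and threshold) are simplifying assumptions; the description allows the decision to also weigh the rules and the source device's identification and location.

```python
# Sketch of operations 714-720: after a connection is dropped, wait out a
# predetermined hold time, then decide whether to re-establish it based on
# whether the source still appears to be an attacker.
def reconnect_decision(elapsed, hold_time, attack_probability, threshold):
    if elapsed < hold_time:
        return "wait"                 # operation 716: period not yet passed
    if attack_probability > threshold:
        return "stay_dropped"         # operation 718: still looks hostile
    return "reconnect"                # operation 720: re-establish connection
```

Because the TCAM controller iterates these operations in parallel per source device, one device may be in the "wait" state while another is being reconnected.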
If the predetermined time period has passed, operation 718 is performed. At 718, the TCAM controller 232 determines whether to re-establish the dropped connection. This is determined based on the rules, the probability of attack, the identification and/or location of the source device that sent the frame, and/or other information indicating whether the frame is associated with an attack (such as received information indicating that the source device is not an attacker). At 720, the TCAM controller 232 reconnects to the source device that sent the frame previously determined to be possibly associated with an attack.

FIG. 8 shows a denial of service method implemented by the DoS detection controller 602. At 800, at startup, the DoS detection controller 602 programs the TCAM controller 232 according to an application (e.g., a client application) implemented by the application controller 601. At 801, the DoS detection controller 602 receives a frame from the TCAM controller 232 via the PCIe root complex controller 112. At 802, the DoS detection controller 602 analyzes the received frame to determine whether the received frame is associated with an actual attack or has a high probability of being associated with an attack.

At 804, the DoS detection controller 602 changes the rules 604 based on the received frames to drop more frames and/or connections. In one embodiment, this occurs when the probability that a frame is associated with an attack is greater than a predetermined threshold. The determination of whether to discard a frame also or alternatively depends on the type of the received frame, the information in the header of the frame (for example, the IP checksum offload value), and/or the rules 604. At 806, the DoS detection controller 602 sends the updated rules 604 to the network switch 106 for storage in the TCAM 230 and use by the TCAM controller 232.

The above operations of FIGS. 5 and 7 to 8 are illustrative examples.
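The rule-tightening step at operations 802 through 806 can be sketched as follows: when a frame's attack probability exceeds the threshold, a new drop rule for its source is added to the set that is then pushed back to the TCAM. The rule shape (a source-keyed drop entry) is a hypothetical simplification; the actual rules 604 may key on other frame properties.

```python
# Sketch of operations 804-806: tighten the rule set when a frame's attack
# probability exceeds a predetermined threshold. Returns a new rule list;
# the caller would then send it to the switch for storage in the TCAM.
def update_rules(rules, frame_source, attack_probability, threshold=0.8):
    if attack_probability > threshold:
        new_rule = {"action": "drop", "source": frame_source}
        if new_rule not in rules:      # avoid duplicate entries
            return rules + [new_rule]
    return rules
```

Keeping the update pure (returning a new list rather than mutating in place) mirrors the versioning in the description, where the host memory and the TCAM each hold a version of the rules 604.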
Depending on the application, the operations may be performed sequentially, synchronously, simultaneously, continuously, during overlapping time periods, or in a different order. In addition, depending on the implementation and/or sequence of events, any of the above operations may not be performed or may be skipped.

The above examples include a network switch with an endpoint controller that eliminates the need for a network interface card between the root complex controller and the switch. The provided network switch consumes minimal power and provides the device driver of the host system with control of the entire core of the network switch, including the endpoint controller, the DMA controllers, the MAC receiver and transmitter, the TCAM, and the registers of the network switch. The examples include a host system with a DoS controller to detect attacks and indirectly control the operation of the TCAM located in the network switch to prevent attacks. The TCAM can filter out frames that are associated with attacks and/or include errors. Filtering can happen "on the fly," as a frame is received and inspected. This prevents network devices outside the vehicle from attacking the host system of the vehicle and taking control of the operation of the vehicle.

The spatial and functional relationships between elements (for example, between circuit elements) are described using various terms, including "connected," "joined," "coupled," and "adjacent." Unless explicitly described as "direct," when a relationship between first and second elements is described in the above disclosure, the relationship may be a direct relationship in which no other intermediate elements are present between the first and second elements, but it may also be an indirect relationship in which one or more intermediate elements are present (spatially or functionally) between the first and second elements.
As used herein, the phrase "at least one of A, B, and C" should be interpreted as expressing logic (A or B or C), using a non-exclusive logical OR, and should not be interpreted as expressing "at least one A, at least one B, and at least one C." In this application, including the following definitions, the term "controller" may be interchanged with the term "circuit." In some examples, the term "controller" refers to, belongs to, or includes: an application specific integrated circuit (ASIC); other suitable hardware components that provide the described functions; or a combination of some or all of the above, such as a system on chip.
A compact and efficient optical system featuring planar multi-layered LED light source arrays concentrating their polarized or unpolarized output within a limited angular range. The optical system manipulates light emitted by a planar light emitter such as electrically-interconnected LED chips. Each light emitting region in the array is surrounded by reflecting sidewalls whose output is processed by elevated prismatic films, polarization converting films, or both. The optical interaction between the light emitters, the reflecting sidewalls, and the elevated prismatic films creates overlapping virtual images between emitting regions that contribute to greater optical uniformity. Practical illumination applications of such uniform light source arrays include compact LCD or DMD video image projectors, as well as general lighting, automotive lighting, and LCD backlighting.
1. An overhead luminaire comprising: a panel (1096) including a two-dimensional array of light sources including: means for emitting light; and means for reflecting and collimating light propagating from the means for emitting light.
2. A luminaire according to claim 1, wherein the means for emitting light includes a light emitting diode, LED.
3. A luminaire according to claim 2, wherein the panel includes a monolithic two-dimensional array of LEDs.
4. A luminaire according to any one of claims 1 to 3, wherein the means for reflecting and collimating light includes an etendue preserving reflector.
5. A luminaire according to any one of claims 1 to 4, wherein the panel is dimmable, or wherein light propagating from the panel is evenly spread, or wherein the colour and temperature of light propagating from the panel is controllable.
6. A luminaire according to any one of claims 1 to 5, further comprising a light directing layer (1092; 1093) placed across an output of the panel to increase the output angle of illumination.
7. A luminaire according to claim 6, wherein the light directing layer is a plano-concave lens.
8. A luminaire according to claim 6, wherein the light directing layer is a negative Fresnel lens.
9. A luminaire according to claim 7 or 8, wherein the lens is cylindrical.
10. A luminaire according to claim 7 or 8, wherein the lens is spherical.
11. A luminaire according to claim 7 or 8, wherein the lens is aspherical.
12. A luminaire according to any one of claims 1 to 5, further comprising a diffuser placed across an output of the panel to increase the output angle of illumination.
13. A luminaire according to any one of claims 1 to 12, wherein the panel includes: a light source cube including three two-dimensional arrays of light sources emitting light of different colours into three sides of the cube, exiting at an output face.
14. An array of luminaires, each in accordance with any one of claims 1 to 13, arranged to provide contiguous lighting patterns.
15. An array in accordance with claim 14, including six luminaires arranged in two rows providing uniform distribution of light at a tabletop plane.
The present invention is concerned generally with a thin and compact multi-layered optical system and method for generating well-organized output illumination from a spatially discontinuous one- or two-dimensional array of discrete emitters, the output light emanating from one side (or opposing sides) of the multi-layered system, uniformly over the system's aperture. The field of illumination produced by the optical systems containing these emitting arrays is rendered visually featureless so as to provide useful rear-illumination for an image to be viewed directly, an illuminating beam for an image to be projected onto a screen, or the illumination itself may be composed of an array of seamlessly arranged and controlled image pixels, the sum of which at any instant forms a spatially modulated image to be viewed directly. The field of even illumination so produced may also be used as a means of general illumination. More particularly, the multi-layer optical system that achieves this favorable performance uses a sequence of at least two optical light directing layers positioned relative to the emitting array surface or surfaces, these layers located at a preferred elevation above the discontinuously emitting source array, the layer constructions designed to even out the light source array's brightness and color uniformity on the system's output aperture or output screen, and in doing so, form a uniform beam of light. An additional purpose of these precisely elevated optical layers is to establish a fixed angular range for the beam of emitted light. The system's first (and in some cases second) light manipulating layer is designed in such a way that it shifts and expands the spatial distribution of input light so as to minimize brightness variations presented to subsequent layers and output screens.
The related layer or layers, in configurations that need them, can be conventional light spreading materials such as holographic diffusers, lenticular diffusers, lens arrays, bulk or surface scattering diffusers, opal glass, or ground glass. The related layer or layers can also be a reflective polarizer that holds light of one polarization state within the light source structure until it converts to light of the orthogonal polarization. A base-diffusing layer, positioned just above the light source's emitting plane, is added in some applications to introduce additional randomization.

Currently available illumination systems capable of achieving equivalent brightness uniformity using only conventional diffusers do so either less efficiently (in terms of brightness), in a thicker package, or both.

Such improved illumination systems are of primary interest for the projection of images onto screens from such spatial light modulators as reflective and transmissive LCDs and DMDs. Such improved illumination systems are also of interest for the backlighting of LCD screens, where illumination uniformity must be of extremely high quality without sacrificing any amount of brightness or compactness. LCD applications require the highest possible brightness combined with the thinnest possible packaging. Improved illumination systems are also of interest for backlighting passive appliques used in a myriad of high brightness signage and display applications, including, for example, one and two sided EXIT signs.
Other applications for such improved illumination systems include theatrical lighting, automotive headlights, safety warning lights, and certain traffic signals and alerts. These improved illumination systems are also of interest for their intrinsic ability to display images directly, when the light source involved is made as a discontinuous array of individually-addressed light emitting regions or pixels whose boundaries are not contiguous, but when the multi-layer optical system achieves their seamless arrangement, so as to create an image characterized by both evenness of pixel illumination and maximization of pixel density.

It is, therefore, an object of the invention to provide an improved illumination system and method of use.

It is another object of the invention to provide a novel light source panel system and method for providing efficient and homogeneous rear illumination for such images as those represented by LCD screens.

It is a further object of the invention to provide a novel light source panel system and method for providing efficient and homogeneous rear illumination for the stencils and appliqués used in commercial signage, including potentially "exit signs" and various traffic control signs and announcements.

It is still another object of the invention to provide a novel light source panel system and method for providing an efficient and homogeneous beam of directional illumination to LCD and DMD spatial light modulators within compact video projection systems.

It is an additional object of the invention to provide a novel light source panel system and method for providing a uniformly consolidated light beam from a regular array of substantially square emitting regions such that each emitting region is converted into a virtual emitting square up to twice the width on each edge as the original emitter, and the emitting regions spaced from each other so that the resulting virtual emitter array appears to be filled with substantially contiguous virtual
images whose overall aperture appears to emit a directional beam of high uniformity.

It is still another object of the invention to provide a multi-layered packaging means for a novel light source panel structure containing a sparse two dimensional array of light emitting diode chips on a layer that provides for external electrical interconnections to the diodes, and that isolates one or more diode chips within separate diffusely reflecting compartments, the compartments themselves arranged in a two-dimensional array that is covered with a stack of optical layers, one of which is a mechanical spacer that allows light transmission from each compartment to reach two light directing layers that include linear arrays of prism-like grooves made in a clear plastic material, the grooves in each layer aligned at 90 degrees to one another.

It is also an object of the invention to provide a multi-layered packaging means for a novel light source panel structure containing a sparse two dimensional array of single-colored light emitting diode chips on a layer that provides for external electrical interconnections to the diodes, and that isolates each chip within a separate diffusely reflecting compartment, the compartments forming a two-dimensional array with diffusely reflecting spaces between the compartments being between 0.5 and 1.0 times the width of the compartment, the compartments covered with a stack of optical layers, one of which is a transparent spacer allowing light from each compartment to reach two light directing layers that include linear arrays of prism-like grooves made in a clear plastic material, the grooves aligned at 90 degrees to one another.

It is yet another object of the invention to provide a novel manufacturing method for multi-layer light source panel structures wherein a very large area single lamination of thin multi-layer sheets, including a regular two-dimensional array of bonded light emitting diodes separated from and laminated to a series of light
directing layers by an exact spacer thickness, so that the large area lamination can be subsequently sectioned into individual light source panel devices, each containing a constituent array of light emitting diodes and the common multi-layer optical and mechanical structure, where the size and shape of the yielded light source panels is predetermined by the electrical interconnection design.

It is still a further object of the invention to provide a novel means for integrating three separate primary colored light source panels, one each of red, green and blue, into three-panel reflective LCD video projection systems, one LCD for each primary color, each light source panel within a reflective non-imaging angle transforming system comprising an LCD, a polarizing beam-splitter, a wide band quarter wave phase retardation film, a concave metallic reflective surface, and a negative field lens.

It is yet another object of the invention to provide an improved system and a method for diffusing the inhomogeneous light emitted by a two-sided discontinuously emitting array, such that the dimmer regions in between the more strongly emitting regions of the array are strengthened in light intensity in part by the refracting action of the pre-diffuser, whose unique elevation above the emitting array is specifically chosen for optimum output uniformity.

It is a further object of the invention to provide an improved system and a method for homogenizing the uneven light distribution of a double-sided discontinuously emitting source, using a sheet consisting of linear micro prisms (or prism-like elements) formed in an array and positioned a fixed elevation above the emitting source.

It is yet another object of the invention to provide an improved system and a method for homogenizing the uneven light distribution presented by a discontinuous two-dimensional array of light
emitting diodes or regions containing light emitting diodes, each diode (or diode containing region) having length and width W, and equal separation from adjacent regions, W (or less than W), by using two parallel but orthogonal sheets of linear micro prisms, the exact elevation of these sheets from the emitting plane set approximately at a height generally between W and 0.5W, so as to produce maximum evenness of output brightness within the output beam so created.

It is still another object of the invention to provide an improved system and a method for homogenizing the uneven light distribution presented by a two-dimensional array of light emitting diodes, each diode contained in a separate emitting cavity whose output aperture is separated from two parallel but orthogonal sheets of linear micro prisms, the separation created by a spacer layer composed of an array of reflecting cavities of specified sidewall slope.

It is still another object of the invention to provide an improved system and method for homogenizing the uneven light distribution presented by a two-dimensional array of light emitting diodes, each diode contained in a discrete commercial package, each package separated from each other by a space equal to or less than the width of the package, and whose output apertures are covered with a diffusing material, the array separated from two parallel but orthogonal sheets of linear micro prisms by a transparent spacer layer of thickness falling generally between 0.5 and 1.0 times the width of the packages in the array.

The following numbered clauses provide alternative optional embodiments and/or aspects of the invention.

Clause 1.
An image display system, comprising:

a means for generating image information including a spatial light modulator; and

a light source system for directly illuminating said spatial light modulator consisting of:

a one-dimensional array of substantially parallel emitting channels located behind said spatial light modulator, said array having an output area and shape arranged to match or exceed the aperture size of said spatial light modulator, each channel of said array having an emitting width W defined by the projected length of emitting material enclosed by said channel, measured perpendicular to the axis of said channel and as viewed from above said array, adjacent transparent regions of equal intra-channel separation S, S being substantially less than W and equal to the shortest distance between said emitting material of any one channel and said emitting material of any adjacent channel;

a first light directing layer including a parallel array of transparent dielectric micro prisms or aspheric semi-cylinders on a transparent substrate, said substrate arranged parallel to the output plane of said emitting channels and between the output plane of said emitting channels and the input side of said spatial light modulator, separated from said emitting channel output plane by an optical distance T, T being less than W;

a second light directing layer disposed between said first light directing layer and said spatial light modulator, arranged parallel to and above said first light directing layer by an air-gap of thickness D, D being substantially less than W+S, said layer including one or more of a holographic diffuser sheet, a bulk diffuser sheet, a surface diffuser sheet, a lenticular lens diffuser sheet and a reflective polarizer film;

a light reflecting layer located behind the rear side of said emitting channels a distance G from the rear side emitting plane, G being substantially less than S, said layer including one or more of a white diffuse reflecting material, a specular
reflecting material, a prismatic reflecting material, a structured reflecting material, a bulk diffuser, a holographic diffuser, and a flat substrate.

2. The image display system as defined in clause 1 wherein said spatial light modulator is a liquid crystal display (LCD).

3. The image display system as defined in clause 1 wherein said spatial light modulator is a passive alphanumeric appliqué such as found in conventional EXIT signs.

4. The image display system as defined in clause 1 wherein said spatial light modulator is a photographic transparency.

5. The image display system as defined in clause 1 wherein said first diffusing layer includes a plastic sheet of 90-degree micro prisms, as manufactured by Minnesota Mining and Manufacturing Company under the trademark BEF.

6. The image display system as defined in clause 1 wherein said first diffusing layer includes a sheet of short-focal length plastic cylinder lenses.

7. The image display system as defined in clause 1 wherein said first diffusing layer includes a sheet of aspheric plastic cylinder lenses whose cross-sectional shape can be inscribed within a prism whose apex angle is approximately 90 degrees full angle.

8. The image display system as defined in clause 1 wherein said second diffusing layer consists of a single holographic diffuser sheet whose output angle specification, if not symmetric, is made widest in the plane perpendicular to the axes of said emitting channels.

9. The image display system as defined in clause 1 wherein said second diffusing layer consists of two holographic diffuser sheets, either touching each other, or separated from each other by an air-gap thickness in the range of 1 to 3 mm.

10.
The image display system as defined in clause 1 wherein said second diffusing layer includes a reflective polarizer film whose polarization transmission axis has been aligned parallel with the direction of any input polarizer's transmission axis that may be attached to or otherwise part of said spatial light modulator.

11. The image display system as defined in clause 1 wherein said emitting channels are hollow, thin-walled, made of glass, approximately rectangular in cross-section, and whose inside walls have been coated with a fluorescent phosphor material.

12. The image display system as defined in clause 11 wherein said emitting channels are attached to each other in a continuous serpentine manner by means of short interconnecting channel sections perpendicular to the axis of the interconnected parallel sections.

13. The image display system as defined in clause 11 wherein said emitting channels have substantially equal emitting widths falling between 10 and 15 mm and substantially equal transparent separation regions falling within the range of 1 to 5 mm in width.

Clause 14.
An illuminating system comprising:

a two-dimensional array on substantially equal horizontal and vertical center-to-center spacing consisting of one or more electrically interconnected light emitting diode chips of lateral dimensions L mm by W mm whose transparent substrate medium is of refractive index n0;

a reflecting plane layer disposed behind said two-dimensional array of said electrically interconnected light emitting diode chips, providing means of support and electrical interconnection;

a first transparent spacer layer of finite thickness, comprising regions of a transparent dielectric medium of refractive index n1, said dielectric medium encapsulating said light emitting diode chips;

a second transparent spacer layer of finite thickness, comprising a dielectric medium of refractive index n2, the top surface of which contains a regular array of v-shaped smooth-sided grooves having equal groove angle and equal groove depth;

a third transparent spacer layer of finite thickness disposed just above said second transparent spacer layer, comprising a medium of refractive index n3, said refractive index n3 being less than said refractive index n2;

a fourth transparent spacer layer of finite thickness disposed above said third transparent spacer layer, comprising a medium of refractive index n4 being greater than said refractive index n3, the bottom surface of which is a smooth plane, the top surface of which contains a regular array of v-shaped smooth-sided grooves having equal groove angle and equal groove depth, and whose groove axes run substantially at a 90 degree angle to the groove axes of said second transparent layer;

a fifth transparent spacer layer of finite thickness disposed above said fourth spacer layer, comprising a medium of refractive index n5, said refractive index n5 being less than said refractive index n4;

a sixth transparent spacer layer of finite thickness disposed above said fifth spacer layer, comprising a medium of refractive index n6, said
refractive index n6 being greater than said refractive index n5, and said groove angles being those measured between adjacent groove faces, said groove depth being the shortest distance measured from the bottom of said groove to the top of said groove; and

said sixth transparent spacer layer including at least one of a light scattering diffuser and a polarizer for absorbing or reflecting light of a first polarization state and transmitting light of a second polarization state orthogonal to said first polarization state.

15. The illuminating system as defined in clause 14 wherein said dimensions L mm and W mm of said electrically interconnected light emitting diode chips are approximately equal and lie between 0.2 mm and 2.0 mm on a side.

16. The illuminating system as defined in clause 15 wherein said center-to-center spacings are within a range no less than about 1.5L and no greater than about 2L.

17. The illuminating system as defined in clause 15 wherein said two-dimensional array consists of said electrically interconnected light emitting diode chips all emitting substantially the same color light.

18. The illuminating system as defined in clause 15 wherein said two-dimensional array consists of clusters of three or more said light emitting diode chips, one emitting substantially red light, one emitting substantially green light and one emitting substantially blue light, the centers of said clusters separated from one another by said center-to-center spacings being no less than 1.5 times the minimum cluster size defined by the square area taken up by said electrically interconnected light emitting diodes located within said cluster, said electrically connected light emitting diodes within said cluster being separated from each other in all directions within said cluster by a space no smaller than about 1.5L.
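Clause 14's index ordering, n3 < n2, n4 > n3, n5 < n4, and n6 > n5, alternates high-index grooved layers with low-index spacer layers so that refraction occurs at each grooved interface. A minimal sketch of a checker for this ordering (the example index values, e.g. acrylic at about 1.49 against air gaps at 1.0, are illustrative assumptions, not from the clause):

```python
# Checker for the alternating refractive-index ordering of clause 14:
# each grooved (prism) layer must be optically denser than the low-index
# layer directly above it (n3 < n2, n4 > n3, n5 < n4, n6 > n5).
# Example indices are illustrative assumptions (acrylic ~1.49, air 1.0).

def valid_stack(indices):
    """indices = [n1, n2, n3, n4, n5, n6]; True if the clause 14 ordering holds."""
    n1, n2, n3, n4, n5, n6 = indices
    return n3 < n2 and n4 > n3 and n5 < n4 and n6 > n5

print(valid_stack([1.5, 1.49, 1.0, 1.49, 1.0, 1.49]))  # True: prism/air alternation
print(valid_stack([1.5, 1.0, 1.49, 1.0, 1.49, 1.0]))   # False: ordering inverted
```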
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

One form of the present invention involves the use of a stack of light directing layers disposed above a plane of separated emitters arranged either as an array of parallel stripes or as a two-dimensional array of bounded emitting regions, such that a directed output beam of even uniformity is created as if from a continuous emitter of area equal to that of the output aperture. One (or two) of the light directing layers are prism sheets whose geometry and elevation above the plane of emitters is chosen uniquely so as to create the required overlap and diffusion of emitter images.

1.0 One-Dimensional Emitting Array

An optical system constructed in accordance with one principal form of the invention is indicated generally in FIG. 1 and represents a side elevation. The optical system 10 embodies a structure and method, which uses various optical elements, disposed in a compact geometry relative to an intrinsically discontinuous light source 1 formed by an array of emitters, and manipulates this light to generate uniform output illumination over a wide range of illumination directions on an output screen 28 placed the minimum possible distance from the plane of light source 1. Light from this output screen then provides the required even field of featureless back illumination, either continuously white in color, or pulsed rapidly and sequentially in periods of red, green and blue emission, for a directly viewed image display device 3 placed against it, device 3 which may be a spatial light modulator (SLM) image display such as a conventional liquid crystal display ("LCD") imaging device or other active image display devices that do not generate light of their own but rather modulate the brightness of the tiny component parts of the image known as pixels.
The image display device 3 may also be any one of a variety of passive or static image sources such as photographic transparencies, wherein for example the system 10 can be used as an improved illuminator for medical x-ray films.

The behavior of the system of FIG. 1 and that of each of its elements is described in greater detail below. In summary, the height of prism sheet 7 is used to form overlapping virtual images 26, 27 of the output plane 34 of light source 1. Light from the overlapping emitter images is then used to fill in the non-emitting spaces 25 between emitters as evenly as can be arranged, and thereby reduce the difference between the maximum and minimum brightness that would otherwise be observed. Subsequent conventional light scattering layers 28 and 30 are elevated above the vertex points 12 of the image-displacing prism array 7 by distances G2 and G2+G3 respectively, to add further spatial mixing to the result and also to widen the range of output angles that exit the effective output screen 28. As will be shown below, the exact height of the prism sheet 7 above the output plane 34 of light source 1, whether this is the discrete emitting channels themselves, or a diffusively scattering layer above the emitters, depends on the geometry of the prism units and on the degree of image displacement that is desired. The prism's apex angle and size can each be varied so that the distance 18, G1, is the smallest possible, which in some cases might be zero.

Back-reflector 46 in FIG. 1 is composed of a metal or plastic support substrate 48, a reflecting layer 50 which may be diffusely reflecting, and a gap 52, whose medium (air or dielectric) and thickness are adjusted to provide a balanced transfer of back-reflected light back through the channels, and through the non-emitting gaps 25. When the support substrate 50 is an electrically conductive one, it becomes a capacitive part of the electrical equivalent circuit of light source 1.
When the support substrate 50 is a thermally conductive one, it equalizes the distribution of heat throughout the lamp in a manner that has become traditional in many light source systems when spatial light modulator 3 is an LCD screen. The conductive plane provides a means for preventing LCD image contrast changes caused by its exposure to local heating. When the support substrate 50 is an electrical ground plane, the purpose of separation distance 52 is also to prevent (or minimize) electrical power losses to ground from leakage of the current flowing in light source 1 through the plane's distributed capacitance. This plane can also be used to isolate electronics used to control the spatial light modulator 3 from the electrical drive fields of light source 1. Generally, the performance of this conducting plane 50 is maximized when it is made of the most electrically conductive metals such as stainless steel and aluminum. The diffusely reflecting layer can be any material loaded with highly reflective white light scattering particulates, such as those plastic sheets called REFWHITE manufactured by Kimoto Co., LTD. of Japan.

The prism sheet 7 may be one of the 90-degree offerings manufactured by the Minnesota Mining & Manufacturing Company (3M) under the trade name BEF, as in brightness enhancement films. The prism sheet 7 may also be a more specifically suited custom designed prism array molded, embossed or cast in an optically transparent material such as acrylic, polyester or polycarbonate, wherein the apex angle of the prism is not 90 degrees, but rather an angle matched to the exact performance required, and perhaps the prism shape modified, as described below, to fine tune the particular image displacement characteristics needed. Since direct view of the prisms themselves is obscured by the diffusive layers 20, the width of the prisms, any cosmetic defects on their surfaces, and any dead space 16 between the individual elements cannot be seen.
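The redirection behavior of a 90-degree prism sheet such as BEF can be illustrated with a one-face Snell's-law sketch (a simplification assuming a single refraction at the inclined exit face and a typical sheet index of about 1.49, both illustrative assumptions): an on-axis ray meets the 45-degree face beyond the critical angle and is recycled by total internal reflection, while sufficiently off-axis rays are transmitted.

```python
import math

# One-face Snell's-law sketch for a 90-degree prism sheet (45-degree
# half-angle). A ray travelling straight up inside the sheet material
# meets the inclined exit face at 45 degrees of incidence; for a typical
# index of ~1.49 (illustrative) this exceeds the critical angle, so the
# on-axis ray is totally internally reflected and recycled, while
# off-axis rays refract out into air.

def exit_angle_deg(n, incidence_deg):
    """Refracted angle in air, or None on total internal reflection."""
    s = n * math.sin(math.radians(incidence_deg))
    if s > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

print(exit_angle_deg(1.49, 45.0))  # None: on-axis light is recycled
print(exit_angle_deg(1.49, 30.0))  # ~48.2 degrees: off-axis light exits
```

This recycling of on-axis light is the mechanism behind the "brightness enhancement" role the text attributes to the sheet.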
Widespread commercial applications of 3M's BEF products in backlit LCD screen systems place the BEF sheets just behind the LCD screen, where any discontinuities or defects in their optical performance are seen directly, even through weak diffuser sheets. Consequently, those principally brightness enhancing features of the 3M prism sheet materials require extreme levels of cosmetic perfection in both manufacturing and handling.

For practical applications, the total system 10 thickness 56, T, in FIG. 1, G3+G2+G1+L, is made the smallest possible commensurate with suppressing visibility of the discontinuous nature of light source 1. A practical example will be given further below for a new flat, parallel-emitting-channel fluorescent lamp developed by Corning, Inc.

Another embodiment of the invention of FIG. 1 is given in FIG. 2, which also represents a side elevation. In this case, the single light source 1, which as above emits light from its entire internal surface, in both forward and rearward directions, does so surrounded by a completely symmetric image display system 10 featuring both forward and rearward spatial light modulators 3 and 4 positioned on opposing sides of light source 1, each with its own interior and exterior intervening diffusing layers 20. The result is a particularly thin two-sided display device whose bright and uniformly illuminated images can be seen from either side. In this case, half the total lumens produced by the light source are routed through each side's set of intervening multi-layers 7 and 11. The configuration of FIG. 2 is exactly that of FIG. 1 with its structure disposed symmetrically on each side of the system's mirror plane 6.
Virtually identical emitting patterns 24 are produced on the outermost light scattering surfaces 34 of light source 1, and it is these light patterns that are displaced as virtual images 26 and 27 by the prism sheets 7, governed by their apex angles 8 and their relative heights 18 above the object planes 34. In this two-sided structure, any light reflected back towards light source 1 by the upper prism sheet 7 is either re-scattered by the upper side of light source 1 or transmits through light source 1 and becomes a part of the light emitted by the lower side of light source 1. Practical applications of this double-sided invention format include two-sided televisions, two-sided desktop computer monitors, two-sided commercial signs such as "EXIT" signs, and two-sided passive signs displaying a different message depending on the side viewed.

2.0 Two-Dimensional Emitting Arrays

A two-dimensional emitting array is formed by arranging rows and columns of discrete square (or rectangular) emitting apertures, as opposed to the rows of one-dimensional emitting stripes involved above. In this case, the discrete emitting regions are separated from each other by non-emitting regions, and the need is for a means to provide light evenly distributed over the entire array aperture. Such means is provided in the present invention by a bi-layered extension of the elevated single prism sheet method of FIGS. 1 and 2, as well as by arrays of discretely tapered micro reflectors. Both two-dimensional approaches couple light collected from the discrete emitting elements in the array to a collective and spatially unified output beam that appears to have come from a single output aperture encompassing the entire array.

2.1 Elevated Prism Sheets

The precise elevation of two orthogonal prism sheet layers 58 and 60 is applied to create a two-dimensional array of virtual images, four virtual emitter images associated with every emitting object in the underlying emitting array.
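The dependence of this effect on prism sheet elevation can be sketched numerically; the design guidance stated earlier in this text places the sheets at a height generally between 0.5W and W above the emitting plane for emitters of width W. The helper below is a hypothetical illustration of that rule only, and its 0.75 default fraction is an assumed value, not one from the specification:

```python
# Hypothetical helper encoding the elevation guidance stated in the text:
# for emitters of width W, the crossed prism sheets sit at a height H with
# 0.5*W <= H <= W above the emitting plane. The 0.75 default fraction is
# an illustrative assumption, not a value from the specification.

def prism_sheet_elevation(w_mm, fraction=0.75):
    """Return an elevation H = fraction * W, with fraction in [0.5, 1.0]."""
    if not 0.5 <= fraction <= 1.0:
        raise ValueError("fraction must lie between 0.5 and 1.0")
    return fraction * w_mm

print(prism_sheet_elevation(1.0))       # 0.75 mm for a 1 mm emitter
print(prism_sheet_elevation(2.0, 0.5))  # 1.0 mm lower bound for a 2 mm emitter
```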
By taking this bi-layered, rather than mono-layered, prism sheet approach, a completely contiguous output array of emitter images can be achieved without any appreciable non-emitting regions between them. As one example, square emitters, W millimeters on a side, separated from each other by non-emitting gaps W millimeters wide, are so converted into a contiguous array of W millimeter square emitter images, each conveying about a quarter of the original light flux emitted (reduced by the transfer losses of the light through the two prism layers 58 and 60). Moreover, this organized output light is constrained to a concentrated range of output angles characteristic of the prism geometries used within each prism sheet, and relatively independent of the considerably wider angular range of input light emitted. In this manner, an emitting array whose original emitting area is only 25% of the overall array aperture converts to an output array whose emitting area becomes 100% of the overall array aperture, and whose emission is contained within a reduced range of emission angles. The practical advantages of beams with such uniformity and directionality will be illustrated in a set of examples to follow below.

This bi-layer prism sheet approach is implemented in one of two illustrative two-dimensional configurations related to the one-dimensional method of FIG. 1. A first approach, shown schematically in FIG. 3, relays the contiguous virtual images created by the two elevated prism sheets to the output plane by means of an array of micro lenses. A second approach, generally preferable to the first with regard to compactness, is illustrated schematically in FIG. 7, and uses the two prism sheets alone, with light projecting outwards to the output plane from the underlying virtual images themselves. The spatial relationship between the virtual images created by any given emitting aperture is illustrated graphically in FIG. 4.
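The 25%-to-100% fill-factor arithmetic above follows directly from the geometry. A minimal sketch, assuming ideal lossless image formation and an illustrative emitter width of 1 mm:

```python
# Fill-factor arithmetic for the example above: W mm square emitters on a
# 2W pitch (emitter width plus a W-wide gap) occupy 25% of the aperture;
# the two crossed prism sheets split each emitter into four contiguous
# W x W virtual images that tile 100% of the aperture, ignoring prism
# transmission losses. W = 1 mm is an illustrative value.

W = 1.0                           # emitter edge length, mm
pitch = 2.0 * W                   # center-to-center spacing: emitter + gap
fill_before = W**2 / pitch**2     # one emitter per 2W x 2W unit cell
fill_after = 4 * W**2 / pitch**2  # four virtual images per unit cell

print(fill_before)  # 0.25
print(fill_after)   # 1.0
```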
One example of a compartmentalized spacer layer between the emitting array and the prism sheets, showing one isolated compartment per emitter, is conveyed in FIG. 5. Then, the form of a tool enabling fabrication of this spacer layer is given in FIG. 6. 2.1.1 Tri-Layered Prism Sheet Illuminators A cross-sectional view is given in FIG. 3 of one two-dimensional multi-layered emitting array consisting of two prism sheet layers and one micro lens layer. In this example, output screen 28 is arranged to display a contiguous or slightly overlapping array of well-defined and controllable image (or illumination) elements (sometimes referred to equivalently as pixels) that result from the system's manipulation of light from, in this case, its two-dimensional array of individually controlled emitting entities 72, whose square (or rectangular) emitting apertures 24 form the base layer of light source 1. Two prism sheets, 58 and 60, are used in tandem, both sets of prism vertices pointing towards the viewer, with the planar axes of the two sheets (x axis 116 for sheet 58 and y axis 118 for sheet 60) made perpendicular to each other and the relative spacings G1' and S1, 19 and 34, adjusted so that, as shown in the cross-sectional perspective of FIG. 3, two orthogonal sets of shifted virtual images 106 are created, thereby converting every emitting area 110 into a cluster 106 of four essentially contiguous virtual images 26, 27, 108, and 109 of the original emitting region 110 (shown more clearly in the perspective drawing of FIG. 4). In this case, the lower prism sheet 58 creates a pair of shifted virtual images 26 and 27 in the plane of the cross-section in FIG. 3, similar to those described in FIG. 1, while the upper prism sheet 60 splits this pair into a pair of virtual image doublets shifted into and out from the plane of the cross-section, one doublet composed of images 26 and 27, the other composed of images 108 and 109, as in FIG. 4. FIG.
4 provides a three-dimensional view of these spatial relations, with emitting region 110 shown as a white square in relation to its four shifted virtual images 26, 27, 108 and 109, and the surrounding eight emitting regions 112, each of these shown shaded. The spatial boundary of the resulting virtual pixel is highlighted with a black frame 114 for additional clarity. Each of the four virtual images of emitting region 110 is shifted a distance W', 120, from center 122 in each of the two orthogonal directions x, 116 and y, 118. The plane of the cluster of virtual images 106 resides at a height, 124, G1'-V above the emitting plane 122, where V is the depth of this plane beneath the plane of the lower prisms 58, which will be established more exactly later on. A viewer looking at the output side of the prism sheets 58 and 60 in FIG. 3 sees the virtual image plane as a contiguous array of discrete regions, each consisting of the 4-image cluster 106 ( FIG. 4 ) of each underlying emitting region 110. While this alone might be suitable for some direct viewing applications, there are some limitations to take into consideration. One limitation is that output light from the virtual image plane is confined to a narrow cone of viewing angles (+/- 22.5 degrees to half peak power) that is an intrinsic feature of transmission through two orthogonal prism sheets. The second limitation is that demarcation lines within each 4-image cluster, and the demarcation lines between the individual 4-element pixels themselves, might give rise to a visible pixel structure distracting to a direct viewer. Practical applications of the two-dimensional illumination systems of FIGS. 3 and 7, however, relate both to those involving direct view of the illuminator and to those wherein the system's output light beam 100 is used to provide illumination to external elements that are themselves viewed.
In some applications, it is preferable that light from the illuminator's aperture be smooth in its spatial uniformity, with the emission confined to a narrow range of angles. In other applications, not only must the spatial uniformity be smooth, but the illumination must also be made visible over a wide range of viewing directions. In direct view applications, one solution to the demarcation lines and the viewing angle restrictions is provided within the multi-layer structure of FIG. 3 by the array of micro lenses 62 that are used to relay a real image of the virtual image plane at unity magnification to an output scattering layer 94 placed within (or on) output viewing screen 28. FIG. 3 symbolizes only one lens unit per pixel region 102, but sub-arrays of lenses may also be used as dimensions require and allow. The exact height 32 of the viewing screen, G3, can be adjusted to defocus the system with just enough blur to soften the appearance of the demarcation lines. And, the more Lambertian the behavior of the scattering mechanism involved, the wider become the associated output viewing angles 100. The generalized behavior of lens array 62 is illustrated in FIG. 3 with rays 88 from virtual image point B (corresponding to one set of rays from point A on emitting region 24) collected and imaged as rays 96 forming real image point C on the lens's image plane 94, whereupon they are scattered by viewing screen 28 into a fan of output rays 100. In direct beam illumination applications, a solution to the demarcation lines between virtual images is to defocus the system, so that the relayed output images are not sharply focused on output screen 28 (used with little or no scattering layer 94). Lens array 62 may be a two-dimensional array of plano-convex aspheric surfaces, each having square (or rectangular) boundaries as needed to correspond to pixel dimensions, which in the examples given are 2W by 2W.
The lens array 62 may also be composed of aspheric Fresnel lens elements, or of two parallel sheets of aspheric plano-convex or Fresnel cylinder lens arrays, each cylinder lens having a width corresponding to that of the corresponding pixel width (2W). In this latter case, the two sets of cylinder lens arrays are oriented so that their cylinder axes are orthogonal to each other. For the shortest possible focal length, however, a stack of two aspheric plano-convex lenses (bulk or Fresnel), vertices facing each other, might be used for each pixel. This would mean registering two parallel sheets of lens arrays in place of the single sheet depicted in FIG. 3. For larger sized pixels, more than one shorter focal length lens can be used within each pixel region. Regardless of the lens format used, the effective lens focal length is set so that it is approximately half the lens's elevation above the virtual image plane 66, which can be no closer than G2 + S1 + V. At the same time, the viewing screen 28 is elevated above the lens plane an equal distance G3. In this situation, the total thickness, 26, T of system 10 becomes 4F + G1' - V, where F is the minimum focal length of lens array 62, G1' approximately the width of the emitting region 110, and V the depth of the virtual image plane 66 below the plane of prism sheet 58 (which, as will be proven later, is about 0.75W). When the emitting regions are taken as 8 mm squares, the thickness of the two prism sheets 0.3 mm, and the spacing between them, S1, near zero, the minimum focal length of lens array 62, when composed of only one lens element per pixel, becomes about (0.75W + 0.3)/2 or 3.15 mm, which is shorter than practical for a single pixel-sized lens element. The shortest practical single lens focal length covering the entire 16 mm x 16 mm aperture (22.6 mm diagonal) would be about twice the semi-diameter, or 22.6 mm, making total thickness 26 more than 90 mm.
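The focal length arithmetic of this example can be checked with a short numerical sketch (Python is used here purely for illustration; the function name and the refractive index of 1.5 for the lens material are assumptions, while the 8 mm emitter width and 0.3 mm prism stack are the values given above):

```python
def relay_lens_geometry(W_mm, prism_stack_mm=0.3, n_refr=1.5, lenslets_per_side=1):
    """Sketch of the relay-lens sizing discussed in the text.

    The effective focal length is set to about half the lens elevation
    above the virtual image plane, i.e. F ~ (0.75*W + prism_stack)/2.
    """
    pixel_mm = 2 * W_mm                      # each output pixel spans 2W x 2W
    F = (0.75 * W_mm + prism_stack_mm) / 2   # minimum focal length, mm
    lenslet_mm = pixel_mm / lenslets_per_side
    radius_mm = F * (n_refr - 1)             # plano-convex lens: f = R/(n - 1)
    return F, lenslet_mm, radius_mm

F, lenslet, R = relay_lens_geometry(8.0, lenslets_per_side=7)
print(F, lenslet, R)   # ~3.15 mm focal length, ~2.29 mm lenslets, ~1.57 mm radius
```

With a 7 x 7 sub-array the lenslet width and curvature reproduce the 2.28 mm and 1.57 mm figures quoted in the following paragraph.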
One practical way of achieving the more preferable focal length of 3.15 mm is to use a 7 x 7, or 49 lenslet, sub-array covering each 16 mm x 16 mm pixel area, with each lenslet in the sub-arrays truncated for this example to 2.28 mm squares, each having an effective spherical radius of curvature of 1.57 mm. If this were done, total thickness 26 becomes about 12 mm plus the thickness of light source 1, which is more suited to the applications of interest. In this manner, the arrangement shown in FIG. 3 converts each emitting area 24 on light source 1 to a corresponding emitting area or pixel (102 in FIG. 3, 108 in FIG. 4) on output screen 28, with the spaces between the original emitters (24 in FIG. 3 and 110 in FIG. 4) effectively eliminated by the optical manipulations. System performance is improved considerably, however, by adding a physical structure 84 in between, and otherwise bounding, all emitting areas (24 in FIG. 3 and 110 in FIG. 4), both to minimize pixel-to-pixel cross talk and to serve as a natural spacer of thickness G1' for the lower prism sheet 58. The processing of virtual images 26, 27, 108 and 109 in FIG. 3 and FIG. 4 is independent of the presence of structure 84 and its sidewalls 85. The sidewalls serve to prevent high angle light rays from the emitting region 24 itself, and any initially reflected light from prism sheet layers 58 and 60, from reaching or returning to prism sheets 58 or 60 outside the boundary lines 102 of the corresponding output pixel. When these sidewalls 85 are made reflecting, such rays scatter within the cavity until they are randomly converted to one of the correct angles for transmission as output light from prism layers 58 and 60 within the pixel boundary. 2.1.2 Compartmentalized Prism Spacing Layers A generalized three-dimensional view of one such structure 84 is given in FIG.
5 showing a hollow faceted version wherein the sidewalls 85 surrounding any discrete emitting region 24 are tilted at an angle 87, φ, relative to the system axis 5, equal to Tan-1(W/2G1'). If this hollow isolating structure is made of (or coated with) a white diffusely reflecting material, which is preferable, the sidewalls of structure 84 and the base layer of prism sheet 58 form boundaries of an effective integrating cavity whose multiplicity of reflections improves the percentage of light transmission through the prism sheets and the light distribution uniformity within any pixel's output area (102 in FIG. 3 or 116 in FIG. 5). The structure shown in FIG. 5 can be compression or injection molded in a plastic such as acrylic or polycarbonate that has been filled with particles of a finely divided white light scattering material such as titanium dioxide. The structure can also be formed, in plastic sheets up to about 10 mm in thickness, by an embossing (or casting) process wherein the pyramidal tooling needed to generate each of the four common sidewalls, one version of which is shown in FIG. 6, is made thicker than the film (or resin) itself so that it punches through the molten sheet (or resin) material to a non-molten carrier layer (or the air above the resin), and so generates the array of clear holes 126 needed in FIG. 5 to permit efficient light transfer from the emitting regions 24 of light source 1. The molded, embossed or cast material can also be a composite of polymer and any second phase, including glass, ceramic or metal, so as to achieve specific mechanical, thermal and/or optical properties. The compatibility of this structure with the image-shifting function of the prism sheets themselves, as well as some other beneficial forms for this layer, will be covered in more detail below.
Qualitatively, however, the most important concept is that any light scattered from the sidewalls, which then appears as if emitted by the sidewall surfaces themselves, contributes to virtual images of those same sidewalls that shift only inwards towards the center of the cavity, otherwise overlapping the shifted virtual images 26, 27, 108, and 109 of the emitting region itself. As will be explained more thoroughly below, the distance light from any point is shifted by the prism layers 58 and 60 relates to the specific depth of that point of emission beneath the base of the prisms and, as mentioned earlier, to the apex angle 6 of the prisms themselves. The closer the particular emission point is to the base of the prisms, the smaller the shift; the further the point from the base of the prisms, the larger the shift, for any given apex angle 6. Because of this, sidewall light is unable to shift into neighboring cavities, and appears as the tilted images 104 in FIG. 3. This is beneficial to the image-forming performance of the pixels, as it virtually eliminates pixel-to-pixel cross talk. 2.1.3 Bi-Layered Prism Sheet Illuminators A thinner bi- rather than tri-layered alternative to the arrangement of FIG. 3 is indicated generally in FIG. 7, where micro lens array 62 of FIG. 3 has been eliminated, and output screen 28 is arranged to display or convey light from a contiguous or slightly overlapping array of well-defined virtual images 102 that result from the system's prismatic manipulation of input light. A chief advantage of the bi-layered approach is that by eliminating relay lens layer 62 of FIG. 3, the system of FIG. 7 can be made significantly thinner. Total thickness 22 of the bi-layered system, T, reduces to G4 + G1' plus the thickness of light source 1. With 16 mm square output pixels and the 3.15 mm focal length relay lens array used in FIG. 3, the total thickness in FIG.
7 depends primarily on the prism offset G1' needed to make the 8 mm emitting regions appear contiguous on the output plane 94. When using a single 90-degree prism sheet, the condition of contiguous displacement occurs when the offset G1 is substantially equal to the emitter width W. When using two orthogonal 90-degree prism sheets, however, the offset G1' is somewhat less than W. By both ray trace modeling (using the optical system modeling software ASAP™ produced by Breault Research Organization) and direct laboratory experiment, it is determined that G1' is approximately 0.625W. This means that a system using prism sheets whose prism elements have standard 90-degree apex angles can be less than about 5 mm thick plus the thickness of light source 1, about a 2.5x thickness reduction over the 12 mm thick system of FIG. 3. Then, since the prism sheet offset distance, G1', for the perfect image displacement of FIG. 4 can be reduced by means of adjustments to the prism element's apex angle 6, even thinner systems 10 can be created when so desired. Being able to truncate the illuminator system thickness at the height of the upper prism sheet contributes considerable thickness reduction. The compartmentalized spacer layer 84, whose sidewalls 85 can be made diffusively reflective, reduces visibility of virtual image demarcation lines, as does any scattering layer 94 used within output screen 28. The multi-layer arrangements of FIGS. 3 and 7 are generally preferable for illumination applications involving tightly spaced emitter arrays, where the spaces between emitting elements are about equal to or less than the size of the emitting apertures themselves. When applications call for considerably larger area output pixels, the prism sheet layers are replaced by an array of micro reflectors whose input apertures match the emitting apertures, and whose output apertures are contiguous by design.
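As a quick arithmetic check of the thickness comparison above (a sketch only, using the 8 mm emitter width and the empirically determined 0.625 offset ratio stated in the text):

```python
# Bi-layered prism sheet thickness sketch (illustrative values from the text).
W = 8.0                     # emitter width, mm
G1_prime = 0.625 * W        # prism elevation for contiguous virtual images: 5.0 mm
trilayer_mm = 12.0          # tri-layered (relay lens) system from the earlier example
print(G1_prime, trilayer_mm / G1_prime)   # 5.0 mm, and about a 2.4x reduction
```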
2.1.4 Multi-Layered Micro-Reflector Illuminators An alternative output array structure is illustrated in FIG. 8 in which the virtual image forming prism sheets 58 and 60 of FIGS. 3 and 7 have been replaced by a two-dimensional layer of micro-reflectors similar to the compartmentalized diffusely-reflecting spacer layer 84 shown previously in FIGS. 3, 5 and 7. In this instance, however, the reflecting sidewall 136 is a specularly reflecting one, and its mathematically derived shape is critical to function. Wide angle input light from each emitting aperture 24 enters the associated specular reflecting cavities of FIG. 8, wherein it is transformed, by the series of specular reflections that result, into output light whose angular spread is reduced from the input spread in a determinable manner. Since the reflecting elements are themselves made two-dimensionally contiguous, as in FIG. 5, the output light emitted from the array is itself similarly contiguous. The micro reflector boundaries do form visible demarcation lines in the output light, but these generally fine boundaries can be blurred in numerous ways, including allowing some purposeful light leakage or cross-over to occur at the reflector boundaries. FIG. 8 shows the cross-section 123 of several emitting pixels as well as three-dimensional perspectives of single pixel units 121 and 127. The pixel design of perspective 121 crosses appropriate two-dimensional sidewall shapes in the orthogonal (x and y) meridians, whereas perspective 127 is for a reflector having spherical symmetry.
On the other hand, when called for, physical boundary walls 133 can be added to isolate the light and its reflections within one pixel from another, thereby substantially eliminating light rays crossing over from one pixel's reflector into the space of a neighboring pixel. By means of micro reflectors, it is possible to magnify emitting region areas beyond the fourfold expansion achieved using bi-layered prism sheets 58 and 60. Because the micro reflector's sidewalls are made to slope outwards towards an enlarged output aperture 102, in principle every input light ray is transmitted as output. No input rays can be trapped internally within specular reflectors 130, as they can be by total internal reflections within prism sheets 58 and 60. The reason for this is that there is no combination of specular reflections from outward sloping sidewalls 136 that prevents any input ray from becoming an output ray. Then, if the outward sloping reflecting sidewalls are shaped purposefully, substantially all output rays can be made to behave in an organized manner. There are at least two advantageous ways of shaping the outward sloping reflector sidewalls for an efficient conversion of input light to output light. One advantageous sidewall shape is that of a concave conicoidal reflector used in conjunction with a polarization-selective mirror plane, such as has been described for other illumination applications in U.S. Patent 6,213,606. In this case, input light is injected through a small aperture made in the reflector's vertex, and multiply reflected output light exits through the reflector's outermost aperture. Another advantageous sidewall shape is provided by a tapered non-imaging optical concentrator similar to an integrating bar. In this case, input light enters the smaller end of the reflecting element and exits the larger end. 2.1.4.1 Hyperboloidal Reflecting Elements A hyperboloidal reflective sidewall shape is shown schematically in FIG.
9 for the side view of any single pixel unit of what is actually a contiguous two-dimensional array. In this case, the output aperture of the pixel, 150, is made considerably larger than the size of the input aperture, 152, to prevent or minimize any losses associated with light return and re-radiation by this aperture 152. As above, the input emitting aperture may be either the output emitting surface of a light emitting device such as an LED (or OLED) 70, or the diffusive aperture 24 of a cavity 72 (such as in FIG. 8) containing one or more light emitting devices 70. If the input aperture 152 were 2 mm in diameter, the output aperture would preferably be 10-20 mm in diameter or larger. The total lumens flowing out of input aperture 152 become substantially the total lumens flowing out of the output aperture 150, less any losses due to absorption and total internal reflections along the way. The optical path of a given extreme ray 154 leaving from point O on input aperture 152 to point D on the output aperture 150 is in its simplest configuration a 3-step process, as illustrated by rays 156, 158, and 160 in FIG. 9. Input light ray 154 may be polarized or unpolarized. When it strikes the output screen 131 at point D, it is either polarized linearly by the reflective polarizing layer 164 itself, with one linearly polarized ray reflecting as 156 and the other linearly polarized ray transmitting as ray 160, or just reflected as linearly polarized ray 156. In either case, the reflected ray 156 proceeds back towards the shaped concave reflecting sidewall 136, as if from a front focal point 166 of the hyperbolically shaped concave reflecting sidewall 136.
This ray 156, on reaching the concave reflecting surface at point B, reflects back towards the output screen as if emanating from the reflector's rear focal point 168, and passes through all layers comprising output screen 131, including a wide band quarter wave retardation film 170, the reflective polarizer 164 and the screen 172. The screen 172 may contain a diffusely scattering layer to spread out the light, a collimating lens to narrow the angular output, or both. The reflecting sidewall 136 may be smooth and purely specular, may have a stippled surface to create a range of reflected directions, or may have a smooth reflecting surface and an external scattering layer to create a range of reflected directions. The purpose of adding some light scattering on or near the reflective sidewall, whether by stippling of its surface or by an external scattering layer near its surface, is to soften any non-uniformities associated with the input aperture, thereby improving the spatial uniformity of the output light. A special case is presented when input ray 154 is un-polarized. Selective reflecting layer 164 linearly polarizes the directly transmitted output ray 162, and the multi-step (O-D-B-C) reflection process converts output ray 160 to the same polarization as ray 162. As a result, there is a composite output distribution with half the lumens spread over +/- θ as if emanating from point O, and the other half spread over +/- Ψ, as if emanating from point G, 168.
Uncorrected, such an angular (and spatial) mix may not be appropriate for every lighting and display application, but may have special benefits for others, particularly in general lighting when providing directed and flood illumination simultaneously. The principal purpose of reflective sidewall structure 136 is to spread out the lumens delivered by the input aperture 152 over a geometrically expanded output aperture (the contiguous pixel) 150 with the smallest loss of light, the least cross-talk with neighboring pixels, and in the thinnest overall multi-layer system thickness, T, possible. When this sidewall shape is made hyperbolic (or approximately so), as in the cross-section of FIG. 9, the input light rays follow the deterministic paths described when the reflective polarizer plane 164 is placed a distance 174, H, above the plane of the hyperbola's vertex point O, H being equal to 0.5 times the distance between the front focal plane and the vertex plane, F2. This positioning extends the total optical path length significantly without increasing the system's thickness. Even though the light originates at point O, its exit through the output aperture 150 at point C is as if the light actually originated at the hyperbola's back focal point 168, a distance F + A further below, F and A being the parameters of the hyperbolic function. When A becomes very large, the hyperbolic function behaves more and more as a parabola, and the output rays 160 appear to come from infinity, being nearly parallel rather than diverging. The mathematics of a hyperbolic reflector is summarized by equations 1-3, which describe the hyperbolic reflector's concave sag 178, Y, as a function of the effective semi-diameter 180, X, which can be thought of as the reflector's radial coordinate, and the salient reflected ray. X = B[((Y + A)^2/A^2) - 1]^0.5 (1); X = Tan θ (F2 - Y) (2); Tan Ψ = Xo/(Yo - F1) (3). The parameters A (190 in FIG. 9), B, C, F1 (186 in FIG. 9) and F2 (176 in FIG.
9) are the hyperbolic constants; θ is the angle 182 that an extreme ray 154 makes with the system axis. The concave sag at any point B, Yo, is determined by equating equations 1 and 2. Once solved for Yo, the corresponding Xo is determined by substitution in equation 2. Then, the resulting maximum output angle 192, Ψ, is set by equation 3. The salient hyperbolic parameters A, B and C are given in equations 4-7 for the system of FIG. 9. When F1 approaches -infinity, the reflector shape of equation 1 is parabolic, and the output rays 160 proceed generally along the system axis, the angle Ψ approaching zero. A = H - K1 (4); C = F2 - H + A (5); B = (C^2 - A^2)^0.5 (6); K1 = (F1 + F2)/2 (7). H in these equations is the location of the hyperbolic vertex point O on the Y-axis (nominally 0), F1 is the location on the Y-axis of the back focal point 168, and F2 is the location on the Y-axis of the front focal point, 166. F2 is a positive number and F1, a negative number. The true focal length, F, is F2 + A (188 in FIG. 9). The reflector's eccentricity, E, is as always, F/A. The wider the angular spread of emitted light, θ, the larger the output aperture satisfying the above conditions. Choosing the extreme ray of the emitted input light determines the reflector size and thereby the size of the illuminating pixel. Examples of reflector sizes can be calculated directly from these expressions, with reflecting plane 164 at the prescribed height 174 above the vertex point O, F2/2. As one illustration of this, suppose we wanted to place a 0.5 mm x 0.5 mm LED having a +/- 60-degree angular spread in input aperture 152 of a hyperboloidal surface of revolution with rectangular (or square) truncation. With hyperboloidal parameters F = 38 and A = 32, the reflector reaches a rim height of almost 2 mm at the final reflection point B in FIG. 9. The semi-diameter at this point is about 7 mm, meaning that the output aperture is approximately a 10 mm by 10 mm square.
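Equations 1-3, with parameters 4-7, can be exercised numerically. The sketch below (Python, for illustration only; the function name and the bisection approach are not part of the specification) solves equations 1 and 2 for the extreme-ray reflection point and then applies equation 3, using the worked example's values F = 38, A = 32, H = 0, and θ = 60 degrees, with the back focal point taken a distance F + A below the vertex as stated above:

```python
import math

def hyperboloid_extreme_ray(F=38.0, A=32.0, H=0.0, theta_deg=60.0):
    """Find the extreme-ray reflection point (Xo, Yo) and output angle Psi."""
    F2 = F - A                      # front focal point: true focal length F = F2 + A
    C = F2 - H + A                  # eq. (5)
    B = math.sqrt(C**2 - A**2)      # eq. (6)
    F1 = -(F + A)                   # back focal point location on Y-axis (negative)
    tan_t = math.tan(math.radians(theta_deg))

    def sag_x(y):                   # eq. (1): reflector semi-diameter at height y
        return B * math.sqrt(((y + A) / A)**2 - 1.0)

    def ray_x(y):                   # eq. (2): extreme ray launched from vertex O
        return tan_t * (F2 - y)

    lo, hi = 0.0, F2                # bracket the single intersection, then bisect
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sag_x(mid) < ray_x(mid):
            lo = mid
        else:
            hi = mid
    Yo = 0.5 * (lo + hi)
    Xo = ray_x(Yo)
    psi = math.degrees(math.atan(Xo / (Yo - F1)))   # eq. (3); F1 is negative
    return Yo, Xo, psi

Yo, Xo, psi = hyperboloid_extreme_ray()
print(Yo, Xo, psi)   # rim height ~1.9 mm, semi-diameter ~7.1 mm, Psi ~5.7 degrees
```

These values match the figures quoted in the text: a rim height of almost 2 mm, a semi-diameter of about 7 mm, and a maximum output angle just under +/- 6 degrees.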
In this case, the reflective polarizing plane 164 is at a height of 4 mm above the reflector's vertex. This configuration would be accomplished when the phase retardation substrate thickness is about 1 mm. In this case, and without an output lens, the maximum output angle, Ψ, is just less than +/- 6 degrees, and the reflector is almost parabolic. The conic constant is -1.09, which for an ideal parabola would be -1. The pixel's output brightness to a viewer depends, as always, on the angular spread over which the lumens are distributed. The wider the angular spread, the wider the range of possible viewing directions, but the lower the brightness. The narrower the angular spread, the higher the brightness, but the more limited the range of possible viewing directions. Layer 184 in FIG. 9, a light-spreading, light scattering layer, is used to set both the pixel brightness and the angular extent. 2.1.4.2 Non-Imaging Optic Reflector Elements Specularly reflecting sidewalls 136, mathematically shaped so that the number of sidewall reflections a ray experiences between input and output apertures is minimized, and so that an even distribution of output power is created throughout the aperture, are generally known in the prior art as non-imaging concentrators (and sometimes as compound parabolic concentrators). A two-dimensional array of such reflectors, arranged similarly to the array conveyed in FIG. 5, can be used to collect light from an array of input light emitters while generating a cohesive output beam 100 whose angular range has been restricted by the details of the sidewall design.
More generally, such ideal power transfer can be arranged to behave as an array of θi/θo concentrators, in that the collective array transforms input light of maximum angle θi to output light of maximum angle θo by virtue of the well established Sine Law: Ai Sin^2 θi = Ao Sin^2 θo, where Ai is the area of each individual emitting region, Ao is the area of each individual output aperture, and θi and θo are the respective input and output half angles. Such ideal etendue preserving designs, even for an array, transfer the brightness (and uniformity) of the source apertures, which in this case are the set of well-separated emitter regions 24, to the collective output aperture made up of the sum of individual output apertures 102. Less ideal sidewall designs, such as, for example, the linearly tapered walls of FIGS. 3 and 7 when used without prism sheets 58 and 60, may transfer nearly the same output power as the ideal designs, but spread that power over a larger than ideal angular range, and show greater levels of spatial non-uniformity than the ideally shaped sidewalls would. Such non-imaging micro reflector array configurations are most beneficial when each micro reflector's emitting aperture is relatively small (less than 5 mm on the diagonal) and when asymmetric output angles are desired in the two output meridians. When the emitting aperture is larger than this (i.e. more than 5 mm on the diagonal), the non-imaging concentrator approach leads to reflector depths that may be considered too large for practical multi-layer systems in many situations. One example of a potentially beneficial use of a non-imaging reflector shape is provided by a two-dimensional array of 0.5 mm square emitting apertures 24, such as would result from the wide-angle outputs of light emitting diode (LED) chips.
When output angles of +/- 22.5 degrees and +/- 17.26 degrees are required in the two output meridians, the reflector's output aperture in accordance with the Sine Law becomes 0.5/Sin(22.5) or 1.31 mm and 0.5/Sin(17.26) or 1.69 mm. This aperture size imposes a limit on the emitting array's density, which becomes in general Ain/Aout, and in this example, only 11%. By comparison, emitter densities possible by means of the method of FIGS. 3, 4 and 7 are greater than 25%, with Ain/Aout becoming W^2/(2W)^2. Yet as will be discussed later, the throughput efficiency of a non-imaging reflector is potentially much greater than that of the two prism sheets 58 and 60 of FIG. 7, which is approximately 50%. When the non-imaging reflector is a transparent dielectric whose reflecting walls are created by its air-dielectric boundaries, throughput efficiency as high as 90-95% is possible. When the non-imaging reflector is formed by metal-coated sidewalls 136, throughput efficiency is lower, but often as high as 80%-90%. Ideal rays leaving the reflector's input aperture 24 strike only one sidewall a single time, leading to the high efficiency. Non-ideal rays may strike more than one sidewall, reducing theoretical efficiency. When each LED in the array contributes 20 lumens to the input apertures, the illustrative air-filled non-imaging array with 85% efficiency achieves 7.68 lumens/mm^2. Lumen density increases to 9.9 lumens/mm^2 when the output aperture is 1.31 mm square. The same array covered by prism sheets 58 and 60 spaced for contiguous virtual images achieves about 10 lumens/mm^2. Hence, despite the enlarged output aperture of a non-imaging reflector, the net lumen density possible for the same output conditions is about the same as that achieved with prism sheets 58 and 60. The main tradeoff is layer thickness. The depth (or thickness) of a non-imaging reflector is governed by its input and output aperture sizes and the output angle along the same meridian.
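The aperture and lumen density figures above follow directly from the Sine Law; the sketch below (Python, illustrative only; the function name is an assumption) reproduces them for the 0.5 mm LED example:

```python
import math

def sine_law_output_side(a_in_mm, theta_out_deg):
    """Sine Law: Ai*sin^2(theta_i) = Ao*sin^2(theta_o). For a Lambertian
    input (theta_i = 90 deg), each output side is a_in / sin(theta_o)."""
    return a_in_mm / math.sin(math.radians(theta_out_deg))

# 0.5 mm square LED apertures; +/-22.5 and +/-17.26 degree output meridians
x_out = sine_law_output_side(0.5, 22.5)    # ~1.31 mm
y_out = sine_law_output_side(0.5, 17.26)   # ~1.69 mm
density = (0.5 * 0.5) / (x_out * y_out)    # emitting-array density, ~11%

# Lumen densities for 20 lumens per LED
nio = 20 * 0.85 / (x_out * y_out)          # 85%-efficient non-imaging reflector, ~7.7 lm/mm^2
nio_sq = 20 * 0.85 / x_out**2              # with a 1.31 mm square output aperture, ~10 lm/mm^2
prisms = 20 * 0.50 / (2 * 0.5)**2          # prism sheets, ~50% efficient, (2W)^2 aperture: 10 lm/mm^2
print(x_out, y_out, density, nio, nio_sq, prisms)
```

The small differences from the 7.68 and 9.9 lumens/mm^2 quoted in the text arise only from the text rounding the aperture sides to 1.31 mm and 1.69 mm before dividing.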
For the case where the output angles are +/- 22.5 degrees in each output meridian, the reflector length becomes 2.86 mm. While this result is almost 5 times thicker than the comparable two-prism sheet alternative (which can be as thin as 0.3 mm for the two sheets plus the preferable (0.625)(0.5) mm gap spacing), it is still relatively thin for many application examples that follow. Concentrator length can be truncated, but the larger the truncation, the greater the negative effect on power transfer efficiency and uniformity. The best way to reduce concentrator length, without compromising power transfer efficiency or uniformity, is to decrease the ratio of the pixel width, W + W', to the emitter width W, where W' is the width of the non-emitting spaces between emitters. In the above example, this ratio is 2. If it were reduced to 1.5, and the 8 mm emitter width maintained, the ideal pixel size would fall from 16 mm to 12 mm, making the space between emitters 4 mm instead of 8 mm. The associated concentrator length drops 86% to 15.8 mm, which is still thicker than preferable in many applications. The concentrator length can also be reduced by a folded hollow lens-mirror combination much like that drawn in FIG. 9, but with polarization conversion layers 164 and 170 replaced by a lens element. In this approach, some non-imaging ray paths are folded back towards the mirror by total internal reflection from the lens. 2.1.4.3 Micro Reflector Fabrication Whether using linearly tapered sidewalls or mathematically shaped sidewalls nearer to those of either the hyperboloid or the ideal concentrator, light leaving the emitting region 24 enters the pixel medium 144 (air or dielectric) that fills the volume defined by specularly reflecting sidewalls 136. When this medium 144 is a transparent dielectric such as acrylic or silicone, specular reflection occurs by total internal reflection at the sidewalls, provided the gray shaded volumes 130 in FIG. 7 or FIG.
8 are filled with air or another lower refractive index material. When the gray shaded volume 130 in FIG. 7 is made of a transparent material, an opaque material, or one that has low specular reflectivity, it must be coated with a thin specularly reflective layer or film such as, for example, aluminum, enhanced aluminum or silver, to provide the basis for efficient specular reflection. Once the smoothly shaped sidewalls 136 are coated, all light rays 140 that strike them will be reflected with an efficiency determined by the reflectivity of the coating, and these rays 142 will generally exit without further reflection through the structure's output aperture 138 within a prescribed range of output angles as bounded by the Sine relation above. As described earlier, the reflective spacer structure 84 (in FIG. 3 and FIG. 7) or 130 (in FIG. 9) can be fabricated as a plastic sheet using a forming tool 146 such as the one represented schematically in FIG. 6. Whether casting and curing, embossing, injection molding, or compression molding, the cured or cooled plastic or plastic-composite sheet can be pulled away from the linearly or functionally tapering sidewalls 148 of the tool 146 without interference. Each element in the tool has a base width 154, W + W', and a top surface 150 of width 156, W. The salient molding tool dimensions 154, 156, and 158 in FIG. 6 are traditionally made slightly greater (or less) than the target dimensions reflected in FIG. 3, FIG. 7, FIG. 8, and FIG. 9 to allow for any process expansions and shrinkages. When casting or embossing, top surface 150 is made to extend slightly beyond the specified spacer heights G1' and G6 as given in FIG. 7 and FIG. 8 (i.e. L-G1' or L-G6). The reason for this is to assure that the process yields a clean, clear hole in the molded sheet that matches the size of the emitting region. When casting, the casting material is filled to stop line 159 in FIG. 6.
When embossing, tool 146 actually protrudes through the (L-G1') or (L-G6) mm thick sheet to be embossed into a compliant carrier film material attached to it.

2.2 Types of Emitting Arrays Covered

In general, the present invention applies to one-dimensional arrays of emitting stripes (FIGS. 1-2) and to two-dimensional arrays of emitting regions (FIGS. 3-9). Preferable one-dimensional emitting arrays are sets of parallel fluorescent tubes or channels, parallel fluorescent tube lamps, a lengthy single fluorescent tube lamp bent (or molded) into a serpentine pattern whose major sections run parallel to each other, or a planar device within which a gaseous plasma has been forced into concentrated stripe-like or zone-like regions. This emitter type is best suited to specialized LCD backlighting applications, as will be illustrated by subsequent example. Preferable two-dimensional emitting arrays are spatial arrangements of discrete emitting regions, including planar arrays of pre-packaged LEDs or bare LED chips. These discrete arrays may be a single line of equally spaced elements or a series of equally spaced lines of equally spaced elements. Emitter elements within the array, whether fluorescent stripes or discrete LEDs, are powered (separately or in groups) by external controlling electronics. The controlling electronics for fluorescent stripes is a ballast supply that provides the high frequency form of high voltage needed to create and maintain the emitter's gaseous discharge. The controlling electronics for LED arrays is a switchable source of low voltage dc interconnected to sets of LEDs having the same color, leading to widespread uses in lighting and illumination - applications that will be described by specific examples below.
The controlling electronics may also be applied to individual LEDs via an image processor circuit (or circuits) that determines proper timing, emission duration, and power-level (color balance) for each LED (or LED sub-group) in the array. Individually powered LED arrays lead to applications in the display of two-dimensional images. The range of lighting applications enabled by LED arrays within the present invention is extensive and will be considered in detail, including preferable packaging arrangements and application examples. After this, an example will be given for the use of fluorescent stripes and tubes in the special cases of LCD and transparent image backlighting.

2.2.1 Pre-Packaged LED Arrays

Commercial LEDs can be arranged in arrays, but the output of the array is ordinarily very non-uniform and spread over a wide range of angles. Lenses and diffusers can be used to improve uniformity and directivity, but such improvements reduce efficiency. With the present invention, in the form of FIG. 7 (8 or 9), commercial LED arrays can produce uniform beams of light in thinner structures and with higher efficiency than conventional alternatives. A wide variety of LEDs are presently manufactured in two types of packages: clear plastic elements of a wide variety of sizes and shapes, or 1.5-3.0 mm square ceramic surface mounts suitable for conventional printed circuit boards. The physical package size determines how many lumens can be supplied in a given application area. The package's optical design determines the characteristic with which the lumens are emitted. Some packages have a lens that causes a more directional emission of light. Other packages emit light in all directions. All present packages are larger than their emitting apertures, making light strongest from the center of the package and giving conventionally packaged LEDs a point-like appearance. FIG.
10 provides one example of the way commercially packaged LEDs can be used within the present invention. Discretely packaged LEDs (or groups of LEDs) 157 can be used as the array elements themselves (i.e. 36 in FIG. 7) by soldering their discrete packages 161 in equally spaced rows and columns on a printed circuit board 163 or a ceramic circuit substrate, and then arranging appropriate spacer layer 165 and diffuser layers 167, so as to best implement the cross-section of FIG. 7 (8 or 9). Bus bar circuitry 169 (anode) and 171 (cathode) is provided for each type of LED used. For simplicity, the circuit illustrated in FIG. 10 is for a single type of LED, such that all LEDs in the array are powered simultaneously. More complex circuitry is provided when each package 161 contains separate red, green and blue LEDs. The specific example of FIG. 10 presumes the use of commercially available 3 mm square ceramic surface mount packages 161 such as those manufactured by Nichia, whose 2.3 mm diameter circular cavity 173 contains an interconnected LED chip and is encapsulated with optically transparent plastic epoxy 175. Exploded detail 141 shows the structure of an idealized 6-package by 6-package array where the spacing between packages 161 is equal to (or less than) their physical width, as described above in conjunction with FIG. 4. Preferably, cavity 173 is shaped as a square. When this is not possible, diffusive reflecting layer 167 is combined with a matching array of diffusing screens 177 disposed just above each package 161 such that diffusion screens 177 become the actual illumination source from each underlying package 161. Exploded detail 141 in FIG. 10 also shows the sequence of multi-layer optics arranged according to the approach of FIG. 7 that is used to create the uniform output beam being sought.
In this particular example, transparent spacer layer 165 is positioned directly above the emitting apertures 177 to provide the exact spacing needed between the emitting apertures and prism sheet layers 58 and 60 (G1' as in FIG. 7). Prism sheet 58 may be optically coupled (laminated) to transparent spacer layer 165 to minimize any unrecoverable power losses due to total internal reflection within the spacer. The collapsed multi-layer illuminator is shown in detail 143. Light is emitted uniformly over the full aperture of multi-layer illuminator 143, which for the illustrative 3 mm packages is 36 mm by 36 mm. The same conventional packaging approach may be used for just a single row of packaged LEDs as illustrated by FIG. 10 in details 145 and 147. Exploded detail 147 shows the same vertical layout applied to 6 equally spaced LED packages 161. In this case, the full aperture size is the width of 2 packages and the length of 12 packages. Hence, using the illustrative 3 mm packages 161, and their diffusive output layers 177, output light would emit through layers 58 and 60 over a 6 mm by 36 mm region. Using prism sheets 58 and 60 with 90-degree prisms, the output light would be spread over substantially a +/- 22.5-degree angular cone. Array illuminators 143 and 145 can be used in a variety of lighting and backlight applications, including the red high mount stop lamp on the rear deck of automobiles. For such use, the size of the array and the type of diffusive layers added are adjusted to meet the visual needs of the market. Other applications of these arrays will be described shortly. The main practical limitations associated with conventional packaging described in FIG.
10 are the physical limit they impose on the number of lumens that can be delivered per unit area and the wasteful redundancies of discrete packaging, which lead to higher than necessary manufacturing costs. These limitations are addressed by introducing a more preferable packaging arrangement, one in which the constituent LED chips are contained in what becomes a single extended package.

2.2.2 Monolithically Packaged LED Arrays

Best use of the present inventions (FIGS. 3, 7-9) occurs when constituent LED chips are arranged in a monolithically laminated multi-layer package. A distributed manufacturing approach is adopted wherein there is but a single continuous package structure accommodating every LED arranged in a super-array, containing many sub-arrays. This approach is more efficient than using discrete printed circuit boards 163 and discretely packaged LEDs 157, as has become common practice, or even extended electronic circuit boards with individually die-bonded LED chips and discrete conventional optics glued above them. The multi-layer invention of FIG. 7, for example, can be implemented using very large (if not continuous) sheets or panels for each and every layer, with no need for the inefficiency of handling discrete elements, other than the LED chips 70 themselves. This distributed multi-layer packaging approach is shown schematically in FIG. 11, with multi-layer composite panel 181 much larger in physical extent than any constituent sub-panel that is to be used as a yielded product. Unlike the discrete circuit boards 163 and packages 157 of FIG. 10, the approach of FIG. 11 is more akin to the multi-layer planar processing used in silicon microelectronics, wherein the distributed multi-layer microelectronic wafers are later diced into individually yielded devices with advantageous economies of scale.
With similarity, overall multi-layer composite panel 181 is later cut or sliced into separate sub-panels 196 (along preplanned slicing lines 191), which may in turn be reduced to even smaller illuminating entities such as bars 183 and plates 179. Layers 163, 167, 165 and 58 are ruggedly laminated. Similar attachment of layer 60 above layer 58 is complicated by the need to maintain an air (or low refractive index) gap between them over the output aperture. One solution is to apply a bonding agent between layers 58 and 60 only in the dead regions surrounding the effective sub-array apertures, with these same dead regions exceeding the width of cut lines 191. Another solution is to add pre-cut pieces of layer 60, and any output diffusing layer 28, as a post process prior to use. Yet another solution is to choose prism refractive index and geometry in layer 58, spacing G1', and the space allowed between the prisms of layers 58 and 60 anticipating a transparent low refractive index adhesive or epoxy filling the gap between layers 58 and 60, rather than air. Fluorinated polymeric liquids manufactured, for example, by Addison Clear Wave LLC or DSM Desotech, can be polymerized with refractive indices as low as 1.42. Prism elements can be formed in acrylates and other polymer materials with refractive indices as high as about 1.7. The distributed manufacturing approach symbolized by the multi-layered panels or sheets of FIG. 11 only presupposes a practical method for distributing and incorporating large numbers of LED chips efficiently within them. Although conventional pick-and-place methods are compatible with this approach, it would be preferable to place the LED chips in the extended arrays in a collective rather than individual manner.
Collective attachment methods are enabled by recent advancements in LED technology creating availability of LEDs with transparent substrates having both electrical contacts on the same side of the chip (allowing so-called flip chip mounting). Such one-sided LEDs can be soldered to metallic circuit elements en masse by heating, generally to re-flow deposited solder contacts for all LEDs at the same time. Collective LED placement is enabled by the continuous packaging structure envisioned herein, and introduced further below. Practical applications vary with the density of illuminating pixel apertures (1,8 in FIG. 11; 102 in FIG. 7), the number of lumens provided by each pixel aperture, and the size and shape of the resulting panel. Some general lighting applications are offered by the present invention used with discrete LED packages, as illustrated by way of two examples that follow. Yet, there is a much wider variety of lighting applications made possible by the distributed packaging approach that will be addressed through additional discussions and examples.

3.0 General Lighting Applications with Pre-Packaged LEDs

Mono-colored light emitting diodes (LEDs) are usually 0.5 mm to 1.0 mm square chips cut from 2" diameter wafers, 6-10 thousandths of an inch thick (0.010" = 0.254 mm). Although the diode itself is formed by epitaxial layers grown on top of the square substrate's surface, light is emitted from the entire chip, which is preferably transparent. While such a chip makes an ideal emitting region 70, manufacturers prepackage it with wires attached, either in a clear plastic bullet-shaped molding, or as contained on a small ceramic circuit board. In either case, the discretely packaged LED can be arranged to emit through a square emitting aperture, and organized with companion LEDs into a planar array that would be favorably created by the present invention. As such, an array of pre-packaged LEDs implemented as in FIG. 7 or FIG.
8 could be used, at least in principle, in a variety of practical general lighting applications. As one of the many general lighting applications possible for an illuminator of the form of FIG. 7, consider the case where each conventional package element 161 (FIG. 10) contains one each of a state-of-the-art red, green and blue LED 70, and that the array of pixels is arranged as in FIG. 10, details 141 (exploded) and 143 (collapsed). Suppose each LED group has an output aperture 24 that is made square and 3 mm on a side, with spacing W' between all emitting squares 24 also 3 mm. Total thickness of multi-layer 143 is approximately 3-3.5 mm, including the 1 mm thickness of LED packages 157, spacer thickness G1' between emitting apertures 177 and prism sheets 58 and 60, and the combined thickness of layers 58 and 60. Spacer thickness G1' for the contiguous output images of FIG. 4 is about 0.625W (or 1.875 mm). High-performance semiconductor LEDs, such as those manufactured by LumiLeds, yield approximately 20 lumens per die at drive powers in the range of 0.25-0.35 watts dc. Assuming adequate heat sinking, and an approximate optical transfer efficiency of 50% from output apertures 218, approximately 30 lumens of mixed red, green and blue light could be yielded from each pixel's output aperture. As industry advancements in the number of lumens per die, Ld, are made over time, n dies are used per pixel, and as the optical transfer efficiency, η, is optimized, the number of yieldable lumens per pixel, nLdη, may become significantly greater than 30.

3.1 LED Equivalent of 100-Watt Light Bulb

Yet, with 30 RGB lumens per 6 mm by 6 mm illuminating pixel 218, the luminous equivalent (1690 lumens) of a 100-watt General Electric SoftWhite™ light bulb can be achieved with only 56 discrete LED packages 157 and a total of 168 light emitting diodes.
If arranged in a nearly square 7 pixel by 8 pixel array, the resulting panel would be 42 mm x 48 mm, and less than 4 mm in overall thickness T', depending on the various layer thicknesses and offsets used. Such a compact lighting element 143, represented schematically in FIG. 10, would have many practical uses, as its life expectancy alone exceeds that of incandescent lamps like the General Electric SoftWhite™ light bulb by more than 100 times. With its 168 diodes driven at 0.25 watts, the total electrical power dissipated would be 42 watts. In addition, the color temperature of the white light emitted by the mixture of red, green and blue LEDs is adjustable electronically, allowing user selectable panel color.

3.2 LED Equivalent of 75 Watt PAR-30 Flood Lamp

As a related example, consider GE's industry standard 75 watt, wide-angle halogen floodlight PAR-30, which delivers 1050 lumens over a useful life of 2000 hours. Using the same configuration and dimensions as just above, equivalent performance can be achieved with the 6-element by 6-element array 143 illustrated in FIG. 10. Outside dimensions are 36 mm by 36 mm, and electrical power, 27 watts. The current worldwide market for all light bulbs is over 1 billion units per year. For solid-state lighting structures of any kind to serve even a small share of this market, manufacturing costs must be reduced towards comparable levels with existing light bulb technologies. Not only does the distributed multi-layer packaging envisioned in FIG. 11 address this need, but it also facilitates panel combinations such as back-to-back arrangement 187 in FIG. 10 and five-sided lighting cube 189.
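The bulb-equivalence arithmetic of sections 3.1 and 3.2 can be sketched directly from the text's stated assumptions (30 RGB lumens per 6 mm square pixel, three LEDs per pixel, 0.25 W drive per diode); this is a worked check, not part of the specification itself.

```python
lumens_per_pixel = 30.0
leds_per_pixel = 3
watts_per_led = 0.25
pixel_mm = 6.0

# 100 W GE SoftWhite bulb: 1690 lumens -> about 56 pixels; the nearly square
# 7 x 8 array gives 56 pixels, 168 diodes, and a 42 mm x 48 mm panel.
pixels_100w = 1690 / lumens_per_pixel            # ~56.3
nx, ny = 7, 8
leds_100w = nx * ny * leds_per_pixel             # 168 diodes
watts_100w = leds_100w * watts_per_led           # 42.0 W
panel_mm = (nx * pixel_mm, ny * pixel_mm)        # (42.0, 48.0) mm

# 75 W PAR-30 flood: 1050 lumens -> exactly 35 pixels; the 6 x 6 array
# (36 pixels, 36 mm square) covers it with 108 diodes at 27 W.
pixels_par30 = 1050 / lumens_per_pixel           # 35.0
leds_par30 = 6 * 6 * leds_per_pixel              # 108 diodes
watts_par30 = leds_par30 * watts_per_led         # 27.0 W

print(round(pixels_100w, 1), leds_100w, watts_100w, panel_mm)
print(pixels_par30, leds_par30, watts_par30)
```

Note that 1690/30 is slightly above 56, so the 7 x 8 panel delivers 1680 lumens, within a fraction of a percent of the SoftWhite target.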
4.0 High Lumen-Density Light Source Panels with Monolithic LED Packaging

The distributed packaging of LED chips within the context of the present invention enables a new class of high lumen-density light sources that potentially replace the high-lumen light bulbs currently in use within many commercial applications, including video projectors, spot and flood lighting luminaires, and automotive head and taillights, to mention just a few.

4.1 LED Light Sources for LCD and DMD Video Projectors

The most demanding application example for monolithically formed LED light source panels formed by the present inventions involves replacing the 90 to 150 watt halogen arc discharge lamps used in all LCD and DMD front and rear image projectors with comparably performing LED light source panels anticipated, for example, by FIGS. 7 and 11. Applying the present invention to LCD and DMD projectors, however, requires a denser packing of LEDs per unit area than any imagined general lighting or illumination need. The reason for this is that the total illumination provided by the LEDs in a projector must pass through an image aperture that is typically less than about 18.3 mm x 24.4 mm in cross-section. Not only is this target illumination area considerably smaller than the conventionally packaged high-lumen panels illustrated in the general lighting examples above, but also the panel's intrinsic +/- 22.5-degree output is too wide for efficient usage without additional angular compression. Projector images are created by LCDs and DMDs, which are best illuminated with beams whose angular extent has been reduced to about +/-12 degrees in air. While lenses can be arranged for this purpose, their effect is to further increase beam area, extending the potential for inefficiency. The implication of this reasoning is that the density of the LED arrays must be considerably greater than is allowed physically by the discrete package sizes of FIG. 10.
The multi-layer packaging approach enabled by the elevated prism sheet bi-layer invention of FIG. 7 is one efficient way to simultaneously satisfy both the beam area and beam angle constraints imposed within efficient projector systems.

4.1.1 Illuminator Constraints in Video Projectors

Halogen arc lamps are the existing sources of illumination used in modern video (image) projector systems. Intense halogen arcs surrounded by a glass bulb typically deliver 60 lumens/watt into free air (or about 6000 lumens for the relatively long lived Philips 100 watt lamp). After inefficiencies for light collection, beam formation, polarization, infra-red filtration, overlap with the SLM's aperture (spatial and angular), imaging optics, and color filtration, to mention the most significant ones, only about 1000 to 1200 lumens actually make it to the projector system's viewing screen. The most significant arc lamp inefficiency comes from its poor spatial and angular overlap with the rectangular 4:3 aperture format of standard LCD and DMD spatial light modulators used to form the image that is to be projected. Beams of light collected from arc lamps are intrinsically circular, which wastes 40% of the collected power. Best SLM performance comes when the angular extent of light passing through the SLM aperture is limited to about f/2.4 or +/-12 degrees. Such degree of collimation is required in most LCDs to achieve a desirable image contrast ratio. And, with DMDs, such angular limitation is associated with the limited range of micro mirror motion. Bulky reflectors and lenses are used with halogen lamps to provide this degree of collimation. Often, other bulky elements are added to improve beam uniformity and remove infrared heat. In addition to this, the physical size of the SLM aperture is made as small as possible because SLM unit cost is directly related to aperture area. Typical LCDs are 1.2" on the diagonal and as small as 0.7 inches.
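The "wastes 40%" figure above follows from simple geometry: the largest 4:3 rectangle inscribed in a circular beam captures only about 61% of the beam area. A quick check, assuming the rectangle's diagonal equals the beam diameter:

```python
from math import pi, sqrt

d = 1.0                      # beam diameter (any units; the ratio is scale-free)
w, h = 0.8 * d, 0.6 * d      # 4:3 rectangle with diagonal d (3-4-5 triangle)
assert abs(sqrt(w**2 + h**2) - d) < 1e-12

# Fraction of the circular beam falling outside the rectangular SLM aperture.
waste = 1 - (w * h) / (pi * d**2 / 4)
print(f"{waste:.0%}")        # 39%, i.e. the text's "40%" of collected power
```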
DMD size, which depends on the size of its individual micro mirrors (e.g. 17 microns square), also depends on the image resolution. When image resolution along the horizontal axis is 1024 pixels, for example, the DMD diagonal is about 0.9 inches.

4.1.2 Rectangular Light Source Aperture Preferred for Efficiently Illuminating a Rectangular Image

LED arrays are intrinsically rectangular and therefore can be readily shape-matched spatially to fulfill the needs of the rectangular LCD and DMD image aperture. Angular matching is facilitated by the behavior of prism sheets 58 and 60 (or the micro reflectors 136), which aside from establishing beam uniformity, also pre-condense output beam angle to +/- 22.5 degrees or less in each meridian. These capabilities, plus the ease with which LED illumination is color segregated, enable LED arrays of the present inventions to illuminate LCDs and DMDs as well as halogen discharge lamps generating roughly twice the number of input lumens. It will be shown, through the series of examples to follow, that with common projection system inefficiencies, a uniform rectangular emitting pixel array matched both spatially and angularly to an associated SLM aperture need only supply about 3000 lumens of white light if the projected screen image is to embody at least 1200 lumens. Then, with each emitting pixel including a red, green and blue LED triad yielding at least 30 lumens of output light within a cone of +/- 22.5 degrees, the perfect angular transformation of this light to +/- 12 degrees by a lens or mirror system, and the routing of transformed light through a 4:3 SLM aperture with a 1.2 inch (30.48 mm) diagonal, a calculation is made of the number and size of light source pixels involved. The SLM aperture is 24.384 mm along the x-axis and 18.288 mm along the y-axis. The effective light source aperture must then be made smaller than this because of the beam expansion produced by a 22.5 degree to 12-degree angular transformation.
The operative equalities between lamp and SLM for these illustrative conditions are therefore (Lx) Sin(22.5) = (24.384) Sin(12) and (Ly) Sin(22.5) = (18.288) Sin(12), where Lx and Ly are the light source dimensions along its two rectangular edges. Consequently, Lx and Ly are 13.25 mm and 9.94 mm respectively. This means that for maximum transfer efficiency the light source's square output pixels must fit completely within this area. Since each tricolor light source pixel is taken as yielding 30 lumens total, we know that at least 100 such pixels are required to yield 3000 lumens. One hundred square pixels distributed in a 4:3 array over this light source aperture forms an array of 11.547 by 8.66 pixels. Since fractional pixels are not realistic physically, the nearest unit pixel array is 12 pixels by 9 pixels, which if feasible, yields 3,240 lumens. For 12 pixels to fit along the source's 13.25 mm edge, each pixel size has to be no larger than 1.1 mm on a side. The implication of this compaction is that it must be possible to collocate 3 high output LED chips within about a 0.25 mm square. Such compaction is impossible whether using any conventionally discrete package 157 (FIG. 10) or the fourfold optical expansion method of FIGS. 4 and 7. High output red, green and blue LEDs available commercially are typically 0.5 mm square. This means at best that the LEDs would need to be mounted touching each other if to fit within the 1 square millimeter density required (high output LEDs are about 0.5-1.0 mm on an edge). Creating such a practically continuous LED wafer is impractical because of heat dissipation requirements, which require LEDs be separated by sufficient clear space between units, perhaps as much as their width. With this separation constraint, the smallest optically expanded output pixel size is actually about (2)(2) or 4 mm on an edge, which is much larger than the 1.1 mm size needed.
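The source-aperture sizing above can be reproduced in a few lines. This sketch uses only values stated in the text (1.2" diagonal 4:3 SLM, +/-22.5 degrees condensed to +/-12 degrees, 30 lumens per tricolor pixel, 10 lumens per mono-colored pixel for the per-color density of the later sections):

```python
from math import sin, radians, sqrt, ceil

slm_x, slm_y = 24.384, 18.288            # 4:3 SLM aperture, mm

# Aperture-angle equality: Lx*sin(22.5) = slm_x*sin(12), likewise for Ly.
shrink = sin(radians(12)) / sin(radians(22.5))
Lx, Ly = slm_x * shrink, slm_y * shrink  # ~13.25 mm x 9.94 mm

pixels_needed = 3000 / 30                # 100 pixels at 30 lumens each
nx = sqrt(pixels_needed * 4 / 3)         # ~11.547 across
ny = nx * 3 / 4                          # ~8.66 down
nx_i, ny_i = ceil(nx), ceil(ny)          # nearest unit array: 12 x 9
pixel_max = Lx / nx_i                    # ~1.1 mm maximum pixel size

# Per-color lumen density once the colors are segregated (10 lm per pixel):
density = (nx_i * ny_i * 10) / (Lx * Ly) # ~8.2 lumens/mm² per color
print(round(Lx, 2), round(Ly, 2), nx_i, ny_i, round(pixel_max, 2),
      round(density, 1))                 # 13.25 9.94 12 9 1.1 8.2
```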
Using 4 mm output pixels, the 13.25 mm x 9.94 mm array becomes only 3 x 2 and yields a total of only 180 RGB lumens - far short of the 3,000 lumens required. Moving the LEDs closer together, so that they squeeze into a 1.75 mm square, only increases total RGB lumens to 360.

4.1.3 Color Segregation and Its Critical Effect on Light Source Aperture Size

By starting with physically segregated red, green and blue light beams, and then combining them so that they overlap spatially, it is possible to create a composite RGB beam having a significantly higher lumen density and a significantly smaller aperture size than would be otherwise possible using a single RGB light source. The reason for this is that a single planar substrate of red, green and blue LEDs cannot be made small enough in area for practical projector use. As was just discussed, each triad of 0.5 mm square red, green and blue LED chips takes up a square area between 1.75 mm and 2 mm on a side. Hence to generate 3,000 lumens, at 30 yielded RGB lumens per triad, and 100 triads overall, implies a 12 x 8 triad array. Since to be used with the present invention these triads must be spaced from each other by their width, this implies that the overall aperture is as large as 32 mm x 48 mm, exceeding the size of the 1.2" LCD. Then, since the output angle must be reduced from +/-22.5 degrees to +/-12 degrees, using such a panel efficiently requires a 66 mm x 88 mm LCD (4.3" on the diagonal). Segregating three separate mono-colored light source panels, and then providing the means for their optical recombination (discussed in more detail separately below), enables a sufficiently high lumen density within a sufficiently compact aperture area. By pre-separating the illuminating beam's constituent colors, each monochromatic light source is only required to supply about a third of the total lumens needed for practical image projection, and can each do so over the full 13.25 mm by 9.94 mm illuminating aperture example provided above.
With this division, the same 1.1 mm square mono-colored output pixels, assumed to yield 10 monochromatic lumens apiece, arranged in the same 12 by 9 pixel array, provide the 1080 lumens minimum needed in each color. Then, as future advancements are made in LED output per chip and in LED coupling efficiency, even more powerful output beams are possible by means of this efficient color segregation method. Conventional halogen arc lamps supply white light that is a fixed mixture of red, green and blue. Modern projection systems based on these lamps already use dichroic color-splitters in many different arrangements to physically separate this white light into red, green and blue input beams. The reason they do this is not to increase light output per unit area, which is fixed by the halogen lamp's arc, but rather to allow the light source's use with three separate monochrome LCDs. These same dichroic color splitters will be applied with the present invention as the means for overlapping the three mono-colored light source panel beams. And, only with the LED light source panels of the present invention (FIGS. 7, 8 and 11) can the emission colors be so simply and efficiently segregated into separate beams.

4.1.4 Factors Controlling Lumen Density

A practical projection system, as in the present 1.2" LCD example, requires an illumination source providing a minimum of 1080 lumens at f/2.4 in each color over a 13.25 mm x 9.94 mm illumination aperture. This corresponds to a minimum effective lumen density in each color of 8.2 lumens/mm², achieved in the present invention with 0.55 mm emitting regions spaced 0.55 mm apart, so as to create 1.1 mm output pixels of best uniformity using the four-fold area expansion method explained in FIGS. 4-5. Constraints on this color-segregated panel geometry can be relieved by relaxing uniformity with less than a 4:1 area expansion.
The degree of area expansion depends continuously on the exact physical gap spacing, G1', as set by spacing layer 84 in FIG. 7. If we made the gap G1' such that each of the four virtual emitter images 26, 27, 108 and 109 overlapped slightly, the resulting area expansion would be proportionally less than the perfect factor of 4. With the degree of virtual image overlap, V, 217, being the same in both the orthogonal x and y directions, as in FIG. 12, the expression relating the degree of overlap to the resulting pixel area expansion, E, is given as in equation 8. When V = 0, equation 8 returns E = 4 as expected. Then, the emitter spacing, W', which for perfect fourfold expansion is equal to W, must equal W - V. The consequence of this approach is that the overlap region 221 contains twice the lumens, and the overlap region 223 four times the lumens, of the completely displaced regions 225 residing at the four corners of the illumination pixel. This non-uniformity, however, when it is required, can be homogenized by an optical mixing means provided during the global angle transformation process to be used within the associated projector system, as will be explained in the projection system examples that follow.

E = (2W - V)² / W²

Regardless, it takes a relatively dense two-dimensional packing of LED emitting apertures to enable such a powerful source of output light. For the above projector system example, the required density is 8.2 lumens/mm², which is 8.2 MLux.

4.1.4 Illustrative LED Back Plane

The light source system of FIG. 7 provides one example of densely packed LED emitting regions 24 under the present invention. In this case, the output lumen density is limited by the degree to which the area of the emitting apertures 24 exceeds that of the size of the compartmentalized LEDs 70 themselves.
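Equation 8 above is easy to exercise numerically. The sketch below is illustrative; the 0.55 mm emitter width is taken from the projector example in the text, while the 0.1 mm overlap is an assumed value chosen only to show the trend.

```python
def area_expansion(W, V):
    """Equation 8: E = (2W - V)^2 / W^2, for emitter width W and
    virtual-image overlap V (same units). V = 0 gives the perfect
    fourfold expansion."""
    return (2 * W - V) ** 2 / W ** 2

print(area_expansion(1.0, 0.0))             # 4.0 (perfect fourfold expansion)
# An assumed 0.1 mm overlap on the 0.55 mm emitters of the projector
# example pulls the expansion below 4:
print(round(area_expansion(0.55, 0.1), 2))  # 3.31
```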
Densest possible packing occurs when LED chips 70 themselves become the emitting regions 24 and when no confining structure 72 is used to isolate and homogenize their individual emission from one another. In some situations it may be preferable, nonetheless, to contain each LED chip within its own homogenizing cavity 72.

4.1.4.1 Compartmentalized Multi-Layer Package Structure for Flip-Chip Mounted LEDs

One example of a dense and continuous back plane package structure appropriate for LEDs having planar electrodes is given schematically in FIG. 13 within cross-section 212 and bottom view 243. The individual LEDs 70 are mounted flip-chip (electrodes below the light emission) on composite, electrically-insulating, reflective base layers 220 and 225 of thickness H, 222, that include appropriate conductive circuits 224 and conductive vias 226 between those circuits and the LED contacts 228. The illustrative conductive circuits 224 consist of two sets of inter-digital interconnect bars 227 and 233, similar to the approach shown by 171 and 169 in FIG. 10, each connected to their own common bus or cross bar (not shown). Arrow 227 points in the direction of one common bus bar and arrow 237, the other. All interconnect bars represented by 227 and 235 are generally the same in form and function and interconnect the same side of the LED junction. Interconnect bars like 233 interconnect the opposing side of the diode junctions. The inter-digital metal structure 224 can be formed by vapor deposition and photolithography followed by electroplating, or by a master electrode pattern applied to burn away the open region 245 using a batch machining process such as electro-discharge machining. The via structure 226 can be formed (or grown) as an array of mesas on top of this pattern. These conductive patterns are made sufficiently thick to handle not only the electrical power requirements of the LEDs, but also the associated heat dissipation load involved.
One way to build the composite of FIG. 13 is to form the circuit structure 224, and then cast onto it a suitable composite insulating layer 225 that is just thin enough, at thickness K, 244, that the vias 226 remain exposed. Then the reflective cavity structure layer 220, made separately by molding, embossing, casting or electroforming, would be laminated to insulating layer 225. The hollow cavities 248 within this super-layer also serve as convenient rough alignment guides for the LEDs to be placed within them. The square (or rectangular) cavities 248 are made only slightly larger in extent than the emitting area of the LEDs themselves. Sloped sidewalls 230 are used to increase the amount of LED light coupled to the cavity's output aperture of width W, 42. The sloped sidewall can be a diffusely reflecting material identical to the base layer material 220 or deposited as an external coating. The sloped sidewall 230 can also be made specularly reflecting. In either case, light emitted by the LED towards the side of the cavity is reflected generally upwards, towards the cavity aperture, 24. In any event, this sidewall is sloped at an angle, α, 232, to the vertical, which may in some cases be nearly 0 degrees, and covers a height M, 234, that is made approximately equal to the thickness of the LED chip 70, which is typically on the order of 0.005 inch. With the LED chip 70 being generally square and LL millimeters on an edge 236, the cavity aperture W, 42, is given by equation 9.

W = 2M Tan α + LL     (equation 9)

The cavity medium, 238, can be air, or filled with a high refractive index encapsulating dielectric such as clear silicone, an acrylic resin or a specially formulated transparent epoxy such as is commonly used for encapsulating LEDs, to mention a few.
In addition, this cavity medium 238 can be loaded lightly with a sub-micron scale scattering material, to facilitate volume diffusion of out-coupled LED light from the sidewalls and from the LED surfaces themselves, when the additional randomization so provided is preferred. When, for example, LL equals 0.5 mm, chip thickness, 0.005 inch, and sidewall angle, 45 degrees, the cavity aperture W becomes 0.754 mm and exceeds the chip dimension by 50%, which might be larger than desired in some of the most densely packed applications, such as in the present projector source example. The cavity aperture can be reduced, however, by sharpening the sidewall angle 232 and/or by using a thinner chip. If the sidewall angle were made 30 degrees, for example, the aperture size becomes 0.646 mm, which is only 30% larger than the chip itself. If made 10 degrees, the aperture size becomes 0.545 mm, which is only 10% larger than the chip itself.

In the base layer example of FIG. 13 above, the LED chips 70 are presumed to be, at least for this illustration, of the new semiconductor types that have optically transparent substrates 240. The emitting p-n junction 242, in this case, is mounted preferably facing the interconnect vias, 226. One reason for preferring a junction-down LED orientation (sometimes called flip chip) is to simplify electrical attachment to the base layer. Another reason for this orientation is to facilitate the removal of heat through the vias 226 and the electrical interconnects 224, which can be thickened, for example by electroplating in the vicinity of the LEDs, so as to serve as a convenient heat sink as well.

The base layer 220 is made of any material that can be molded, cast, or embossed, and that has or is given the appropriate mechanical, thermal, and optical properties.
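The sidewall-angle examples above follow directly from equation 9. A minimal numeric sketch, taking M as the 0.005-inch chip thickness quoted earlier and LL = 0.5 mm:

```python
import math

# Cavity aperture from equation 9: W = 2*M*tan(alpha) + LL, with sidewall
# height M equal to the chip thickness (0.005 inch = 0.127 mm) and chip
# edge LL = 0.5 mm, reproducing the aperture sizes quoted in the text.

M = 0.005 * 25.4   # sidewall height, mm
LL = 0.5           # LED chip edge, mm

def cavity_aperture(alpha_deg):
    """Cavity aperture width W (mm) for sidewall angle alpha (degrees)."""
    return 2.0 * M * math.tan(math.radians(alpha_deg)) + LL

for alpha, expected in [(45, 0.754), (30, 0.646), (10, 0.545)]:
    W = cavity_aperture(alpha)
    print(f"alpha = {alpha:2d} deg -> W = {W:.3f} mm")
    assert abs(W - expected) < 0.002
```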
Typically, this means a polymer that has been loaded with a ceramic, glass or metal powder, forming a composite that can withstand the heat dissipated at the LED junction, and into the deposited metal interconnects 224 and vias 226. Each LED chip 70, when operating at full output, may be running at 0.25-0.35 watt, and in some cases even higher. Not only must the base layer material be thermally stable, it must be counted on to serve as part of the collective heat sink. The cavity thickness, K, 244, is then chosen as a compromise between physical integrity and thermal transfer ability. One of the possible materials that could be used for this purpose is a new composite system developed by Corning, Inc., called CORTEM™. This new material is a blended alloy of a low melting temperature glass and a co-melted polymer that has excellent thermal and ignition resistance. A wide variety of appropriate glass-polymer and glass-ceramic-polymer composites can be used as well, including those formed from precursor organometallic liquid mixtures based on butoxides and isopropoxides, wherein molecular-scale mixing is used to improve mechanical consistency.

Cavity gap height J, 246, in FIG. 13, is used to provide some offset between the LED surface and the output diffuser 68. The reason such a gap, J, is introduced is to extend the cavity's interior scattering or reflecting surface without increasing the width of the cavity aperture 42, thereby providing some diffusive mixing of emitted light, when it is wanted. Diffusive mixing is useful not only as a means to soften the tendency for emission hotspots above the LED surface itself, but also as a means to provide some degree of color mixing when using tri-color LED clusters.

4.1.4.2 Compartmentalized Multi-Layer Package Structure for LEDs Mounted Electrodes-Up

Mounting the LEDs, junction up, is also possible, as for example in FIG.
14, using such a structured base layer, but in this instance a transparent mounting layer 258 must be used to support some interconnect circuit bars 256 and the soldered LEDs themselves. In addition, base layer 221 must have vias 260 that pass through the entire layer thickness H, 222, to reach and connect with these interconnection bars 256. The base layer 221 provides the surrounding reflective cavity as before, but now contains not only the LED interconnection vias 260 and the LED interconnection circuitry base 262, but, as necessary, thermal heat sinking vias 250 and heat sink tabs 264. This method can be combined with that of FIG. 13 for LEDs having one electrode on the top and the other on the bottom.

4.1.5 Fully Integrated LED Light Source Panels

FIG. 15 shows one possible set of fully integrated two-dimensional light source panels combining the high-density LED back plane of FIG. 13 with the multi-layer illuminator arrangement of FIG. 7. The same type of integration applies to the arrangements of FIG. 14 and FIG. 7, and also to the light sources of either FIG. 13 or FIG. 14 and the system of FIG. 8. The four illustrative cross-sections 248, 221, 223, and 225 of FIG. 15 all use elevated prism sheets 58 and 60, separated from the plane of emitting apertures by spacer layer 84 (or 217) of thickness G1' or G1", G1' and G1" being the appropriate physical thickness for the degree of virtual image separation required, the spacing medium being air (G1') or transparent dielectric (G1"). Boundary 57 between lower prism sheet 58 and dielectric spacer layer 217 is either a thin air gap formed by mechanically resting one discrete layer 58 on another 217, or a purposeful optical coupling wherein layers 58 and 217 are bonded together, for example by means of optical adhesive or epoxy.
In many cases, the existence of an air boundary between the lower prism sheet and the emitters beneath is preferable, in that it imposes a limit on the range of angles input to the lower prism sheet's substrate layer. While this limitation in turn may limit the percentage of power transferred from layer 217 to layer 58, it leads to a more narrowly confined output beam 219. Making all media boundaries between the LED chips 70 and lower prism sheet layer 58 optically coupled increases the amount of output light 219 emitted at higher angles, generally widening the beam's angular extent.

Preferable choices for cavity dimensions W, H and J in FIGS. 13-15, as well as for the reflective properties (diffuse or specular) given to its interior walls 230 and the optical properties given to its immersing media 238, depend on the size 236 of the LED and the specific system application within which the LED light source array is to be used. These choices, regardless of application, are governed by the core value of the optical quantity known as etendue. Core etendue is equivalent to the surface area of the LED's emitting junction (nominally LL²) times Sin²ψ, ψ being the maximum possible emission angle. All subsequent reflection and scattering events undertaken by the emitted light as it leaves the cavity, within transparent substrate 240, against cavity sidewalls 230, within immersing media 238 and from any aperture layer 68, work to increase etendue, primarily by an increase in effective emitting area. The most preferable combination of dimensions and materials are those that keep the etendue of the output aperture as close as possible to that of the core value, and that maximize the ratio of output lumens to those generated in the LED's junction. Yet, when the emitting cavities 228 are to be used in conjunction with the elevated prism sheets 58 and 60 as in FIG.
15, it is beneficial to couple these layers to the cavities in a way that releases the most light into the restricted-angle output beam 219.

4.1.5.1 Light Panels with Air-Filled Compartmentalized Spacers

The structure of FIG. 15 is shown in cross-sectional detail 248, and involves a compartmentalized spacer layer 84 of thickness G1' placed between the emitting array of FIG. 13 and the optical multi-layers 58 and 60 of FIG. 7, as described above. The spacer sidewalls 85 become part of the extended light source as they reflect, diffusely (or specularly), light impinging on them from both optical prism sheet 58 and from emitting apertures 24 (i.e. output layer 68 of emitting cavities 228). For most applications, it is preferable that at least some portion of sidewalls 85 and emitting plane layer 68 involve a scattering mechanism, as randomization of light ray angles and polarization state is required for best system performance, as will be shown in more detail by way of the application examples to follow.

Monolithic light source panel 248 replaces the discrete embodiment shown in FIG. 10. The printed circuit board 163 and the discretely attached LED packages 157 (of FIG. 10) it supports are replaced by a continuous back-plane layer 220 that includes LED-encapsulating media 238. Discrete LED-encapsulating media 175 and external reflective diffuser sheet 167/177 are replaced by the LED-encapsulating media 238 (which may optionally include scattering particles), the diffusely-scattering surface properties of packaging material 220 and the inclusion of an optional scattering layer 68. Transparent spacer layer 165 is replaced by the compartmentalized spacer layer 84 and by its air-filled compartments 83. In this case, spacer medium 83 is air, which affords the best value of prism sheet offset G1, the elevation associated with the amount of emitting aperture expansion required, which will be explained in more detail below.
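The core-etendue bookkeeping described above can be made concrete with a short numeric sketch. All values are illustrative assumptions (a 0.5 mm junction emitting into a hemisphere, and the 0.754 mm cavity aperture of the earlier sidewall example):

```python
import math

# Core etendue as defined in the text: junction area (LL^2) times
# sin^2(psi), psi being the maximum emission half-angle.

LL = 0.5                                   # junction edge, mm
psi = math.radians(90.0)                   # hemispherical emission
core_etendue = LL**2 * math.sin(psi)**2    # mm^2, normalized form

# If scattering expands the effective emitting area to the cavity
# aperture W, keeping this quantity conserved sets the narrowest
# achievable output half-angle:  W^2 * sin^2(psi_out) = core etendue.
W = 0.754                                  # cavity aperture, mm
psi_out = math.degrees(math.asin(math.sqrt(core_etendue / W**2)))
print(f"core etendue = {core_etendue:.3f} mm^2, "
      f"min output half-angle = {psi_out:.1f} deg")
```

This is why every increase in effective emitting area, at fixed etendue, buys a narrower output beam.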
G1' is nominally 0.625W for a fourfold aperture area expansion, as in FIG. 4, with 90-degree prisms in air (W being the edge size of emitting apertures 24).

4.1.5.2 Light Panels with Transparent Dielectric Spacers

In some cases of practical interest, there is benefit to making spacer medium 83 a transparent dielectric of refractive index n, a medium 217 in detail 221 of FIG. 15 that may or may not contain a very small amount of light scattering sites. If the medium 217 is a transparent dielectric of refractive index n, and is in contact (mechanical or optical) with the substrate layer of prism sheet 58, the correct spacer thickness becomes G1" = nG1', as measured from the base of the micro prisms used. The presence of dielectric medium 217 in the path of the light emitted by apertures 24 increases the effective optical path length of each ray emitted and reflected, by means of its effect on the refraction of light. Hence, for emitting apertures of width W, the spacer thickness G1" that enables the preferred four-fold emitting area expansion of FIG. 4 becomes approximately W, rather than the approximately 0.625W spacing required in air. The increased thickness may be useful when using smaller LED emitting apertures, so that more substantial spacer layers can be used, making it easier to hold tolerance on spacer thickness. If the emitting aperture is 0.65 mm x 0.65 mm, the spacer layer thickness in air is about 0.4 mm, but in acrylic is about 0.65 mm.

There are three possible forms of this variation, shown as cross-sectional details 221, 223 and 225 in FIG. 15. Variation 221 is marked by making spacer 84 optically transparent and without the confining reflecting sidewalls 85 of detail 248. In this form, prism sheet 58 lies on top of spacer layer 217 (or is coupled to it), and light from any one emitting aperture 102 is free to cross over into neighboring emitting apertures 102.
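A minimal sketch of the spacer-thickness rule, assuming an acrylic index of about 1.49 (not stated in the text) and the 0.65 mm aperture example:

```python
# Spacer thickness for fourfold aperture expansion: G1' = 0.625*W in air,
# and G1'' = n*G1' when the spacer is a dielectric of index n.

W = 0.65            # emitting aperture edge, mm
n_acrylic = 1.49    # assumed refractive index of acrylic

G1_air = 0.625 * W          # about 0.4 mm, as quoted in the text
G1_acrylic = n_acrylic * G1_air   # approaches W, per the nG1' rule
print(f"air spacer ~ {G1_air:.2f} mm, acrylic spacer ~ {G1_acrylic:.2f} mm")
```

The dielectric value lands near W (the text rounds it to about 0.65 mm), illustrating why thicker, easier-to-toleranced spacer layers become possible with small apertures.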
Optical randomization in angle and polarization state is provided by diffuse reflections at layer 68 (if any) and by making cavity layer 220 of, or coating it with, a diffusely scattering material. Additional randomization is added as needed by a scattering phase within the otherwise transparent spacer media 217 and 238. Variation 223 has a composite spacer layer made up of reflecting structure 84, as in detail 248, but adding the transparent dielectric 217 into each reflecting compartment. Again, prism sheet 58 rests on top of (or is coupled to) the transparent medium 217. Reflecting sidewalls 85 add angle and polarization randomization and, like detail 248, confine output emission to the individual output apertures 102. Multi-layer variation 225 removes confining cavities 228 from back plane layer 220, and includes the LEDs 70 in a common transparent dielectric encapsulant that forms layer 220. Intervening diffusely scattering layer 68 and diffusely reflecting base layer 225 provide the angle and polarization randomization needed, and may be either mechanically or optically coupled to layers 220 and 217 as desired. Preferably, base layer 225, encapsulating layer 220, diffusing layer 68 and spacing layer 217 would all be laminated together monolithically, with boundary 57 between layers 217 and 58 being air. Equally preferably, layers 225, 220 and 68 would be laminated monolithically, as would layers 217 and 58, leaving a small mechanical air gap between layers 68 and 217. There may be cases where the multi-layer variations 221, 223 and 225 as shown in FIG. 15 are useful because of their efficient index matching with dielectric cavity medium 238 and optional cavity output layer 68, which minimizes optical output losses due to total internal reflections at an air-dielectric boundary. Such reflections at boundaries between dissimilar dielectrics, and especially between air and dielectric, trap reflected angles within the higher refractive index medium.
Power losses from such dielectric-entrapped light limit the full output of emitted light from LED 70 and thereafter from integrating cavity 228. LED substrates have refractive indices between 2.0 and 3.0. Surrounded by air, these dielectrics trap a significant portion of the light emitted within the junction region of the LED itself. To limit this loss, LED manufacturers have routinely packaged commercial LEDs immersed in a transparent plastic encapsulant 175 ( FIG. 10 ) having as high a refractive index as possible, normally 1.5-1.7. No matter what else is done towards improving output emission efficiency, the boundary between any standard encapsulant and the LED substrate traps a substantial portion of the emitted light within the LED substrate. Only increases to the encapsulating material's refractive index can reduce the fraction of light trapped within the LED. Matching the refractive index of spacer medium 217 to the index of layer 68 and to the cavity medium 238 eliminates any further TIR loss at the associated dielectric boundaries.

Then, coupling prism layer 58 to spacer layer medium 217, and matching their respective refractive indices, transfers light trapping to the faceted prism surface itself, which intrinsically reflects and transmits light depending on the angle of incidence. Back-reflected light is randomized in both angle and polarization on diffusive scattering within medium 217, layer 68 and on the exposed surfaces of layer 220. Some of this light will return to prism sheets 58 and 60 within output apertures 102, having angles of incidence that are output within the characteristically concentrated output angular range of beam 219. The aperture expansion (and angle limiting) behavior expected collectively of the prisms in layers 58 and 60 does not depend on either of the prism sheets being bounded on both sides by air.
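The benefit of a higher-index encapsulant can be sketched with the standard escape-cone estimate. The values below are illustrative assumptions (a substrate index of 2.4, within the 2.0-3.0 range stated above), and Fresnel losses inside the escape cone are ignored:

```python
import math

# Escape-cone estimate of the light trapping discussed above: for isotropic
# emission inside a medium of index n_in, the fraction reaching one face
# within the critical angle theta_c = asin(n_out/n_in) is (1 - cos(theta_c))/2.

def escape_fraction(n_in, n_out):
    """Fraction of isotropic internal emission inside the escape cone of one face."""
    theta_c = math.asin(n_out / n_in)
    return (1.0 - math.cos(theta_c)) / 2.0

n_substrate = 2.4   # assumed LED substrate index (text: 2.0-3.0)
for n_out, label in [(1.0, "air"), (1.6, "encapsulant")]:
    f = escape_fraction(n_substrate, n_out)
    print(f"{label:12s} (n = {n_out}): {100 * f:.1f}% escapes through one face")
```

Even this crude estimate shows roughly a threefold improvement in per-face extraction when moving from air to a 1.6-index encapsulant, which is the motivation for the index-matching strategy described above.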
It is only preferable that the two oblique prism faces defining each prism element be immersed in a medium of refractive index that is substantially lower than the refractive index of the prism material itself. Since air has refractive index 1.0, it becomes an ideal bounding medium for plastic prisms that have refractive indices of about 1.5, maximizing the refractive index difference. And, since layer 60 is positioned above the prism faces of layer 58, it is simplest that layer 60 be bounded on both sides by air. Immersing the prisms of sheet 58 in an encapsulating medium simplifies its lamination to prism sheet 60, but requires compatible changes in prism material and geometry, as discussed earlier in relation to the structure of FIG. 10.

4.1.6 Overlapping Output Beams from Mono-Colored Light Source Panels

The multi-layer light source panel structures of FIG. 15 (248, 221, 223 and 225) become useful high lumen density monochromatic light sources for RGB lighting applications such as video projection when their mono-colored output beams are efficiently overlapped as a single composite beam no larger in spatial cross-section or angular divergence than characterized by any one of the input beams. Light beams of different color (wavelength) can be mixed without change in beam profile or divergence because etendue is conserved at each wavelength, just the way it is conserved with the orthogonal states of linear polarization. Imagine a white beam of light composed homogeneously of red, green and blue wavelengths. When the white beam is separated into three mono-colored beams, the separated beams retain both the angular divergence and beam size of the original white beam, and by reciprocity, vice versa. Neither the separation nor the mixing process changes etendue.
Etendue increases or decreases when two beams of the same color are combined or divided.

Preferable methods for mixing three mono-color light source panel beams involve the use of three or four coupled prisms, the adjacent coupled surfaces of which are coated with multi-layer dichroic films selected to reflect and transmit light of specific wavelengths. The best-known configuration for doing this is the four-prism X-cube arrangement shown schematically in FIG. 16, as detail 279. In this arrangement, two complementary dichroic coatings 278 and 280 are applied, for example, to the individual prism surfaces of two of the four 45-degree - 45-degree - 90-degree prisms involved, 193 and 197, as in detail 271 (for example, one coating 278 that reflects green while transmitting red and blue; the other coating 280 that reflects blue while transmitting red and green). Then, bonded together as in 279, illustrative red-colored light ray 249 enters the X-cube along a surface normal to face 199, and makes a 45-degree angle of incidence with coatings 278 and 280, either reflecting from these coatings or transmitting through them. Because of the red wavelength, this ray passes straight through both coatings 278 and 280. Practical dichroic coatings prefer fairly tight incidence angles around the optimized incidence angle (usually 0 degrees or 45 degrees). Departures from the optimum incidence angle cause unwanted polarization changes, wavelength shifts and reflections. Generally, the net efficiency of reflection and transmission decreases for angles of incidence further away from the optimized angle. For this reason, the standard X-cube is best suited to reasonably well-collimated beams and to dichroic coatings whose performance has been optimized for 45-degree incidence angles rather than normal incidence angles.

A more tolerant color mixing prism arrangement known as Philips Prisms is shown in detail 301 of FIG.
16 that achieves the same three-color mixing with beams having wider ranges of incidence angles about the optimized angle of incidence. This three-prism arrangement involves two prisms 273 and 285 that share common dichroic coating 278, and a third prism 281 that positions its coating 280 across air gap 277 from prism 285. In this approach, the prism geometries are arranged such that refracted light is incident at each dichroic coating substantially along or near the coating's surface normal. The reason for this is that coatings optimized for normal incidence allow a wider range of incidence angles before showing unwanted reflective and transmissive behaviors. Illustrative green input ray 251 makes a total internal reflection with face 265 and approaches coating 278 at near normal incidence. Reflected ray 253 then transmits through prism face 265 and passes through blue-reflecting dichroic coating 280 as part of output light mixture 303. The comparable blue input ray 255 reflects from blue-reflecting dichroic coating 280 and joins output light mixture 303. The comparable red input ray passes straight through all interfaces and also joins output mixture 303. Through this Philips prism arrangement, efficient power transfer performance has been achieved over at least a +/-13-degree angular range within the prism medium about an optimized angle near the coating's surface normal.

The importance of the arrangements shown in FIG. 16 is that they enable the mixing of discrete mono-colored beams into a spatially and angularly overlapping output beam. In ordinary use, these prisms are used in conjunction with an external source of white light, such as the output beam of a reflectorized halogen arc lamp. In these applications, not only is the light source separated physically from the prisms themselves, but the purpose of the prisms (and their dichroic coatings) is to separate the white light into three separate mono-colored output beams.
In many of the practical applications of the present invention to follow, these prisms will be combined with the mono-colored light source panels of FIG. 15 to output a composite beam representing their spatial and angular overlap. In these cases, the light source panels of FIG. 15 will be physically attached to the respective red, green and blue input faces (199, 259 and 261 for the X-cube; 263, 265, and 293 for the Philips prisms). The conjunction between these prisms and the associated light source panels, which is described in more detail later on, is unique in both its compactness and efficiency. Not only are the output apertures square or rectangular, but the output beams from the light source panels of FIG. 15 convey to the prisms no infrared that must be removed prior to entry.

The same approaches are also advantageous in combining beams of p and s polarization states, using reflective polarizer films in place of the dichroic films, as will be discussed further below.

5.0 Projection System Application Examples

One of the more useful applications of such three-beam color mixing using the preferable LED light source structure of FIG. 15 is provided by video image projection displays incorporating LCDs and DMDs. The exact methods of light source coupling depend on whether the projection system is using transmissive or reflective LCDs, or the reflective, beam-deflecting DMD, as will be explained by the examples that follow.

Using the mono-colored LED light source panels of FIG. 15 in place of the presently relied-upon halogen arc source illuminators in LCD and DMD projection systems is an advantageous change for numerous reasons. The compact illumination panels remove at least two-thirds of current projection system volume, eliminating the bulky reflector, imaging lenses, heat-reducing filters and cooling fans.
As such, traditional projectors commonly considered "ultra-portable" today, improved with solid-state lighting panels, may be made small enough to fit in one's hand, and thus "palm-top" in their designation. In addition to system compactness, light source life, currently measured in thousands of hours with halogen bulbs, increases almost 100x with the solid-state LED light source panels. And by using three electronically controlled mono-colored light panels, such improved projection systems offer painless color temperature control and field sequential color operation. In addition, the LED light source panels operate at low dc voltages, thereby eliminating the EMI and physical danger commonly associated with high voltage (and high pressure) halogen bulbs.

Integration of the light source panels of FIG. 15 into practical LCD and DMD projection systems is illustrated by the following thirteen examples, and these examples thereby extend the present invention.

5.1 Projection Systems with Liquid Crystal Devices (LCDs)

LCDs as used in video projection systems are spatial light modulators containing a flat two-dimensional rectangular array of 480,000 (SVGA) to 786,432 (XGA) separately controlled image pixels arranged typically in a 4:3 aspect ratio. Reduced to its basics, an LCD panel is a thin layer of liquid crystal material sandwiched between two thin sheets of glass. Electrodes and active microelectronic devices on the glass plates control the voltage applied across each image pixel. Pixel modulation is then achieved by changing the applied voltage so as to change the pixel's electrooptic nature, which in turn changes the pixel's effect on the linearly polarized light passing through it. For example, when such pixels are in their on state, linearly polarized light passing through is changed to its orthogonal polarization, and then passes through an output polarizer oriented to block the un-modulated linear polarization.
When such pixels are in their off state, linearly polarized light passing through remains unchanged, and is blocked by the output polarizer. Unlike their directly viewed counterparts used in laptop computer screens and desktop monitors, LCD pixels in video projectors contain no color filters. As such, image color is provided by the color of the light passing through the projector LCD. Full-color images are achieved in projectors by one of two approaches: three-color mixing or field sequential switching. Arranged for three-color mixing, projectors incorporate three LCD panels, one for each primary color (red, green, and blue), with the monochromatically modulated image beams mixed onto a single full-color output beam. Arranged for field sequential switching, full-color output images are created by means of a single LCD panel whose input light is made red, green and blue in rapid sequence, so that the output beam contains a series of spatially modulated red, green and blue images which, if flashed rapidly enough, appear full-colored.

The LCD panels themselves are made either transmissive, in that modulated (or un-modulated) input light passes through the panel's aperture as output light, or reflective, in that input light passing into and through the panel's aperture is reflected back by a mirror plane located at the LCD's back-plane.

Each of the thirteen examples that follow illustrates preferable system integrations of mono-colored light source panels within practical LCD projectors.

5.1.1 Example 1: Reflective LCD Projection System #1 Based Upon FIG. 17

As one illustration of the incorporation of mono-colored light source panels within an LCD image projection system, consider the cross-section shown in FIG. 17 for three reflective LCDs, one each for red light 268, green light 270 and blue light 272.
Extensive details regarding illustrative ray path, angle transformation, polarization clean-up, beam uniformity, field coverage, color sensitivities, efficiency, and geometrical relations are provided for this example, and apply by reference to the subsequent examples.

The basic systems integration approach involves locating three mono-colored angle transformer units 289 on three sides of a single color-mixing unit 274. Each angle transformer unit collects output light from the respective light source panel, increases angular concentration, directs the reduced-angle light to the respective reflective LCD, and provides an output beam of spatially-modulated output light. Central to this system is standard dichroic X-cube 274 as described in detail 279 of FIG. 16 (alternatively, Philips prism 301 could also be used). Projection lens 276 recombines the three resulting mono-colored image beams as a full-color projected image. The f/2.4 projection lens 276 images light reflected from each of the LCDs through the dichroic cube, as has become commonplace in some commercial image projectors. The dichroic X-cube is made with thin-film coatings on each of its two intersecting diagonal surfaces designed to reflect one primary color while transmitting the other two. In this manner, dichroic coating 278 transmits red and blue, while reflecting green. Dichroic coating 280 transmits red and green while reflecting blue. The +/-22.5-degree output light from each of the three mono-colored light source panels (red, 288; green, 284; and blue, 286), made illustratively in the form of FIG. 15, is collected and transformed to +/-12 degrees by the three individual folded relay cubes 289, each using a concave mirror 290, a reflective polarizer 292 and a wavelength-independent polarization conversion process enabled by wide band quarter wave phase retardation film 294.
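The 22.5-degree to 12-degree angle transformation has a direct etendue consequence: the beam cross-section must grow by the ratio of the squared angular sines. A minimal sketch, assuming an idealized lossless transformation:

```python
import math

# Etendue bookkeeping for the folded angle transformer: concentrating the
# +/-22.5-degree source output to +/-12 degrees at the LCD requires the
# beam cross-section to grow by sin^2(22.5 deg) / sin^2(12 deg).

theta_src = math.radians(22.5)   # source panel output half-angle
theta_lcd = math.radians(12.0)   # half-angle delivered to the LCD
area_ratio = math.sin(theta_src) ** 2 / math.sin(theta_lcd) ** 2
print(f"LCD aperture must be ~{area_ratio:.1f}x the source panel area")
```

The roughly 3.4x area growth is what the concave mirror 290 provides, trading beam size for the tighter angular range the dichroic coatings and projection lens prefer.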
This particular folded means of angle transformation, aside from its compactness, significantly improves output beam uniformity by an advantageous pseudo-Kohler averaging process that will be described separately below. The alternative to this means of angle transformation is to apply conventional imaging elements between each light source panel and its corresponding LCD so as to relay a focused image of the light source aperture onto the respective LCD aperture. While direct imaging achieves the angle transformation and field coverage needed in each meridian, any non-uniformity intrinsic to the light source aperture transfers directly to the LCD aperture (and later the projection screen), which may not be preferable. Although defocusing the imaging system softens non-uniformity, the degree to which a strong non-uniformity can be homogenized is rather limited. For example, defocusing may not blur the cross-pattern of FIG. 13 sufficiently to render it invisible.

5.1.1.1 Illustrative Ray Trace

The behavior of the compact projection system of FIG. 17 is explained in part by the passage of illustrative ray 296. One illustrative 22.5-degree extreme ray 296 starts at the red light source panel's center-point. This p-polarized ray passes straight through the cube's appropriately oriented reflective polarizer layer 292, and also through a wide band quarter wave retardation film 294. On doing so the ray becomes circularly polarized, and switches to the orthogonal circular polarization state by reflection at concave mirror 290, whose optical power transforms the angle from 22.5 degrees at the light source 288 to 12 degrees at LCD 268. The reflected ray 298, on passing back through the retardation film 294, is first converted to s-polarization, and on reaching the 45-degree diagonal of the reflective polarizer 292, is reflected towards LCD 268 on an axis perpendicular to the surface of the LCD.
When the liquid crystal layer in the LCD retards this ray by a quarter wave (which corresponds to maximum or full spatial modulation) it is once again circularly polarized. On reaching the LCD's metallic back-plane reflector, and reflecting from it, the state of circular polarization changes to its orthogonal state, and the ray passes back through the liquid crystal layer, becoming p-polarized in transit. The p-polarized image ray 300 is imaged by the system's projection lens, and transferred through the dichroic cube 274 to the projection screen (not shown).

Imaged light collected by the projection lens from any LCD in FIG. 17 forms the projected spatial image. Regions of the image marked by the absence of light or by lower than maximum brightness are created by effecting less than complete spatial modulation on the LCD pixels in the region. For example, complete modulation is characterized by π/2 (or ninety-degree) phase retardation, which effects the same quarter wave phase retardation produced by passive layer 294. Full quarter wave phase retardation in the liquid crystal achieves the maximum output light transmission, illustrated by ray 300 above. When the phase retardation is zero, no polarization change occurs: the incoming s-polarized ray 298 remains in s-polarization on its transit through the LCD, on reflection at the LCD's mirror, and on re-transit through the LCD. Without any polarization change whatsoever, all the incoming light reflects back through the system along the path it came in on, returning all the way back to the light source 288. Hence, in this case no light is collected by projection lens 276 for this region, or set of pixels, and the corresponding image region shows absence of the particular color. When the LCD is biased so that this same set of pixels represents an intermediate phase change (between 0 and π/2), a fraction of the available light is converted to collectible output and a fraction remains unconverted.
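The on/off/intermediate behavior described above can be summarized in an idealized sketch: for a double pass through a pixel of single-pass retardance gamma, the converted fraction follows sin²(gamma) (a textbook wave-plate model, not a statement about any specific LCD mode):

```python
import math

# Idealized modulation model for the reflective LCD described above: a pixel
# with single-pass retardance gamma (gamma = pi/2 is the full quarter-wave
# "on" state) converts a fraction sin^2(gamma) of the incoming s-polarized
# light to collectible p-polarized output after its double pass; the
# remainder returns toward the light source along its incoming path.

def converted_fraction(gamma):
    """Fraction of light converted to the collectible polarization."""
    return math.sin(gamma) ** 2

assert abs(converted_fraction(math.pi / 2) - 1.0) < 1e-9   # full modulation
assert converted_fraction(0.0) == 0.0                      # off state
print(f"half-modulated pixel: {converted_fraction(math.pi / 4):.2f} converted")
```

The unconverted fraction, 1 - sin²(gamma), is exactly the light available for the recycling gain discussed next.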
The fraction that remains unconverted also returns to the light source from which it came, along the identical optical path. Light that returns, unused, to the source from which it came, may be advantageously recycled with potentially important contributions to overall system gain. 5.1.1.2 Light Return and Dynamic Brightness Gain The return of off-state light back to the light source is an intrinsic feature of reflective LCDs used with 45-degree reflective polarizers. In conventional arc discharge illumination systems, return to the source is generally not considered a favorable circumstance, because of a potentially negative effect on lamp life. In the case of solid-state light source panels such as those of FIG. 15, however, light return is not a worrisome process, and generates an incremental output flux that adds constructively to the efficiency of the system. This important light return behavior is illustrated in more detail by way of the expanded cross-section of FIG. 18, which focuses, for example, on red angle transformation cube 289 of FIG. 17. Another reason that this unused light return is important is that it provides a means for a dynamic brightness gain in the lighted image areas, a gain that increases as the fraction of the overall image that is dim or dark increases. This dynamic image brightness, sometimes called dynamic peaking, or image punch (in CRTs), will be explained further by tracing the return mechanism in detail. No current LCD or DMD image projector is presently known to embody any dynamic brightness gain mechanism. The metallic circular-polarization converting back reflector 304, which was not visible in FIG. 17, is shown more clearly on LCD 268 in FIG. 18.
Illustrative p polarized ray 310 leaves center point 306 on light source 288 as before, but at an intermediate angle, γ, whose optical path through the system lands at point 312 on LCD 268 by virtue of sequential passage through reflective polarizer 280 as ray 310, reflection and polarization conversion by the actions of phase retardation layer 294 and mirror element 290 as ray 314, and reflection by reflective polarizer 280 as ray 316. Illuminating ray 316 is then s polarized. The degree to which the returning output ray 318 remains s polarized depends on the amount of phase change imparted to incoming ray 316 on passage through the liquid crystal layers, which can be electrically biased or not. When the field-induced birefringent phase change is maximum (i.e. quarter wave or π/2) all the s polarization is converted to p polarization, and all the light passes through reflective polarizer 280 as before. When the field-induced phase change is zero, none of the s polarization converts, and ray 318 is totally reflected at point 320 on the reflective polarizer 280 upwards towards the concave mirror 290 as ray 322. The s polarized ray 322 is then converted into p polarized return ray 324, which heads back to light source 288 along exactly the same optical path it came in on as ray 310. Constrained to return along its incoming illumination path, ray 324 flows back into the light source, one section of which has been magnified as cross-section 308, in the illustrative form of FIG. 16 (this time with optional output diffusing layer 28 omitted). Ray 324 first passes through prism sheet layer 60, and then in turn through orthogonal prism sheet layer 58 and light diffusing sheet 68 into the diffusively reflecting optical cavity 228. Once within the cavity, this initially returning cavity ray 326 may strike the LED substrate itself, reflecting, refracting and/or scattering.
As an example of one of the many statistical possibilities, ray 326 is shown as reflecting internally within the LED substrate, and refracting out as new ray 328, that scatters off reflecting cavity sidewall 85 in a multitude of possible directions, some of which may make additional multiple reflections before escaping, and some, like ray 330, that pass outwards through layers 58 and 60 as new light that can become part of the lighted image or be recycled once again. The basis for the dynamic image brightness gain is in part due to the pseudo-Kohler illumination system arrangement of FIG. 17 wherein all emitted parallel light source rays such as 332, 334 and 336, and like rays everywhere across the emitting aperture of 288, are brought to a single common image point 338 on the LCD. As such, when these rays return to their cavities of origin in light source 288, and become randomized in their eventual output angles by the scattering and reflection processes so described, they may in fact return to an entirely different set of spatial image points. Without such angular randomization being provided within the emitting cavity 228, the return rays would remain trapped between any dark reflecting image point such as 312 and the cavity itself, forever retracing the exact same input and output optical path, without means of becoming a part of the output. All first LED cavity emissions like ray 310 in FIG. 18 are routed deterministically to a specific spatial image plane point on the LCD that is preset by the emitted output angle γ. On return to the emitting cavity 228, the return ray is randomized. Spatially, the regenerated ray 330 must emit from some point in the aperture of the cavity 228 that launched it. Angularly, however, the new ray has no physical memory of its childhood angle, γ.
Hence, the regenerated rays 330 have new angles, γ', that must illuminate spatially different image points, potentially adding extra flux to these image points, the extra amount depending on the percentage returned unused to the source in this manner and the losses on transit. There will be cumulative transit losses suffered by recycled rays 324 that reduce the amount of dynamic brightness peaking that is possible. An extreme example is the case where one image pixel is full white, and all other image pixels full black. The smallest red, green and blue emitting pixel cavities 228 (102 in FIG. 16) are 1 mm x 1 mm, and the cavity's output angle varies as explained, +/-22.5 degrees, covering the entire LCD aperture. The LCDs 268, 270 and 272 are each taken as being XGA (1024 x 768) in image pixel resolution and 1.2" in full aperture diagonal, so that their 786,432 pixels are 23.7 microns square. If it is possible for 1600 p polarized image lumens to be projected by lens 276, there would therefore be about 2×10⁻³ lumens per image pixel. This means that about 1600 s polarized lumens make the two-way return trip from the LCD back to the source and then from the source, after cavity randomization, back again to the LCD, and potentially outwards as incremental energy, to the screen. With the efficiency of this off-state recycling ηoff, the total number of projected white lumens Lw, and the number of image pixels np, it turns out that the fractional boost in single-pixel power reduces to ηoff, and the fractional boost in the power of any set of pixels reduces to ηoff·foff (where foff is the image-pixel fraction in the net off state). This recycling efficiency of the system illustrated in FIG. 17 and FIG.
18 can be expressed by equation 10, with ηrpr the reflection efficiency of reflective polarizer 292, ηpc the polarization conversion efficiency of phase retardation and mirror elements 294 and 290, ηrpt the transmission efficiency of reflective polarizer 292, ηlcd the LCD passage efficiency, ηran the efficiency of the cavity randomization process, ηcom the transmission efficiency of the dichroic combiner cube 274, and ηpl the transmission efficiency of the projection lens 276. With most likely efficiency values, the fractional pixel boost ceiling then becomes about (0.95)⁷(0.75)(0.81)(0.9)², or 34.4%. ηoff = ηrpr²·ηpc²·ηrpt²·ηlcd·ηran·ηcom·ηpl (equation 10) 5.1.1.3 Blockage of Off-State Light Leakage Reflective polarizers 292 block off-state light reasonably well, but since off-state light is s-polarized, best results demand that there be essentially no leakage of s-polarized light within the output beam. Off-state light leakage reduces image contrast. One way of preventing output leakage, preferable as a means of improving image contrast, is the inclusion of a clean-up polarizer (absorptive or reflective) on the output face of each monochromatic angle-transforming unit 289. Alternatively, a single output polarizer may be disposed just prior to the projection lens 276. In either case, the clean-up polarizer is aligned so as to block passage to the output viewing-screen of the off-state polarization. And, yet another option is to include the clean-up polarizer within the multi-layer construction of reflective polarizer 292. If this were done within the context of the standard prior art polarizing beam splitter cube 289, as an example, it could be done by applying identical polarizing dielectric multi-layers to each of the opposing prism faces in cube 289, and cementing a similarly-aligned (s-absorbing) absorption polarizer between them. Although this approach increases transmission loss through the expanded layers 292, it thoroughly eliminates s-polarized leakage.
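The recycling-efficiency ceiling of equation 10 can be checked numerically. A minimal sketch reproducing the quoted product (the assignment of individual 0.95 values to particular η terms is an assumption; the text quotes only the overall product):

```python
# Equation 10 ceiling on off-state recycling efficiency, using the "most
# likely" values quoted in the text.  NOTE: only the overall product
# (0.95)^7 (0.75)(0.81)(0.9)^2 is quoted; the grouping below is assumed.
eta_off = 0.95**7 * 0.75 * 0.81 * 0.9**2
print(f"off-state recycling ceiling: {eta_off:.1%}")      # 34.4%

# The fractional brightness boost for the lit pixels is eta_off * f_off,
# where f_off is the fraction of image pixels in the off (dark) state.
for f_off in (0.25, 0.50, 0.99):
    print(f"f_off = {f_off:.2f} -> boost ~ {eta_off * f_off:.1%}")
```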
Of these choices, the preferable locations for the s-absorbing (or s-reflecting) polarizer are on the entrance or exit faces 77 and 79 of dichroic beam combiner 274, as they deal with only leakage and not genuine off-state return light, which is best returned to the light source panels for recycling. In conventional arc-lamp-based projection systems, output beam uniformity depends on the uniformity of the arc-lamp illumination system's output beam, which often is enhanced by secondary lens arrays or integrating bars, to provide sources of spatial mixing. In the LED-based image projection system of FIG. 17, output uniformity of light source panels 284, 286 and 288 is modified by passage of this light through angle transformer cubes 289. 5.1.1.4 Folded Telecentric Angle Transformer and Beam Uniformity A projector system's un-modulated output beam (white-field, dark-field, or field of constant color) must be seen as being spatially uniform and without noticeable brightness artifacts. For this to be possible, either the system's illumination source must be sufficiently uniform to be directly imaged, or provision must be made for improving beam uniformity prior to spatial modulation by any LCD. The light source panels 284, 286 and 288 in the system of FIG. 17 are of the illustrative form described in FIG. 15, and as such, may show visible internal boundary lines demarcating the 130 illuminating pixels and 520 sub-pixels of a 13 x 10 array example. Assuming XGA LCDs, the 786,432 image pixels imply about 1500 image pixels within each of the 520 demarcated illumination cells. On a 100-inch diagonal projection screen, each demarcated region would appear as a 2.3-inch square, and be easily seen as a window pattern across the screen if not pre-diffused. The system of FIG.
17 requires almost no pre-diffusion to obscure this pattern, as folded angle transformer 289 has been designed specifically to assure that any illumination aperture structure, including these demarcation frames, will not show up in the angle-transformed output beam. The brightness of every individual spatial point over the output beam's cross-section is arranged to be an average of the brightness of every spatial point across the entire light source panel (284, 286 and 288) aperture. This averaging process is accomplished by analogy with traditional Kohler illumination systems, by locating both the output aperture of light source panels 284, 286 and 288 and the input aperture of reflective LCDs 268, 270 and 272, at the respective focal lengths of illustrative polarization converting concave mirrors 290 (other combinations of lenses and mirrors can be used as well). Reflective polarizer 292 then behaves primarily as a folding mirror sensitive to the polarization state of the light incident upon it. As such, it reflects light from mirrors 290 through a 90-degree bend to the corresponding LCDs. In this manner, light rays arriving at any point 338 (FIG. 18A) on LCD 268, for example, represent the average power of rays leaving every aperture point on light source panel 288. The success of this approach presumes that the total lumens emitted from the light source panel's aperture as a function of angle remains nearly constant for small angles and then falls off to no less than half-power in a smooth and continuous manner over the angular range utilized. If this is so, beam uniformity will be smooth, and the roll-off from center to field edge will be no greater than 2:1. When a completely flat illumination field is needed for the highest image quality applications, it may be preferable to use an imaging system to relay a proportionally magnified image of the light source panel onto the LCD aperture.
For best results, however, this relay system is made telecentric, so the angle-transformed illumination is symmetrically disposed about projection lens axis 75 as it is in FIG. 17. One compact system that achieves this performance is a two-stage angle transformation system of the form illustrated in FIG. 18B. In this approach, a first neutral angle transformation stage 267 is used to form virtual focal plane source 251 that may be made the same size as real source 288, but with the smoothly falling center-to-edge spatial characteristics described above. Then, virtual source 251 is positioned as input to second angle transformation stage 289. 5.1.1.5 Other Angle Transformer Layouts Achieving Efficient Telecentric Field Coverage The coupled (two-stage) angle transformer illustrated in FIG. 18B flattens field uniformity by operating in a pseudo-imaging mode, returning light emitted from points on the light source panel to corresponding points on the LCD. The advantage of this particular construct, however, is that it provides a telecentric means for exactly covering the LCD's rectangular field. Provided the spatial non-uniformity on the light source panel aperture is not too severe, small distortions and defocusing in this layout provide an adequate degree of feature blurring that reduces the appearance of minor non-uniformity while maintaining the light source panel's intrinsically homogeneous field of brightness. The same point-to-point imaging results can be obtained with the simpler single-stage transformer of FIG. 17 when the spacings between elements are adjusted. Yet, the reason that this approach is not preferable for efficient projection systems is that the conditions for telecentric illumination are not met. The LCD field coverage with a single folded non-imaging angle transformer stage, as shown in FIG. 17, is governed by the light source panel's angular extent, βi, in each meridian, which for the invention of FIG. 15, is the same in each meridian.
Such angular symmetry means that without an efficient means of compensating for it, the illuminator's field coverage is intrinsically square and unmatched to the rectangular LCD aperture. Yet, with the two-stage transformer of FIG. 18B, it is the asymmetry of source shape that the first stage converts into a corresponding angular asymmetry, exactly the angular asymmetry needed for ideal second-stage field coverage. The details of this approach are illustrated by means of a numerical example. Reflective LCD 268, FIG. 18B, has a rectangular aperture, 24.384 mm in the x meridian, 18.288 mm in the y meridian. When light source panel 288 is 13.25 mm in the x meridian, the preferable first-stage angle transformer 267 (using glass prisms) is made with concave mirror 191 having a 25 mm (13.25/(2 Tan 14.8°)) focal length (F0) so that virtual source 251 has exactly the same width (13.25 mm) and inter-stage output angle, β1x (14.8 degrees in glass), that it started with (i.e. β0x = β1x). In this manner, the second stage transformer's concave mirror 190 is made with a focal length F1 = Ux/(2 Tan 14.8°), or 46.14 mm, so that not only is the LCD's field coverage exactly 24.384 mm in this meridian, but its field angle, ωx, is 8 degrees in glass (12 degrees in air) as desired. Under these conditions, equivalently efficient performance is mimicked in the y meridian, where the narrower width of light source panel 288, 9.94 mm, converts to the same virtual width, 9.94 mm, but with an inter-stage field angle, β1y = Tan⁻¹(uy/(2F0)), or 11.244 degrees in glass (rather than 14.8 degrees as in the x meridian). It is this automatic inter-stage angular compression in the y meridian that allows for correct field coverage in stage two. With β1y = 11.244 degrees and F1 = 46.14 mm, the y-meridian field edge is properly (F1) Tan(β1y), or 9.17 mm from center. The mechanism by which this behavior occurs is further illustrated in FIG. 18B by several key ray paths.
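The two-stage numerical example above can be reproduced with a short sketch (dimensions and in-glass angles as quoted; small differences from the quoted 11.244 degrees and 9.17 mm arise only from rounding F0 to 25 mm in the text):

```python
import math

# Two-stage angle transformer of FIG. 18B, using the quoted dimensions
# (mm) and in-glass angles (degrees).
ux, uy = 13.25, 9.94        # light source panel edges, x and y meridians
Ux, Uy = 24.384, 18.288     # LCD aperture edges
beta0 = 14.8                # source half-angle in glass (22.5 deg in air)

tan = lambda deg: math.tan(math.radians(deg))

F0 = ux / (2 * tan(beta0))                       # first-stage focal length, ~25 mm
F1 = Ux / (2 * tan(beta0))                       # second-stage focal length, ~46.14 mm
beta1y = math.degrees(math.atan(uy / (2 * F0)))  # inter-stage y half-angle, ~11.2 deg
y_edge = F1 * tan(beta1y)                        # y-meridian field edge from center

print(f"F0 = {F0:.2f} mm, F1 = {F1:.2f} mm, beta1y = {beta1y:.2f} deg")
print(f"y field edge = {y_edge:.3f} mm vs. Uy/2 = {Uy / 2:.3f} mm")
```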
Extreme source ray 105 leaves the center of light source panel 288 from point-a at an angle, β0, of 22.5 degrees in air, 14.8 degrees in the glass prisms of polarizing beam splitter cube 263. Ray 105 starts out in this example purely s-polarized, and reflective polarizer 257 is oriented for passage of s-polarized light. Hence, the ray passes sequentially through reflective polarizer 257 and quarter wave phase retardation layer 294 before reaching concave mirror 191 at point-b, whereupon it reflects with optical power back through phase retarder 294 and towards reflective polarizer 257, reaching 257 at point-c. As explained above, the s-polarized ray's round trip excursion through phase retarder 294, and its metallic reflection at point-b, combine to convert the ray's polarization state from s to p. Reaching point-c on reflective polarizer 257 with the orthogonal polarization, the ray reflects upwards towards point-d on virtual source plane 251, continuing upwards into second angle transformation stage 289 as p-polarized ray 107. Second stage reflective polarizer 292 is oriented to pass p-polarized rays like 107, which then continue towards second concave mirror 290, striking it at point-e. The reflected continuation of ray 107 has been converted to s-polarization and continues to point-f on reflective polarizer 292, whereupon it reflects towards point-g on the reflective back-plane of LCD 268. The same procedure is illustrated for the axial ray leaving source point-a, showing its passage through the two stages, also to point-g on LCD 268. And, the ray parallel to ray 105 leaving point-aa on the edge of the light panel's field progresses to points bb, cc, d, ee, ff, and finally to gg. It can be seen from the behavior of these illustrative rays that the two-stage system of FIG.
18B actually images points on source panel 288 into points on LCD 268, thereby gaining the beneficial field coverage efficiency of an imaging system, but defeating the beneficial brightness averaging process exhibited by a single non-imaging transformer stage acting alone. It is advantageous, then, to maintain the non-imaging nature of the single-stage angle transformer of FIG. 17, but with the ideal field coverage efficiency possible using an imaging system. 5.1.1.6 Single Stage Non-Imaging Angle Transformers Having Efficient Telecentric Field Coverage The single-stage non-imaging angle transformer of FIG. 17 homogenizes field uniformity by means of the focal plane averaging of source field brightness described above. Yet, because of the source panel's angular symmetry in the two meridians, the approach creates a square rather than rectangular illumination field. For best results, substantially all lumens output by the aperture of each light source panel land on the rectangular input apertures of their corresponding LCDs (268, 270, 272) under telecentric conditions and with the same field angles in each meridian. Specific arrangements are required, however, to meet such requirements. Geometrical relationships and the angular characteristics of the prism-sheet based (58, 60) light source panels of FIG. 15 constrain optimum LCD field coverage as well as the transformer's resulting output angle, ω. Aperture dimensions of the light source panels (ui) and the fixed focal length of concave mirror 290, FL, determine output angle, ω, by means of the geometric expression 2 Tan ω = ui/FL, ui being the appropriate light source edge dimension for each meridian (x, long; y, short). LCD field coverage then depends in turn on the fixed focal length, FL, of optical reflecting element 290, and the angular range, β, of the light source panels, by means of the analogous expression Ui = 2 FL Tan β, Ui being the appropriate LCD edge dimension (x, long; y, short). When the light source panels of FIG.
15 are constructed with identical prism sheet layers 58 and 60, they produce an isotropic beam having symmetrical angular range in each meridian (i.e. β = βx = βy). Because of this, when mirror system 290 is made a simple spherical element of fixed focal length (as shown for example in FIG. 17), field coverage on the LCD's 4:3 rectangular aspect ratio becomes a square-like pattern that overfills the rectangular LCD by 25%. Improving on this performance using the pseudo-Kohler non-imaging angle transformer of FIG. 17 requires special means for producing a different angular range in each field meridian. One such means for doing this is shown in FIG. 18B, but results in imaging rather than non-imaging system behavior. An alternative approach is shown schematically in FIGS. 18C and 18D that is based on the principles of astronomical Galilean telescopes. A pair of cylindrical lenses, separated by the difference in their focal lengths, is used to compress output angles in one meridian, and not in the other. The lenses may be negative and positive, or positive and positive. The behavior of one preferable negative and positive power lens pair 305 of focal lengths FN (negative lens 203) and FP (positive lens 205) is illustrated in detail 307 of FIG. 18C. Similar results can be achieved using a pair of positive lens elements provided the parameters are adjusted accordingly. The perspective of FIG. 18C represents the LCD's short side meridian whose input angle is to be reduced to β' (207) from its intrinsic isotropic value, β (201), nominally +/-22.5 degrees in air, as produced intrinsically by the light source panels of FIG. 15. Illustrative lens elements 203 and 205 are cylindrical so that their optical effect operates chiefly in the meridian shown, and not in the other.
Either or both lens elements can be aspherized, formed as cylindrical Fresnel surfaces, and/or implemented as a pair of separate lens elements to reduce aberrations and thickness. Light source panel 288 is disposed immediately below the first (negative power) lens 203, whose clear aperture is made sufficiently large to accept all emitted light. Magnified detail circle 309 illustrates the cross-section of light source panel 288 (as for example in FIG. 15, detail 221) and three illustrative output rays 311 which emit into the plane input surface of negative lens 203 at -22.5 degrees, 0 degrees and +22.5 degrees as shown. Passing through the negative lens 203, these rays are diverged into the air space between the two opposing lens elements 203 and 205. The ray paths indicated schematically in FIG. 18C are those of actual rays emitted by a 9.94 mm wide light source aperture emitting into a 1 mm thick plano-concave negative lens 203 having a 41.6 mm spherical radius (FN = 83.2 mm), and a 4 mm thick convex-plano positive lens 205 having a 51.85 mm spherical radius (FP = 103.7 mm). The vertex-to-vertex lens separation distance 315 is made to be 20.5 mm so that d = FP − FN. When this is done, and the positive lens 205 collects all diverging ray bundles 317 output by negative lens 203, the ray directions are converted such that output ray bundles 319 from positive lens 205 are once again substantially parallel, but at angle, β' (207), from system axis 210, given by the geometrical expression Tan β' = (FN/FP) Tan β. Moreover, parallel output ray bundles 319 appear to come from virtual source 312, whose meridional width uy' preserves system etendue, uy Sin β = uy' Sin β'.
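The Galilean-pair relations just given, evaluated with the quoted lens values, can be sketched as follows (FN and FP follow from the quoted radii assuming a refractive index near 1.5):

```python
import math

# Cylindrical Galilean pair of FIG. 18C: negative lens FN plus positive
# lens FP, separated by d = FP - FN, compress one meridian's half-angle
# via Tan(beta') = (FN/FP) Tan(beta); etendue fixes the virtual width.
FN, FP = 83.2, 103.7        # focal lengths (mm), from the quoted radii
uy, beta = 9.94, 22.5       # source width (mm) and half-angle in air (deg)

d = FP - FN                 # vertex-to-vertex separation, 20.5 mm
beta_p = math.degrees(math.atan((FN / FP) * math.tan(math.radians(beta))))
uy_p = uy * math.sin(math.radians(beta)) / math.sin(math.radians(beta_p))

print(f"separation d      = {d:.1f} mm")
print(f"compressed angle  = {beta_p:.1f} deg")   # ~18 deg, as quoted
print(f"virtual width uy' = {uy_p:.1f} mm")      # ~12 mm
```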
In this example, the converted output angle, β', is thereby compressed from 22.5 degrees in air to about 18 degrees in air (14.8 degrees in glass to about 12 degrees in glass). In an optimized system, positive lens 205 could be divided into a closely spaced pair of thin plano-convex positive lenses (preferably with convex surfaces facing each other) and with each convex surface aspherized to minimize aberrations that would otherwise degrade performance towards the edges of the field. Positive lens 205 could also be made with a cylindrical Fresnel surface, likewise aspherized to minimize aberrations. Negative lens 203 is preferably aspherized to minimize aberrations. The effect of adding output angle transformer element 325 is shown in detail 327 of FIG. 18C, for the short side meridian of the illustrative red channel 18.288 mm x 24.384 mm LCD 268. The output angle, ωy', in the short side meridian shown, is from geometry Tan⁻¹(uy'/(2FC)), with uy' the virtual source width 323 (uy Sin βy = uy' Sin βy') and FC the transformer's focal length. The transformer's focal length, FC, is set in the long side meridian (not shown in FIG. 18C) as FC = ux/(2 Tan ωx), with ωx the projection system's required output angle at LCD 268 and ux the actual source size in the x or long side meridian. [Note: The convention adopted in FIG. 18C and in the related geometrical expressions is that a prime indicates a value that has been converted by the action of the negative and positive lens pair 203 and 205.] Since ux is 13.25 mm and ωx is constrained to be 8 degrees in glass (12 degrees in air), FC is, in this example, 47.1 mm. Then, using the same focal length FC = 47.1 mm, ωy' becomes, in the short meridian, Tan⁻¹(uy'/(2FC)), or 7.4 degrees, with uy' = uy Sin βy/Sin βy' = 12.2 mm. The importance of the approach represented in FIG.
18C, despite having an angularly symmetrical emitter and a field coverage determined by the emitter's angular extent, is that with the addition of two cylindrical optical elements it is able to provide controllably asymmetric field coverage, both spatially and angularly. This means not only will the LCD's spatial aperture be fully illuminated with minimal waste, but so will its angular aperture. Element 325 may be spherical with common focal length F in each meridian, or the element may be made toric, with a focal length adjusted for best operation in each meridian. Since the physical height, H, of element 325 above physical source 288 must be the same in each meridian, the optimized design is a trade-off manifested by meridional differences in illumination sharpness. Since sub-system 327 is not meant to function as a crisp imaging system, the sharpness of illumination achieved is not a significant factor in the total power transferred to the LCDs. Moreover, some degree of LCD field overfill is required in the final design to allow a reasonable tolerance in the positioning of all optical elements with respect to each other. If element 325 is made toric, it may be combined physically with positive cylinder lens 205, which results in a considerably more compact optical system. Yet, even more substantial compaction is possible using the folded form of FIGS. 17 and 18A, in which the optical power is conveyed as concave mirror 327, FIG. 18D. In this variation, reflecting element 327 is toric with unique focal lengths, FC and FC', in respective long and short meridians. Focal length FC' is a composite focal length combining the angle transformer focal length FC for the short side meridian and the focal length of positive cylindrical lens element 205 using the traditional expression FC' = (FC)(FP)/(FC+FP). Another way of creating the degree of asymmetric field coverage needed in a rectangular illumination system based on the light source panels of FIG.
15 is to introduce angular asymmetry within the light source panel itself. This is possible by using prism sheets 58 and 60 having deliberately different apex angles. While the relationship between apex angle and output angle is complex for a prism sheet pair, it is the prism apex angle that determines the bi-layer's angular output range. There is a specifically larger apex angle, αx, and a specifically smaller apex angle, αy, whose combination in bi-layer 58 and 60 of FIG. 15 produces output angles βx and βy. The governing ratio, by geometry, is Tan βx/Tan βy = Ux/Uy. So, for the 4:3 rectangular aspect ratio 1.2" diagonal LCD, Tan βx/Tan βy is 1.333. If βx remains +/-22.5 degrees in air, it is therefore preferable for optimum field coverage that βy is correspondingly narrower, at about +/-17.3 degrees in air. Yet, the specific angular limits are less important to optimum field coverage than the asymmetry dictated by their ratio. 5.1.1.7 Color Segregation Minimizes Wavelength Sensitivities One advantage of the color-segregated layout of the projection system of FIG. 17 is that each folded angle transformer cube 289 operates within its own narrow band of wavelengths, and thereby relaxes constraints on the performance bandwidth of retardation films 294 and reflective polarizers 292, which for full-color use would have to exhibit substantially constant performance over the whole visible spectrum. The phase retardation contributed by films 294 is typically a function of transmitted wavelength. Multi-layer broadband designs are employed by manufacturers such as Nitto Denko to minimize retardation changes that occur across the visible spectrum. Reflective polarizers 292 also show reflectivity differences as a function of incident color.
Such effects, however, are isolated in the present system to each mono-colored channel and therefore have no net consequence on overall system performance, as they are automatically compensated by color-balancing adjustments of the power applied to each light source panel. Accordingly, each separate red, green and blue illumination cube in the system of FIG. 17 performs in an approximately wavelength-independent manner. The adjacent dichroic combining cube 274 superimposes the three monochrome image beams so that a single full-color image results on the projection screen (not shown). Each of the angle transforming relay cubes 289 is physically identical, except for the monochromatic color of the LEDs used within each light source. Proper electrical power is applied to the array of LEDs in each source 1 so that the desired mix of white-field color is achieved (e.g. specific color temperature and CIE color coordinates) for every full-color composite image frame of superimposed red, green and blue image frames. 5.1.1.8 Output Lumens and System Efficiency Total lumens output by the illustrative projection system of FIG. 17 depends on the product of transfer efficiencies encountered by light rays as they pass through the various sub-systems in each of the system's three parallel mono-colored channels. In the present example, each light source panel is 13.25 mm by 9.94 mm and contains a total of 72 LED array units in a 12 by 9 array. Assuming a minimum of 10 unpolarized lumens per unit, the total number of unpolarized lumens per light source panel is 720. Assuming a polarization recycling gain of 1.5, there are 825 polarized lumens per panel.
Then, if the corresponding transfer efficiencies for angle transformation, dichroic re-combination, reflective LCD transit and passage through the projection lens are 0.75, 0.81, 0.9 and 0.9 respectively, the total lumens provided in each color, assuming equal mixing, is about 400, making total white-field screen lumens about 1,200 as planned. Since each LED used outputs 20 lumens, total RGB input lumens are 4320, making total efficiency just less than 30%. This compares with a total efficiency of 20% for conventional arc lamp systems, a 50% improvement. While 1200 lumens is substantially the same on-screen performance as is achieved in systems using a reflectorized short arc discharge lamp such as the 100 W unit manufactured by Philips, it is achieved in this case with the compact solid-state panel lamps of, for example, FIG. 16, which don't require optical infra-red filtration, the expense of a surrounding reflector, or a forced-air convection fan for cooling, but which do allow for practically instantaneous electronic color temperature adjustments, and 10-20 times the standard service life. In addition, the profile of beams from the rectangular multi-layer illuminating pixel array is more conducive to image display applications than the corresponding profile of the beams collected from short arc discharge lamps, which are fundamentally much more intense in the center than at the edges and circular in cross-section. The percentage of beam light falling outside a 4:3 aspect ratio rectangle inscribed in a circular beam cross-section is 38.9% by geometry. External homogenizing devices such as rectangular cross-section integrating bars, diffusers and lens arrays are used to even out the circular beam profile at extra cost, space and inefficiency. By comparison, the flux density across the output beam of the solid-state panel lamps described above, edge to edge, is nominally constant.
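The lumen budget of section 5.1.1.8 can be tallied in a few lines (figures as quoted in the text; the polarized-lumen starting point of 825 per panel is taken as given):

```python
# Lumen budget of section 5.1.1.8, using the quoted per-panel figures
# and transfer efficiencies.
polarized_lm_per_panel = 825                      # quoted polarized lumens/panel
eta_angle, eta_comb, eta_lcd, eta_lens = 0.75, 0.81, 0.90, 0.90

per_color = polarized_lm_per_panel * eta_angle * eta_comb * eta_lcd * eta_lens
white_field = 3 * per_color                       # equal R, G, B mixing

input_lm = 3 * 72 * 20                            # 72 LEDs/panel at 20 lm each
efficiency = white_field / input_lm

print(f"per color   : {per_color:.0f} lm")        # ~400 lm, as quoted
print(f"white field : {white_field:.0f} lm")      # ~1200 lm
print(f"input       : {input_lm} lm")             # 4320 lm
print(f"efficiency  : {efficiency:.1%}")          # just under 30%
```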
5.1.1.9 Cube Geometry

As has become well known in optical system layout, each beam-splitting cube 289 (whether air or glass) and dichroic combiner cube 274, as shown in FIG. 17, needs to be sized properly to handle the angular divergence and optical path lengths involved. For convenience, the operative equations for horizontal size, X, and vertical size, Y, in each meridian are given in equations 11 and 12 in terms of the semi-angle, β, of light source 1, the transformed output angle, ω, of the relay, the edge dimension of the light source, ui, and the corresponding edge dimension of the SLM, Ui (i representing either the horizontal x meridian or the vertical y meridian). The simultaneous solution for X is given in equation 13. These equations apply to the combiner cube when β = ω and ui = Ui = K, K being the larger of X and Y for the relay cube. When the combiner (or relay) media is dielectric, the defining angles β and ω used must be those in the applicable refractive index, where for example the angle in the media is Sin⁻¹(Sin β/n), n being the refractive index of the media. When β is 22.5 degrees as above, the angle in refractive index 1.49 is actually 14.8 degrees. Similarly, when β is 12 degrees, the angle in refractive index 1.49 is actually 8 degrees.

Y = ui + 2X Tan β (11)
X = Ui + 2Y Tan ω (12)
X = [Ui/(2 Tan ω) + ui] / [1/(2 Tan ω) - 2 Tan β] (13)

For the illustrative 13.25 mm by 9.94 mm light source aperture, 24.384 mm by 18.288 mm LCD aperture, +/-22.5-degree light source cone angle, and +/-12-degree relay angle, the relay cubes needed, if air filled, are approximately 2 inches on a side, and closer to 1 inch on a side if predominately dielectric (i.e. if reflective polarizer 292 is an industry standard polarizing beam splitter cube such as manufactured by Meadowlark Optics). The smallest possible combiner cube needed with the +/-12-degree output from the illustrative 2-inch relay that is predominately air is, from equation 13, 58 mm on an edge, or about 2.3 inches.
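Equations 11-13 can be applied numerically. A minimal sketch, using the illustrative apertures and angles above (the function name `cube_size` is ours, not the text's):

```python
import math

def cube_size(u, beta_deg, U, omega_deg):
    """Equations 11-13: Y = u + 2*X*Tan(beta); X = U + 2*Y*Tan(omega)."""
    tb = math.tan(math.radians(beta_deg))
    tw = math.tan(math.radians(omega_deg))
    X = (U / (2 * tw) + u) / (1 / (2 * tw) - 2 * tb)   # equation 13
    Y = u + 2 * X * tb                                  # equation 11
    return X, Y

# Air-filled relay cube: 13.25 mm source at +/-22.5 deg, 24.384 mm LCD at +/-12 deg
X_air, Y_air = cube_size(13.25, 22.5, 24.384, 12.0)   # ~46 x ~52 mm, about 2"

# Predominantly dielectric cube (n = 1.49): in-media angles ~14.8 and ~8 deg
X_diel, Y_diel = cube_size(13.25, 14.8, 24.384, 8.0)  # ~33 x ~31 mm, near 1"
```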
If a standard polarizing beam splitter cube made of glass or plastic is used in the system of FIG. 17, this cube has X = Y = 33.1 mm and a depth of 24.8 mm. The corresponding combiner cube 274 is, in the plane of FIG. 17, a cube 45.9 mm on a side and 34.5 mm deep. Reflective polarizer 292 can either be a multi-layer beam splitting plate that contains a reflective polarizing film immersed in air, such as DBEF™ manufactured by Minnesota Mining & Manufacturing Co., or a conventional transparent dielectric prism cube with inorganic reflective polarizing layers pre-deposited on the prism cube's internal diagonal, such as manufactured by Meadowlark Optics for broad band uses. The plate type reflective polarizer has a very thin cover layer on the side facing the projection lens so as to minimize astigmatism. Attachment of the polarizing layer to the thicker substrate layer is such that sufficient optical flatness is preserved to minimize contributions to output side field curvature. Two identically aligned reflective polarizer layers and one absorption polarizer layer can be used to improve rejection of the unwanted polarization state. Doing so in the system of FIG. 17 is not preferred, however, as it decreases transmission from a best possible 0.95 to about 0.81, and offers no real advantage over a clean-up polarizer located on the relay cube's output face as described above. The prism cube type reflective polarizer has higher transmission and reflection efficiency (0.95 and 0.98 respectively), but a standard acceptance angle of only +/-2 degrees. When its design is optimized, the acceptance angle increases to about +/-6 degrees. Used in a system such as that of FIG. 17 with a system output angle of +/-12 degrees (+/-8 degrees in glass or plastic), there will be some reduction in reflection efficiency for ray angles beyond 6 degrees, which is not expected to present a problem.
Transmission efficiency through the cube is less affected by the beam's angular range. Both the relay cube and the dichroic combiner cube can be configured in the traditional Philips prism arrangement of FIG. 16. The advantage of doing so is that the prism arrangements 301 allow the mono-colored axial rays from each light source panel input to be closer to normal incidence inside the prism medium (glass or acrylic), where all dielectric stacks (whether dichroic or polarization selective reflection) show preferable performance.

5.1.2 Example 2: Reflective LCD Projection System #2 Based On FIG. 19

A compact variation on the projection system example of FIGS. 17-18 is shown schematically in FIG. 19 that reduces the system footprint significantly. In addition, the number of reflective polarizers 292 and the number of concave mirrors 290 are each reduced from three to one as well. This improvement requires using two dichroic combiner cubes 274 (or Philips prism equivalents), both cubes with identical dichroic layers 278 and 280 as before, one for the reflective LCDs 268, 270, and 272, and another for the monochromatic light sources 284, 286 and 288. In this system, light from the three monochromatic sources is mixed in first combiner cube 338. This light then enters the reflective angle transformer cube 346, which in the current form has twice the focal length, FL, of the configuration used in FIG. 17. Accordingly, this requirement has to be satisfied while applying equations 11-13 to calculate the relative sizes of source cube 340, transformer cube 346 and combiner cube 342. The design approach for doing this is to use equations 11-13 to calculate the minimum size of the source cube 340, Xs, and the modulator cube, Xm, which will in turn lead to the minimum size for the relay cube, Xr. Seeking to satisfy the preferred requirement of any lens system that its front and back focal lengths be equal, Xs + Xr is constrained to equal Xr + Xd.
Since all three optical structures 340, 344 and 346 are preferentially cubes, this forces Xs = Xd. As the input and output angles, β and ω, of angle transformers similar to that illustrated by 346 are generally not equal, the cube sizes as calculated using equations 11-13 will not be the same. In this instance, the larger cube is taken as the constant for the system. As an example, the long dimension of each light source is, as above, 13.5 mm, and the long dimension of each corresponding LCD is 24.384 mm. The input and output angles in air are +/-22.5 degrees and +/-12 degrees respectively. In the media of each cube, these angles become +/-14.8 degrees and +/-8 degrees respectively. For these illustrative values, Xs = 28.6 mm and Xm = 33.86 mm. Physically, each cube 340 and 344 is made 33.9 mm on an edge in the plane of FIG. 19. Then, in using equations 11-13 properly, the values of u and β are taken as 28.6 mm and 14.8 degrees, while the values of U and ω are taken as 33.86 mm and 8 degrees. This allows the minimum size of relay cube 346 to be calculated as Xr = 49 mm and Yr = 54.6 mm, the latter of which is taken as the cube edge in the plane of FIG. 19. The corresponding physical focal length for illustrative concave mirror 290 becomes about 88.5 mm if in air, or (88.5)(n) in the dielectric media, n being the refractive index of the optical path. This means that for the illustrative 1.2-inch diagonal LCD apertures and the 13.5 mm x 9.94 mm light source apertures, the entire image projection engine, less the 2.0-2.5 inch diameter projection lens 276, can fit inside about a 3.5" x 3.5" box, roughly the footprint of the 3.5-inch floppy diskette used to store computer data. Box thickness would be less than 1.6 inches. The scale used in FIG. 19 is about 10% larger than actual size.

5.1.2.1 Power Efficiency

The power efficiencies of the systems illustrated in FIG. 17 and FIG. 19 are about the same.
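The cube sizes quoted for FIG. 19 follow directly from equations 11-13. A sketch reproducing them, assuming the in-media angles stated above (the function name `cube_size` is ours):

```python
import math

def cube_size(u, beta_deg, U, omega_deg):
    # Equations 11-13 of the text: Y = u + 2*X*Tan(beta); X = U + 2*Y*Tan(omega)
    tb = math.tan(math.radians(beta_deg))
    tw = math.tan(math.radians(omega_deg))
    X = (U / (2 * tw) + u) / (1 / (2 * tw) - 2 * tb)
    return X, u + 2 * X * tb

# In-media angles for n = 1.49: 22.5 deg -> ~14.8 deg, 12 deg -> ~8 deg.
# Source and modulator cubes are combiner-type cubes (beta = omega, u = U).
Xs, _ = cube_size(13.5, 14.8, 13.5, 14.8)        # source cube: ~28.6 mm
Xm, _ = cube_size(24.384, 8.0, 24.384, 8.0)      # modulator cube: ~33.9 mm

# Relay cube 346: u, beta from the source cube; U, omega from the modulator
Xr, Yr = cube_size(28.6, 14.8, 33.86, 8.0)       # ~49 mm x ~54.6 mm
```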
The number of lumens projected on the screen depends on the number of lumens emitted by the collective light sources multiplied by the sequential inefficiencies suffered along the optical path length to the screen, on transmission, reflection and refraction through the system. As one example, each light source 284, 286 and 288 consists of nominally 1 mm by 1 mm illuminating pixels, each in the general form of detail 308 in FIG. 18a. With each illuminating pixel using one 0.5 mm x 0.5 mm transparent substrate LED, such as those manufactured by LumiLeds Lighting, each pixel yields about 15 monochromatic p polarized lumens in a uniform f/1.3 beam of rectangular cross-section. This output assumes a degree (up to 1.5x) of polarization recycling arranged within each illuminating pixel by means of a reflective polarizer, as has been described earlier. In this case, the composite power emitted from the indicated 13.25 mm x 9.94 mm two-dimensional array of such pixels (nominally 13 pixels by 10 pixels) is (13)(10)(15) or 1,950 lumens from each monochromatic red, green and blue source. This means that the total white field beam power so created by sources 284, 286 and 288 is 5,850 p polarized lumens, assuming equal color mixing. Then, using the system of FIG. 18a as an example, the associated on state optical path efficiency is given in equation 14, with ηcom the dichroic transmission efficiency, ηrpr the reflection efficiency of reflective polarizer 292, ηpc the polarization conversion efficiency of phase retardation and mirror elements 294 and 290, ηrpt the transmission efficiency of reflective polarizer 292, ηlcd the LCD passage efficiency, and ηlens the transmission efficiency of the projection lens 276. Using expected values for these inefficiencies, the on state efficiency of the system of FIG. 18a is about as high as 0.36, primarily limited by a total of three light passes through the two dichroic combiner cubes, each pass having about 0.81 transmission efficiency.
ηon = ηcom³ ηpc ηrpt ηlcd ηlens ηrpr (14)

Accordingly, the total white-field lumens projected to a screen by the system of FIG. 18a is at best (5850)(0.36) or 2,106. Typical commercial projectors deliver 1,200 white-field lumens. If the polarization recycling applied to each light source yielded only a 20% gain rather than the 50% that has become typical, if aggregate light scattering effects reduced usable output by 10%, and if uncompensated Fresnel losses at uncoated air-dielectric surfaces reduced output by (0.94)⁴ or 0.78, the power on the screen would still be about 1,200 lumens. If every effort were made to achieve the highest on state efficiency possible, 0.36, a substantial relaxation in illuminating pixel size (102 in FIG. 16) could be effected. The current example assumes a total of 390 illuminating pixels, each having a 1 mm x 1 mm output aperture and 15-lumen output. If these 390 pixels produce 2,106 white-field lumens, only 222 such pixels would be needed to yield 1,200 lumens. Using fewer LEDs leads to a proportional reduction in total unit cost. Each monochromatic source could contain (222)/3 or 74 square pixels, arranged approximately in a 10 x 7 array. Keeping light source aperture size roughly constant at 13.5 mm x 9.94 mm, the individual pixel apertures can be increased 35% to about 1.35 mm x 1.35 mm, which thereby increases the dead spaces between LEDs in the light source array, if such an increase were deemed desirable. Other reflective elements with optical power can be used in place of the illustrative concave mirror 290 that has been featured as an example in the structures of FIGS. 17-19. It is equally feasible for the right amount of optical power to be designed into a plano-convex or biconvex refractive lens whose back surface (plane or curved) has been coated with a highly reflective metal film. Such alternative units substitute directly for the illustrative element 290.
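The 0.36 efficiency and the derating chain above can be reproduced numerically. The sketch uses the efficiency values the text states (0.81 per combiner-cube pass with three passes, 0.95 and 0.98 for the reflective polarizer) and assumed values of 0.9 for the remaining terms, which the text does not give individually:

```python
# Equation 14 evaluated with stated values (0.81, 0.95, 0.98) and
# assumed 0.9 values for eta_pc, eta_lcd and eta_lens (our assumption).
eta_com, eta_rpr, eta_pc = 0.81, 0.98, 0.90
eta_rpt, eta_lcd, eta_lens = 0.95, 0.90, 0.90

eta_on = eta_com**3 * eta_pc * eta_rpt * eta_lcd * eta_lens * eta_rpr

screen_lumens = 5850 * eta_on            # about 2,100 white-field lumens

# Derated case: 20% recycling gain instead of 50%, 10% scattering loss,
# and Fresnel losses of 0.94**4 at four uncoated air-dielectric surfaces
derated = screen_lumens * (1.2 / 1.5) * 0.90 * 0.94**4   # about 1,200

# Pixel-count relaxation: lumens scale linearly with pixel count
pixels_needed = 390 * 1200 / screen_lumens               # about 222
```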
When using such a refractive element, however, the resulting optical power is adjusted to allow for the light's double pass through what is functionally a lens-mirror system.

5.1.3 Example 3: Reflective LCD Projection System #3 Based On FIG. 20

Another related embodiment is illustrated in FIG. 20, with a refractive Kohler illumination approach replacing the reflector-based ones illustrated in FIGS. 17-19. In this variation, an illustrative aspheric bi-convex lens 356 substitutes for the angle transforming concave mirrors 290 in FIG. 17 and 291 in FIG. 19, as one of several equally preferential refractive elements. The lens may have spherical, conic or aspheric surfaces, or any combination of such surfaces. The lens 356 may also be a Fresnel lens or a pair of Fresnel lens elements. One advantage of a refractive configuration, despite the extra footprint and volume it occupies, is that the need for the wide band quarter wave phase retardation films 294, used in FIG. 17 and FIG. 19 as part of the polarization changing mechanism, is eliminated. Polarization changing is provided in this case by the phase retardation and metallic reflection occurring within the reflective LCDs themselves. In the structure of FIG. 20, each reflective LCD such as 270 may be located in one of two possible positions on the periphery of each secondary angle-transforming cube such as 350. The embodiment as shown in FIG. 20 places LCD 270 in line of sight with light source 284, as for example in the green angle transforming subsystem 354. The alternative locations, 90 degrees to the line of sight with light sources 284, 286 and 288, are shown as dotted rectangles 358, 360, and 362. The configuration shown in FIG. 20 operates analogously to those shown in FIG. 17 and FIG. 19, and is illustrated by ray paths drawn in the red subsystem.
Pre-polarized monochromatic output light (p polarized) from the solid-state panel lamp, beginning as illustrative ray 364, is transformed from +/-22.5 degrees at the source to, as one convenient example, +/-12 degrees, in the same manner as with the concave mirror 290. Here, however, the optical power of lens 356 brings all rays collected to its front focal plane 368. The illustrative ray 364 continues through lens 356 as ray 370, which passes through the reflective polarizer layer 292, also as before, either in air or in the medium of an immersing beam splitter cube 372, such as those described above. In this configuration, all light within the beam splitter cube has an angle no greater than +/-8 degrees (or the equivalent angle in the beam splitting medium, Sin⁻¹(Sin ω/n), also as before). On reaching, and passing into and out of, the reflective LCD 268, the on state image light is changed from incoming linear polarization state p to the outgoing orthogonal polarization state s, which is reflected outwards by the reflective polarizer layer 292 towards the projection lens 276 as illustrative ray 374. So-called off state light, which is not to be part of the modulated output image, develops in spatial regions where the LCD's electronic pixel bias contributes incomplete or no phase retardation to the passing light. Accordingly, the polarization state of such outgoing light is either partially converted or not converted at all to the s polarized state that reflects efficiently from reflective polarizer 292 to projection lens 276. In this example, all p polarized light leaving the LCD passes back through reflective polarizer layer 292 along the path it or some other input ray arrived on. In doing so, this light returns back to the light source 288 from which it came, just as was discussed earlier. Once back at the light source 288, this rejected light is mixed inside the source cavity structure as set forth in FIG. 14, FIG. 15 and FIG.
16, and may be re-emitted (less any transit losses) inseparably from newly emitted light. Such re-emission potentially increases the overall system efficiency without any disadvantage, and without either increasing the spatial extent of the beam or widening its angular extent. This behavior is a unique characteristic of a cavity source. Once such recycled light returns to the cavity, its re-emission from the cavity is thermodynamically equivalent to increasing the input power that generates the light in the first place.

5.1.4 Example 4: Reflective LCD Projection System #4 Based On FIG. 21

The angle transforming structure of FIG. 20 can be extended, as in FIG. 21, to improve on the conventional polarization recovery process used within light sources 288, 284 and 286, or at other equally strategic system recycling reflective polarizer mirror plane locations, which at best converts only 50% of the unusable polarization (i.e. s polarized light) into the desirable one (i.e. p polarized light). In the conventional polarization recovery process a flat reflective polarizer is incorporated, for example, in layer 28 above the angle-controlling layers 58 and 60 as in FIG. 15. So positioned, the transmission axis of the reflective polarizer is oriented for highest passage of the polarization that must pass through the ensuing optical system (i.e. s polarization), while reflecting or recycling light polarized in the orthogonal state (i.e. p polarization) within white reflecting cavity 217 as in FIG. 15. Light so trapped inside white cavity 217 scatters randomly off cavity walls 85 and layers such as the undersides of 58 and 60, until a fraction of this light converts statistically to the transmissive polarization state along output ray paths allowed by layers 58 and 60. This recycled light then exits with an incremental gain over the original flux transmitted by the reflective polarizer layer incorporated within sheet 28.
All other spatial locations for this recycling element are not as preferable as they return less light to the light source cavities by virtue of the extra optical path length inefficiencies the light encounters and the increased angular spreading that this extra path length imparts.There is at least one such system variation, however, that eliminates the need for polarization recycling within the source altogether, and recovers substantially all the power of the unused polarization in a completely different way. This alternative situation occurs when a second LCD 269 is added to the first LCD 268 in each of the three angle transformer configurations of FIG. 20 . This structure is illustrated for just one transformer subsystem 354, in FIG. 21 . In this configuration, lens 356 transforms the angle β of all unpolarized light such as represented by ray 380 from light source 288 so that all light transmitted on the output side of the lens such as represented by ray 382 is converging with maximum angle not exceeding ω (+/-12 degrees in air, +/- 8 degrees in dielectric media, as in all previous examples) and that still contains both s and p polarized flux. When this unpolarized light reaches reflective polarizer 292, it is split evenly into two polarized beams, one containing ray 384 (s polarized and reflected towards LCD 269) and one containing ray 386 (p polarized and transmitted towards LCD 268). The LCDs are each arranged as described above so that the so-called on state output light has the orthogonal linear polarization of the incoming illumination. Consequently, LCD 269 reverses incoming s polarized illumination provided for example by ray 384 to outgoing p polarized image light represented by ray 388 that passes sequentially through reflective polarizer 292, beam combiner 274, and projection lens 276, when used in the complete projection system of FIG. 20 . 
And, equivalently, LCD 268 reverses its incoming p polarized illumination that has transmitted through reflective polarizer 292, for example as ray 386, to outgoing s polarized image light, represented by converted ray 390, which passes sequentially through beam combiner 274 and projection lens 276 as reflected output ray 392, after reflection by reflective polarizer 292. The two monochromatic on state output image beams 394 are exactly superimposed on each other spatially and combine to create an unpolarized composite beam. While viewing unpolarized light on a projection screen is ordinarily quite normal, the output of unpolarized light as mixed by the subsystem of FIG. 21 does eliminate the possibility of reducing the amount of unwanted background light with a clean-up polarizer oriented to block the unwanted light while passing the wanted light. This means that special care must be taken to minimize unwanted background and ghost reflections in the first place by means of anti-reflection coatings wherever appropriate, such as for example on the faces of lens 356 and on the output faces of each LCD 268 and 269.

5.1.4.1 Potential 3D Viewing

One potential advantage of having two polarized LCD image sources for each monochromatic color, as provided by the subsystem of FIG. 21, is that these separate images can later be separated for independent projection by two projection lenses, one for each polarization, to create a stereo image. In doing so, each LCD 268 and 269 is controlled by separate electronic pixel addressing means, one for each of the effective left and right eye images. One standard device for spatially separating beams of s and p polarization is a prism cube in the form taken by the tri-color combiner 274. In this application the internal prism diagonals are coated with reflective polarizing layers whose transmission axes have been oriented 90 degrees to one another.

5.1.5 Example 5: Reflective LCD Projection System #5 Based Upon FIG.
22

Another form of the system of FIG. 21 is illustrated in FIG. 22. In this variation, four extra elements have been added: a second monochromatic light source 287, normally having the same wavelength as first source 288 and placed on the cube face positioned 90 degrees from the first source; a second reflective polarizer layer 406, tilted 90 degrees from the first and oriented to pass p polarized light from the first source 288; and a broad band quarter wave phase retardation layer 294 and a concave mirror 290, both as described above, placed on the cube face directly opposite the new light source 287. Light sources 287 and 288 can each be pre-polarized, one s polarized (287) and the other p polarized (288), or both sources can remain unpolarized, letting the composite structure of FIG. 22 perform both the polarizing and angle transforming functions needed by each LCD. The basic operating principle of the monochromatic angle-transforming system configured in FIG. 22 is demonstrated by a selected series of illustrative rays that isolate the behavior of the additional light source 287 only. Light from the first source 288 will follow a similar pattern and then overlap with light from the extra source during input collection by lens 408, which in turn results in converging composite output rays 426 representing the p and s polarized light from both sources. In this system, the lens function is represented only schematically by element 408, and its scale between upper and lower cubes 428 and 430 has been exaggerated for visibility. In its most compact form, element 408 could be a Fresnel lens. Convex optical power could also be added to the opposing output and input faces of the upper and lower cubes, if dielectric. Illustrative unpolarized light ray 410 from extra monochromatic light source panel 287 is split evenly by interaction with reflective polarizer layer 406 into two linearly polarized rays, p polarized ray 412 and s polarized ray 422.
S polarized ray 422 is reflected into the collecting aperture of angle transforming lens element 408, whose back focal plane distance BF equals the 90 degree optical path length taken between the plane of lens 408 and the output aperture plane of light source 287. P polarized ray 412 passes efficiently through reflective polarizer 406 towards the quarter wave phase retardation layer 294 and the collecting aperture of illustrative concave reflecting element 290. On reflection by element 290, and re-passage through retardation layer 294, the polarization of ray 412 is converted, as described several times above, to s polarized ray 414, which heads back towards reflective polarizer 406. On striking reflective polarizer 406, ray 414 is reflected towards first light source panel 288, and into its aperture within an angular range allowing efficient transmission through its upper layers 60 and 58 (as in detail 308 in FIG. 18). Once inside one of the reflecting cavities 228 of any illuminating pixel such as 308, the continuation of ray 414 is a statistical one based on multiple opportunities for scattering and reflection from cavity elements, and all memory of incoming angle and polarization is reduced significantly if not eliminated. As such, the re-emitted ray that eventually emerges, 420, has an equal probability of being p or s polarized. If it emerges s polarized, it will be blocked by reflective polarizer 406 and reflected towards the polarization changing concave mirror element 290, whereupon it will be re-directed to the aperture of the extra light source 287 and re-cycled within its pixel cavities. Such unusable output light will continue to move from source to source in this manner until transit losses diminish its energy or a usable p or s polarized output ray is created by the light source cavity's built-in randomization processes.
P polarized ray 420, successfully created by this randomization process and symbolized by ray path 418, is re-emitted towards and transmitted through reflective polarizer 406, but only within the allowed angular output range of light source layers 58 and 60 (i.e. +/-22.5 degrees). As such, ray 420 is treated by angle transforming lens element 408 no differently than s polarized ray 422 that came directly from light source 287 on the first pass. Accordingly, the sum of all recycled rays (s polarized from light source 287 and p polarized from light source 288) adds to the directly reflected flux from each source to contribute a composite output beam, symbolized by ray 426, whose total number of lumens exceeds the sum of 0.5 of the lumens from light source 287 and 0.5 of the lumens from light source 288 by the recycled fraction so contributed. Whether this lumen total is greater or less than the total lumens provided if each light source 287 and 288 had been pre-polarized directly, using separate reflective polarizer layers 28 placed above each light source aperture as has been described above, depends on the respective recycling efficiencies of the two methods. Yet, with either recycling approach available, it seems preferable to perform the polarization recovery process directly within the light source panels 287 and 288 themselves, thereby avoiding the long optical path lengths and the various reflection and transmission inefficiencies involved in the recycling processes described for ray 412. The structure of FIG. 22 is advantageous for two reasons, whichever polarization recovery approach is used. It provides a way to more than double the amount of pre-polarized, angle transformed monochromatic light provided to a projection system such as has been described by FIG. 20, and, while doing that, it provides a separate means of controlling the light level for each of the two LCDs 268 and 269, which is useful when they are used in a stereo projection application.
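The dependence on recycling efficiency can be made concrete with a hypothetical closed-form model (ours, not the text's) of the source-to-source recycling loop: on each cycle, rejected light re-enters a source cavity and re-emerges with probability 0.5 in the usable polarization, attenuated by a per-cycle transit efficiency t. Summing the geometric series gives a usable fraction f = 0.5/(1 - 0.5t).

```python
# Hypothetical model of repeated source-to-source recycling: each cycle,
# half of the re-emitted light has the usable polarization and the rest
# is recycled again after a transit efficiency t, so
# f = 0.5 * (1 + 0.5*t + (0.5*t)**2 + ...) = 0.5 / (1 - 0.5*t)
def usable_fraction(t):
    return 0.5 / (1 - 0.5 * t)

lossless = usable_fraction(1.0)   # with lossless transit, all light is usable
lossy = usable_fraction(0.8)      # ~83% recovered at 80% transit efficiency
```

Under this model the recovery advantage erodes quickly with transit loss, which is consistent with the text's preference for performing polarization recovery directly inside the light source panels.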
This light level control is achieved by independently setting the electrical power applied to light source panels 287 and 288, when each is not simply set to the maximum power allowed. The exact x, y, z location of LCD 268 on its respective focal plane relative to LCD 269 is adjusted until the spatial image output overlap is exact, pixel for pixel, horizontally and vertically. Off state light from both LCDs returns, as described before, to the light source cavity, in the case of LCD 268 by transmission through reflective polarizer 292, and in the case of LCD 269, by reflection from reflective polarizer 292. In this case, the potential contribution to the dynamic brightness peaking mechanism described for the system of FIG. 17 is enhanced by the improved polarization utilization efficiency. Despite apparent pixel for pixel registration of LCDs 268 and 269 by physical alignment, successful image overlap in output beam 394 also requires that image information applied electrically to one LCD (for example 268) be transposed along the x (396) and y (397) axes shown in FIG. 21 and FIG. 22 with respect to the other LCD (for example 269), or vice versa. Without performing such a mirror image transformation on one LCD's image, the two illustrative LCD output images will not superimpose correctly.

5.1.5.1 Image Inversion Means

The image inversion required is illustrated graphically in the three-dimensional perspective of FIG. 23, with the mirror image plane being orthogonal to the image plane of LCD 268 and parallel to z axis 452. Identical image information is applied to each of the LCDs' pixel arrays forming the respective AB images shown, but the pixel columns on LCD 268 are made the reverse of those on LCD 269. By making this mirror image transformation electronically, light 440 from LCD 268 image point 400 and light 438 from LCD 269 image point 402 can be exactly overlapped spatially, as illustrative s and p polarized output rays 442 and 444 demonstrate.
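The electronic mirror-image transposition amounts to reversing the pixel columns of one LCD's frame relative to the other. A minimal sketch (the frame contents and variable names are illustrative, not from the text):

```python
# The frame sent to one LCD is the column-reversed (left-right mirrored)
# copy of the frame sent to the other, so the two reflected images
# superimpose pixel for pixel.  A 3 x 4 test frame stands in for the image.
frame_269 = [[0, 1, 2, 3],
             [4, 5, 6, 7],
             [8, 9, 10, 11]]                   # image applied to LCD 269

frame_268 = [row[::-1] for row in frame_269]   # mirrored image for LCD 268

# Mirroring twice recovers the original, as exact overlap requires
restored = [row[::-1] for row in frame_268]
```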
This behavior is also shown for image points 434 and 436, which superimpose spatially as illustrative light rays 446 and 448. This same image transformation approach is applied within the overall projection systems of FIG. 17, FIG. 19, and FIG. 20 to achieve equally precise overlaps between the red, green and blue image beams combined in each case by means of dichroic combiner cubes 274. With the arrangement of FIG. 17 as one example, and a conventionally arranged AB image applied to LCD 268, the same conventionally arranged AB image is applied to LCD 272, and the mirror image arrangement is applied to LCD 270. Then, since the system of FIG. 17 allows the locations of the LCD and the light source to be physically reversed, doing this only for LCD 270 and light source panel 284 allows the conventionally arranged AB image to be applied electronically to all three LCDs without such modification. In this case, the desired image inversion is performed optically by means of the 45-degree mirror plane of the reflective polarizer layer 292. In the configuration illustrated by FIG. 19 no such physical image correction is available, and electronic inversion of the image is required on LCD 270. The variation of FIG. 20, however, allows the same degree of physical layout flexibility as does the system of FIG. 17. As drawn, with a conventional image arrangement applied to LCDs 268 and 270, the mirror image arrangement is applied electronically to LCD 272. The conventional AB image arrangement can be applied to all LCDs including 272 if the position of LCD 272 is moved from the position drawn in FIG. 20 to the dotted position 362, changing only the polarization of light source panel 286 from p polarized to s polarized.

5.1.6 Example 6: Transmissive LCD Projection System #1 Based Upon FIGS. 24-25

The basic light source panel and reflective LCD variations of FIG. 17 and FIG. 20 can also be applied with transmissive LCDs, using the dichroic tri-color combiner cube 274 of FIG.
17 as the angle transformed image light router to a common projection lens 276, as can the Philips prism version shown in detail 310 of FIG. 16. Corresponding system layouts are shown schematically in FIG. 24 and FIG. 25. In each case, the monochromatic light sources of FIG. 15 are internally polarized, as discussed above, because the LCD operates preferably with light of a single state of linear polarization. In the system embodiments of FIGS. 17-25, there are three fundamental relationships that extend to all subsequent examples as well. First, the etendue (aperture dimension times the Sine of the emitted angle) of the light source panel is matched to that of the LCD (spatial light modulator). Doing so assures that, before losses to inefficiency, the maximum possible transfer of lumens between source and projected image is effected. Since the spatial light modulator apertures are generally rectangular, it is sufficient to match source and modulator etendue along their x and y axes, as in equations 14 and 15.

ux Sin βx = Ux Sin ωx (14)
uy Sin βy = Uy Sin ωy (15)

This assures that light source panels 284, 286 and 288 are sized preferably for the sizes of the LCD used. Kohler type illumination optics has been illustrated as the preferential means to achieve the amount of angle transformation needed between source and image. This has been accomplished using reflective power in the embodiments of FIG. 17, FIG. 19 and FIG. 24, and with purely refractive power in the embodiments of FIG. 20, FIG. 21, FIG. 22 and FIG. 25. The geometric relationships involved are summarized in equations 16 and 17, and apply to all following examples as well. The equations 16 and 17 use F to specify the focal length of the illustrative concave mirror elements 290 and 291, and the back focal distance of the spherical or aspheric lens elements 356 and 408.
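The etendue-matching relations can be checked against the illustrative apertures and angles used throughout the examples:

```python
import math

# Etendue matching check: u*Sin(beta) must equal U*Sin(omega) on each axis.
# All values are the text's illustrative source and LCD dimensions.
u_x, u_y = 13.25, 9.94        # light source panel edges, mm
U_x, U_y = 24.384, 18.288     # LCD aperture edges, mm
beta, omega = 22.5, 12.0      # half-angles in air, degrees

src_x = u_x * math.sin(math.radians(beta))     # source etendue term, x axis
mod_x = U_x * math.sin(math.radians(omega))    # modulator etendue term, x axis
src_y = u_y * math.sin(math.radians(beta))     # source etendue term, y axis
mod_y = U_y * math.sin(math.radians(omega))    # modulator etendue term, y axis
```

Both axes balance to within rounding, which is why the 13.25 mm x 9.94 mm panel at +/-22.5 degrees pairs with the 24.384 mm x 18.288 mm LCD at +/-12 degrees.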
The equations also use the subscript d to refer to the aperture diagonal rather than the corresponding dimensions along the aperture's x and y axes. In this manner, a circularly symmetric lens or mirror is used, and truncated to remove those portions of the circle not receiving light. When using cylinder lenses for each of the system's x and y axes, equations 16 and 17 are applied along those axes, using ux, uy, Ux, Uy, βx, βy, ωx and ωy rather than the diagonal values indicated. The angular range along the diagonal for the light source panels of FIG. 16 is about +/-32 degrees.

Ud = 2F Tan βd   (16)

ud = 2F Tan ωd   (17)

5.1.7 Example 7: Field Sequential Transmissive LCD Projection System #2 Based Upon FIG. 26

The image projection system variations of FIG. 17, FIG. 19, FIG. 20, FIG. 21, FIG. 22, FIG. 24 and FIG. 25 have each used three reflective or transmissive LCD panels per system, one for each of the three monochromatic light source panel colors red, green and blue (288, 284 and 286 respectively). It is equally practical to use a single transmissive or reflective LCD panel, provided that the single panel is capable of electronic switching speeds fast enough to enable field sequential color illumination. Instead of applying the image information monochromatically to three separate LCDs and then mixing the monochromatic image beams into one composite image beam as in the configurations disclosed above, the tri-color illumination is applied to a single LCD in rapidly sequenced periods of red, green and blue that correspond to an image frame rate fast enough that the viewer's eyes are unable to distinguish the individual red, green and blue image frames, and the perception is of full color imagery. This single modulator method, discussed further below, has been used successfully with the DMD in many commercial projector products.
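As a numerical illustration of the etendue-matching relation (equations 14 and 15) and the Kohler focal-length relation (equation 16), the Python sketch below uses aperture and angle values quoted later in these examples (a 13.25 mm output aperture at +/-22.5 degrees); the particular numbers and helper names are illustrative assumptions, not part of the original disclosure.

```python
import math

def matched_aperture(U, omega_deg, beta_deg):
    """Equations 14/15: u*Sin(beta) = U*Sin(omega), solved for the source dimension u."""
    return U * math.sin(math.radians(omega_deg)) / math.sin(math.radians(beta_deg))

def focal_length(U, beta_deg):
    """Equation 16: U = 2*F*Tan(beta), solved for the Kohler element focal length F."""
    return U / (2.0 * math.tan(math.radians(beta_deg)))

# A 13.25 mm modulator-side aperture at +/-22.5 degrees, matched to a source
# emitting into the full +/-90 degree hemisphere (Sin 90 = 1):
u = matched_aperture(13.25, 22.5, 90.0)
print(round(u, 2))  # about 5.07 mm, close to the 5.167 mm cavity aperture quoted later

# Focal length of a Kohler element filling that 13.25 mm aperture from +/-22.5 degrees:
F = focal_length(13.25, 22.5)
print(round(F, 1))  # about 16.0 mm
```

As the etendue relation requires, shrinking the source angle β at fixed source size u forces a proportionally larger source aperture for the same lumen transfer.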
Recent advancements in LCD technology, however, are leading them towards the faster switching speeds needed as well. One such system embodiment based on the configuration of FIG. 17 is illustrated in FIG. 26. In this approach, a single dichroic combiner 274 is used to mix the angle-transformed output light from each of three separate monochromatic light source panels 284, 286 and 288. In this arrangement, the focal length of the illustrative mirror 476 must be sufficient to match the optical distance through combiner cube 274 to the single transmissive LCD 474.

5.1.8 Example 8: Transmissive LCD Projection System #3, Based Upon FIG. 27

A more compact variation on the embodiment of FIG. 26 is shown schematically in FIG. 27, which achieves compactness by making use of the same extra folding path as angle transformer 466 in FIG. 24. This layout eliminates the large separation between, for example, light source panel 288 and mirror element 476 in FIG. 26 by creating this same optical path length over the ray path from light source 288 to polarization converting mirror plane 464, to reflective polarizer 294, and then to concave mirror 290. This approach uses two metallic mirrors 290 and 464 that are placed 90 degrees from each other, and two quarter-wave phase retardation layers 294.

5.1.9 Example 9: Transmissive LCD Projection System #4, Based Upon FIG. 28

Yet another compact projection system arrangement is illustrated schematically in FIG. 28. This variation combines light from the three monochromatic light source panels 284, 286 and 288 prior to angle transformation in a single tri-color combiner cube 487, as was done for use with three reflective LCD panels in the compact system of FIG. 19. The variation shown in FIG. 28 uses the most compact angle transformer form 486, which includes an extra folding step by means of reflecting element 464, so as to permit transmissive LCD 474 to be closer to the angle transformer cube than it otherwise would be located.
Alternatively, and not illustrated, this extra folding step can be removed and the transmissive LCD 474 (and projection lens 276) moved upwards until it is on the focal plane of reflecting element 290.

5.1.10 Example 10: Transmissive LCD Projection System #5, Based Upon FIG. 29

Still another compact projection system arrangement for single transmissive LCD panel 484 is shown schematically in FIG. 29. In this variation, three monochromatic refractive angle transformers 490, 492 and 494 are combined with a single tri-color dichroic combining cube 274 in a space-saving way, overlapping the output beams within the combiner in a manner that was not possible with the 3-panel transmissive system of FIG. 25.

5.1.11 Example 11: Transmissive LCD Projection System #6, Based Upon FIG. 30

A more compact variation on the arrangement of FIG. 29 is shown schematically in FIG. 30, combining a single refractive angle transformer with the composite tri-color output beam of single dichroic combiner cube 274.

5.2 Projection Systems with Digital Micro Mirror Device (DMD)

All light source panel projection system examples thus far have been limited to reflective and transmissive LCDs. The same approaches, however, can be applied with similar advantage to the reflective digital micro mirror device (DMD) manufactured by Texas Instruments, thereby replacing the mechanical means of sequential color generation with the monochromatic light source panels 284, 286 and 288 of the current invention.

The DMD is a reflection mode SLM that features an array of typically 17 micron square micro mirrors across the rectangular reflecting aperture that deflect very rapidly in response to electronic control signals, changing the direction of the f/2.4 (+/- 12 degree) light falling on each illuminated mirror image pixel. Electronic signals address individual control elements on the DMD's CMOS substrate, pulling mirror corners down into contact with the substrate in what can be described as a see-saw manner.
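The f/2.4 cone quoted for DMD illumination corresponds to about a +/-12 degree half-angle, and the f/1.3 light source panels to about +/-22.5 degrees. This brief Python sketch checks that correspondence using the convention N = 1/(2 Sin θ); the convention choice is an assumption for illustration, though it reproduces both figures quoted in the text.

```python
import math

def half_angle_deg(f_number):
    """Half-angle of an illumination cone from its f-number, using N = 1/(2*Sin(theta))."""
    return math.degrees(math.asin(1.0 / (2.0 * f_number)))

print(round(half_angle_deg(2.4), 1))  # about 12.0 degrees: the DMD illumination cone
print(round(half_angle_deg(1.3), 1))  # about 22.6 degrees: the +/-22.5 degree source panels
```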
Mirror deflection speed can be faster than video frame rates because of the extremely low mass of the thin-film mirrors. Molecular reorientations in liquid crystals, by comparison, are generally more sluggish, making standard LCDs less preferable SLM candidates for field sequential color illumination. With a DMD, light is either deflected within the field of view of the system's projection lens, or outside it, thereby creating the pixel-by-pixel contrast ratios that make up a digital image. Color, in commercial DMD projector products, is derived from the white input beam of a reflectorized halogen discharge lamp. White light from the lamp is broken into brief sequential time bursts of red, green and blue by color filter segments on a rapidly spinning disk (color wheel) placed in the beam path. Electronic bias applied to the mirror array, mirror by mirror, during each period of synchronized monochromatic illumination corresponds to an image frame that has been modulated for the particular color. These very short sequential red, green and blue modulated color image frames are integrated and perceived by the viewer as being a full-color image. Image intensity is developed by a summation process, within each modulated color image frame, of the number of mirror deflections that are made into the field of view.

DMD projection systems that rely only on the limited deflection angle of the micro mirrors themselves to create image contrast are not as preferable as those systems that use defeat of total internal reflection in a prism structure to increase the effective rejection angle with respect to the system's projection lens 276. This approach is possible in a transparent dielectric medium because the critical angle predicted by Snell's Law between a light ray and the dielectric-air boundary plane is about 42 degrees for acrylic (θc = Sin-1(1/n), n = 1.49).
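The critical angle relation just cited, θc = Sin-1(1/n), can be evaluated directly; the following minimal Python sketch does so for the acrylic index given in the text (the helper names are illustrative, not from the original disclosure).

```python
import math

def critical_angle_deg(n):
    """Critical angle for total internal reflection at a dielectric-air boundary."""
    return math.degrees(math.asin(1.0 / n))

theta_c = critical_angle_deg(1.49)  # acrylic
print(theta_c)                      # about 42.2 degrees, as stated in the text

def is_totally_internally_reflected(incidence_deg, n=1.49):
    """True when a ray striking the boundary at this angle from the normal is trapped."""
    return incidence_deg >= critical_angle_deg(n)
```

Rays striking the boundary farther from the normal than θc are reflected as if from a perfect mirror; rays inside θc refract out into air per Snell's Law.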
If the angular extent of the collimated light beam is as in the examples above, +/-8 degrees in dielectric, there is ample room for this +/-8 degree beam on both sides of the critical angle. When the beam is to be internally reflected, its axis must make an angle with the normal to the air-dielectric boundary of 42 degrees plus 8 degrees, or 50 degrees. At a 50-degree angle of attack, light rays at +8 degrees will strike the boundary at exactly the 42-degree critical angle, and light rays at -8 degrees will strike the boundary well beyond the critical angle, at a strike angle of 58 degrees - so that all rays are reflected dielectrically. Yet, when the DMD's micro mirrors deflect portions of the beam by 20 degrees or more, those portions of the beam strike the boundary at angles less than the critical angle, and refract through that boundary into air and through all subsequent dielectric materials according to Snell's Law. This construct allows input light to be channeled to the DMD on one path, and the projection lens to image the DMD on another, much the same as was achieved in the systems of FIG. 17 and FIG. 19 with a reflective polarizer and a means for polarization conversion.

5.2.1 Example 12: DMD Projection System #1, Based Upon FIG. 31

One specific example of a compact DMD projection system using monochromatic light source panels 284, 286 and 288 is given schematically in FIG. 31, using tri-color dichroic combiner cube 274 of FIG. 19 (which could also be a Philips prism arrangement as in FIG. 16) and the refractive non-imaging, Kohler-type angle transformation arrangement of FIGS. 25, 29 and 30. In the system of FIG. 29, the angle transformer's converging output beams 488 were transported through the body of dichroic combiner cube 274. In the system of Fig.
31, it is the three monochromatic beams from the light source panels 284, 286 and 288 that transport through dichroic combiner cube 274, and it is the transformer's output beams 500 that are transported instead through the total internally reflecting prism coupling block 502 to focal plane 504 of lens element 356, arranged to coincide with the mirror plane 506 of DMD substrate 508. Illustrative red ray 510 emitted from light source panel 288 passes through both reflective filters 278 and 280 of combiner cube 274, is processed by lens 356, passes through input face 512 of prism coupling block 502 and impinges on the prism's tilted output face 514 at illustrative point 516. Provided the angle, A, made with surface normal 518 exceeds the critical angle calculated for the transparent dielectric medium of prism block 502 (Ac = Sin-1[(Sin 90)/n], where n is the refractive index of the prism medium; Ac is about 42.2 degrees for n = 1.49), the ray reflects from prism surface 514 as if from a perfect mirror. Reflected ray 520 continues trapped in the medium of prism block 502 until reaching prism base 522 at angle σ 526 to surface normal 524, angle σ being A - α, where α is the prism angle 528. Since the prism geometry affects the choice of the back (and front) focus distance of lens 356, and determines overall system compactness, the geometric relations are explained in expanded detail by magnified view 530, which shows the effect of DMD mirror tilt, which can be either in the form of 532 or 534, the two extreme mirror positions set electronically.
In DMD mirror position 534, for example, the mirror tilts counter clockwise an angle µ (540) measured from plane 506 of the DMD substrate. When any modulated DMD mirror is in position 534, ray 520 refracts out of prism base 522 (and any planar cover glass protecting the DMD itself) into air space 538 directly above the DMD as governed by Snell's Law, and then reflects as ray 542 back through prism base 522 and towards tilted prism face 514. In this case, the geometric goal is that axial ray 510 from the center point of light source panel 288 is so reflected that it travels directly along the surface normal 546 of prism base 522, so that on reaching prism face 514, it does so making an angle α with prism face surface normal 518 that is sufficiently less than the critical angle that the ray refracts into the small amount of air space 544 above prism face 514, and in turn, through mating prism coupling block 548, and out into projection lens 276. This ray path represents the on state for DMD mirrors that contribute light to the projected image. This illustrative condition is satisfied when τ = 2µ. The underlying geometric relationships are given in equations 18-21. Hence, if axial ray 510 makes an angle of 50 degrees with the normal 518 to prism face 514, as explained above, and the DMD tilt angle µ is 20 degrees, the corresponding prism coupler angle α (528) is calculated from equation 19, α = A - Sin-1[(Sin 2µ)/n], and for A = 50 degrees and an acrylic prism is 24.4 degrees.

τ = Sin-1(n Sin σ)   (18)

α = A - σ   (19)

ε = 90 - τ   (20)

δ = Sin-1[(Sin ε)/n]   (21)

When the DMD mirror is flipped electronically to its off state position 532, the ray in DMD air gap 538 is reflected as ray 550, which makes angle ε with DMD substrate plane 506, as shown in detail 530, and given by equation 20. For the example conditions, ε = 50 degrees and δ, the angle off state refracted ray 552 makes with DMD surface normal 546, is about 31 degrees in the dielectric prism medium.
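The prism coupler geometry of equations 18-21 can be checked numerically. The short Python sketch below assumes the example values from the text (acrylic prism, n = 1.49, incidence angle A = 50 degrees, DMD mirror tilt µ = 20 degrees) together with the on-state condition τ = 2µ; the function name is illustrative only.

```python
import math

def prism_coupler_angles(A=50.0, mu=20.0, n=1.49):
    """Equations 18-21 for the TIR prism coupler; all angles in degrees."""
    tau = 2.0 * mu                                                    # on-state condition
    sigma = math.degrees(math.asin(math.sin(math.radians(tau)) / n))  # eq. 18 inverted
    alpha = A - sigma                                                 # eq. 19: prism angle
    epsilon = 90.0 - tau                                              # eq. 20: off-state angle
    delta = math.degrees(math.asin(math.sin(math.radians(epsilon)) / n))  # eq. 21
    return alpha, epsilon, delta

alpha, epsilon, delta = prism_coupler_angles()
print(round(alpha, 1), round(epsilon, 1), round(delta))  # 24.4 50.0 31, as in the text
```

The off-state exit angle δ - α then comes out near the 6 degrees quoted in the next paragraph.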
As a result, off state ray 552 makes an angle of δ - α, or about 6 degrees, with prism face normal 518, which is far from the critical angle, and the ray refracts from prism block 502 to prism block 548 and exits into air far outside the field of view of projection lens 276. The sole purpose of the double prism block unit 560 used in the embodiment of FIG. 31 is to shift the DMD's off state light far enough outside the range of view of projection lens 276 that the DMD's image contrast ratio is maximized. This red, green and blue light from the tri-color light source panel block 340 is thereby permanently lost, and cannot be recycled, either for increased efficiency or for what has been described above as a dynamic brightness peaking when the number of on pixels becomes considerably larger than the number off. That is, maximum image brightness in the DMD projection system of FIG. 31 is a constant per pixel no matter how many (or how few) pixels are switched into an on state condition.

5.2.2 Example 13: DMD Projection System #2, Based Upon FIG. 32

A variation of the DMD projection system of FIG. 31 that is arranged not to have such a visually static behavior is illustrated schematically in FIG. 32. In this case, the two prism coupling blocks 570 and 572 are each cut with a unique geometry defined by prism face angles Φ, Ω and γ (574, 576 and 578 respectively). In this illustration, block 572 is drawn with γ = 0. The resulting face angles assure, as shown in magnified detail 580, that all converging input light rays 500 associated with the DMD's off state (i.e. mirror position 582) are retro-reflected back along one of the converging paths they came in on, and thereby return to the source cavities. The same pseudo-Kohler β to ω angle-converting illumination system is used in the system of FIG. 32 as was used in FIG.
31, except that now lens system 582 is adapted to provide a practical means for tilting the system's effective focal plane through an angle Ω 576 about the center point 586 of the DMD aperture, rather than letting it remain parallel to lens plane 592, as it would be under normal circumstances with every focal point falling on plane 584. Since the DMD mirrors are fixed to lie along tilted prism face 606, the light arriving from tri-color source 340 would be out of focus over most of the DMD mirrors, leading to loss in uniformity and efficiency. Avoidance of such losses requires that the focal plane be tilted to match slope 606 of the DMD mirror plane. The means for tilting the focal plane of any standard finite imaging lens is known ordinarily as Scheimpfluging, and is accomplished by rotating the lens plane in the same direction as the desired tilt. The Scheimpfluging method applies only to finite imaging systems where neither the object plane nor image plane coincide with the system's focal planes, and where the image has a magnification defined by the ratio of the respective object and image distances. In the present circumstance, however, both the object (light source panel 284, 286 or 288) and the corresponding image plane are placed deliberately at each of the lens system's focal planes as the means of preventing a sharp image. Under these deliberate non-imaging conditions the conventional Scheimpfluging process will not work properly.

5.2.2.1 Focal Plane Tilting in Non-Imaging Illumination Systems

An alternative to conventional Scheimpfluging is represented schematically in FIG. 33. This preferable two-lens focal plane rotating system 582 tilts one focal plane 640 relative to the axis of the other 638, by fixing one lens element 646 and rotating the other 648. In the example of FIG. 33, the input lens 646 is fixed and the output lens 648 is rotated.
In operation, input lens 646, the first of composite lens pair 582, operates on incoming light rays 639 under finite imaging conditions, with plane 638 treated as an object plane rather than a focal plane. Output light rays collected by lens 648 from lens 646 appear as if emanating from a virtual object plane to the left of plane 638, and they are in turn routed to final image plane 640 from what is now a finite (rather than infinite) object distance to the right of plane 641. By the rotation of lens 648 through angle 642, image plane 640 now tilts through angle 644 in accordance with the conventional Scheimpfluging relation. Input lens element 646, in the illustration represented in FIG. 33, is biconvex with a 200 mm spherical first surface radius, a 6.5 mm thickness, and a 40 mm conic (parabolic) second surface radius. Output element 648 is also biconvex, with a parabolic first surface radius of 50 mm, a 9 mm thickness, and a parabolic second surface radius of 100 mm. The semi-diameters of lenses 646 and 648 are 22 and 25 mm respectively. The semi-height of light source panel 288 is taken as 6.625 mm, the maximum input angle β is 22.5 degrees, the spacing between source 288 and lens 646 is 29.4 mm, and the corresponding transformed output angle ω is 12 degrees. Under these particular circumstances, rotation 642 of 12 degrees results in tilt 644 of about 17 degrees. Output rays 660 arrive on and pass through focal plane 640, as intended, at points corresponding to the summation of light emitted from source panel 288 at a given angle.

Hence, with such a lens pair taken as lens system 582 in FIG. 32, not only will the focal plane of the lens system tilt in parallel with neutral DMD mirror plane 599, but the off-state light reflected from mirror position 582 will return to the light source cavities such as the one represented in detail 308 of FIG.
18, where, as described earlier, it can be re-cycled in a different output angle and polarization to contribute a dynamic boost or peak in image brightness. For simplicity, only one illustrative off state return path, the one associated with axial illumination ray 600, is represented schematically in detail 580 of FIG. 32. This ray enters prism coupling block 570 on face 620 and proceeds through the finite air gap 604 between prism blocks 570 and 572 into block 572 and towards its exit face 606, which is tilted to horizontal axis 608 through angle Ω, 576 (the same as angle 644 in FIG. 33). On entering air gap 585 above the DMD's substrate plane 610, which is approximately parallel to the prism block's exit face 606, continuing ray 612 strikes the DMD mirror switched into its off state position (582) at normal incidence. The geometrical relationships that assure axial ray 612 arrives along the surface normal to this mirror position 582 are given in equations 23-25, constrained by τ = µ, µ once again being the DMD micro mirror tilt angle. These relationships extend from equations 18-21 applied to the new geometric orientation of FIG. 32.

Ω = 90 - Sin-1[(Sin τ)/n]   (23)

τ = Sin-1(n Sin σ)   (24)

σ = 90 - Ω   (25)

For µ = 20 degrees as before, Ω is about 76.7 degrees, which represents a tilt of only 13.3 degrees from the vertical axis. Under these circumstances, incoming axial ray 612 reverses its direction as ray 613, as if it were being emitted towards lens system 582 from focal point 586. Neighboring rays of original incoming ray 600 are all converging to (or near) the same focal point 586, and while these rays do not traverse backwards along the paths they arrived on, they traverse backwards along the symmetrical path taken by a neighboring ray, as set forth by the law of reflection at mirror plane 582.
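The tilted-prism-face relations of equations 23-25 can likewise be checked numerically; the brief Python sketch below assumes the example values from the text (µ = 20 degrees, acrylic prism, n = 1.49) and the stated constraint τ = µ.

```python
import math

def prism_exit_face_tilt(mu=20.0, n=1.49):
    """Equation 23 with tau = mu: prism exit face tilt angle Omega, in degrees."""
    tau = mu
    return 90.0 - math.degrees(math.asin(math.sin(math.radians(tau)) / n))

omega_face = prism_exit_face_tilt()
print(round(omega_face, 1))         # about 76.7 degrees, as quoted
print(round(90.0 - omega_face, 1))  # about 13.3 degrees from the vertical axis
```

Substituting σ = 90 - Ω back into equation 24 recovers τ = 20 degrees, confirming the three relations are mutually consistent.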
Accordingly, every arriving ray is so returned via lens system 582 to the original emitting aperture of light source panel 288 (or, depending on its color, to 284 if green, or 286 if blue) by the proper reverse action of dichroic combining cube 274. When the illustrative DMD mirror is switched to its on state position 584, as also shown in detail 580 of FIG. 32, incoming axial ray 600, and its extension 612 refracted in the DMD air space 585, reflects from the DMD mirror as output image ray 616, whose angle with the DMD substrate plane 610 is ε (or 90 - τ), as given in equation 20 earlier. The angle τ has already been constrained by Ω to match µ of 20 degrees. Hence, under these constraints, ε is 70 degrees, and the ray refracts into prism block 572 as continuing ray 672 that reaches prism face 620 at point 670. This ray path is shown as the bolder ray lines in FIG. 32, but the designations are omitted for lack of space. Instead, this same detail is described more cleanly and separately in FIG. 34, which is a schematic isolation of the ray paths taken in prism block 572. The path of output ray 672 in prism block 572 between boundary points 587, 670 and 624 involves a sequence of geometrical relations, reflections and refractions coordinated to assure that output ray 676 emerges along the axis of projection lens 276.

6.0 Projection System Illuminator Integration Issues

Illustrative examples of the integration of mono-colored light source panels into reflective and transmissive LCD projection systems were given above in FIGS. 17-30, and for DMD projection systems, in FIGS. 31-34. These examples were based on today's red, green and blue LED performance, which is about 20 lumens for 0.5 mm to 1.0 mm chips. Over time, this performance is expected to rise, if history is a fair indication. The number of lumens per chip has risen 35% per year, each year, since 1965.
As LED output performance improves, and chips become smaller, certain design preferences and device densities may change, or the total number of RGB lumens produced by any projection system may increase. It may also become feasible to make LEDs considerably larger than is possible today. The incorporation of larger LED elements may also change the exact way in which the present invention is utilized.

Moreover, all the present examples use a non-imaging angle transformation means, typified by sub-unit 289 in FIG. 17. It is equally practical to use a conventional imaging relay to form an image of the light source panel onto the aperture of the LCD or DMD, as is often done in the commercial projection systems of today. These issues are discussed in more detail immediately below.

6.1 LED Arrays and Array Density

In the image projection system variations of FIGS. 17-22 and 24-33, three separate LED-based light source panels 284 (green), 286 (blue) and 288 (red) were designated by the illustrative form of FIG. 15, in which the light emitting devices 70 were arranged in a two-dimensional array, the space between emitters 70 made about equal to (or less than) the chip size of the emitters themselves. This is thought to be the densest practical packing of such semiconductor light emitting diode substrate chips before the build-up of un-dissipated heat associated with the electrical power used to produce the light emission interferes with the amount of light generated and otherwise degrades the device lifetime.

From the standpoint of maximizing lumens emitted per square millimeter, it is advantageous that the light emitting diode chips 70 be packed even more densely, if possible, than the 25% chip density in the arrangement of FIG. 15. Having such a sparse array density, however, is not fundamental to the operation of any of the projection system configurations described above.
All that is required at the systems level is that the lumens be applied in an f/2.4 cone to the entire surface area of the illustrative 1.2" diagonal 4:3 aspect ratio LCD or DMD aperture. It is possible that this can be accomplished satisfactorily using a single LED substrate.

6.1.1 Potential Usage of Giant Single Chip LEDs

As an example of this, the special case is considered where the manufacturers of LEDs achieve devices that are 5 mm x 5 mm and greater, having lumen outputs proportional in area to those of the 0.5 mm x 0.5 mm and 1 mm x 1 mm chip sizes being manufactured today. In this case, rather than using the array structure of FIGS. 14-16, there could be practical single monochromatic LED versions of light source panels 284, 286 and 288. The principal advantage of the single LED cavity system detailed as 308 in FIG. 18 is that the reflecting cavity 228 above the LED chip 70 acts as a preliminary angle transformer, converting the wide angular range (+/- 90 degrees) of emission escaping successfully from the LED substrate (240 as in FIG. 14) into the cavity media 238 and 217, and therefrom into the smaller range of output angles (+/- 22.5 degrees) allowed to escape the prism faces of layers 58 and 60. Without such integrated optical layers 58 and 60 above the LED, output angles would remain +/- 90 degrees into air, such as air space 41 above LED cavity medium 238 and output aperture 42 as shown in FIG. 14. The main problem with using such raw wide-angle LED output efficiently is that it is difficult, if not impossible, to collect all the emitted optical power using most mirror and lens based angle transforming optical systems, such as those employed in FIGS. 17, 19-22 and 24-33. The maximum useful acceptance angles of well-designed optical systems are preferably less than +/- 30 degrees.
One exception to this is given by the prior art class of dielectric non-imaging angle transformers that were discussed earlier in regard to possible mathematically shaped sidewall curvatures for the reflecting cavities of FIG. 9. The input apertures 698 of such dielectric angle transformers 700 can be optically coupled to each monochromatic LED's cavity medium, 718 and 238, and then added to tri-color light source cube 274 as shown schematically in FIG. 35. For such a tri-color cube 274 to be substituted for the ones used in the illuminating projection systems of FIGS. 17, 19-22 and 24-33, the output aperture 702 of each dielectric angle transformer 700 should match the constraints established in the earlier projection system examples, seeking about 1000 lumens over a 13.25 mm x 9.94 mm rectangular aperture limited to f/1.3 (+/- 22.5 degrees along axes parallel to the aperture edges). Achieving this performance with a single LED chip 70 and minimum power loss requires the cavity aperture 710 (Wx 712 x Wy 714) to be 5.167 mm x 3.80 mm. The LED substrate size therefore can be as large as about 4.5 mm x 3.2 mm, so as to allow some minimum surface area for diffusely reflecting cavity sidewalls 718 that, along with aperture volume diffusing layer 720, provide the angle and polarization randomization needed for efficient cavity recycling when that mechanism is needed, and otherwise improve output uniformity. If LED chips sized 4.5 mm x 3.2 mm become available that emit, allowing some leeway, 1300 lumens, the monochromatic light emitting structures 722 of FIG. 35 can be used in place of the thin light source panels 284, 286 and 288 described by the structure of FIG. 15. With approximately 10 lumens output assumed from the apertures of the light source panels of FIG. 15 (assuming 50% efficiency and 20 lumen LEDs), about one hundred and thirty 0.5 mm by 0.5 mm devices are needed to supply the target 1300 lumens.
If made as a single 4.5 mm x 3.2 mm LED substrate today, that substrate, ignoring the likely thermal degradations, would output about 1080 lumens (20 x 9 x 6), which would just barely be enough to meet the target value, assuming the same 50% polarization recycling efficiency as above for the light source panels of FIG. 16. While the single LED monochromatic f/1.3 light source system 724 shown in FIG. 35 simplifies assembly, replacing 130 LEDs conceptually with 1 large junction light emitting device, the cost of doing so is the added length of the non-imaging dielectric angle transformer 700, which in this example would be given ideally by equation 26, with do the semi-diagonal of transformer 700's output aperture 702, di the equivalent semi-diagonal for light source aperture 710, and βm transformer 700's output angle just inside its dielectric medium 726 (having refractive index n), βm being given by the value of Sin-1[(Sin β)/n], using the diagonal value of β.

L = (di + do)/Tan βm   (26)

Consequently, the length L, 728, of the ideal transformer 700 having the indicated rectangular cross-section, as in FIG. 35, is 11.05/Tan(14.8), or 41.8 mm, about 1.6 inches. Such a large protrusion is probably not preferable in most commercial applications requiring compactness. Light source panels 284, 286 and 288 with the structure of FIG. 16 are, by comparison, only a few millimeters in their total thickness (T' in FIG. 16).

Several effective truncation methods have been reported for such dielectric angle transformers that reduce their ideal length in exchange for only minor reductions in their ideal performance, but even after such approximations are made, the net transformer protrusion will still be a noteworthy one, and significantly greater than that of the structure of FIG. 16. An efficient means of angle transformation from the substantially +/-90 degree emission of light emitting diodes to some tighter angular range is called for in most practical systems applications.
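Equation 26 can be checked against the quoted 41.8 mm length. The Python sketch below assumes the apertures from this example (output 13.25 mm x 9.94 mm; input taken as the 4.5 mm x 3.2 mm LED size, which reproduces the quoted 11.05 mm semi-diagonal sum) and β = 22.5 degrees in acrylic (n = 1.49); these identifications are inferred from the text, not stated explicitly in it.

```python
import math

def transformer_length(in_dims, out_dims, beta_deg, n=1.49):
    """Equation 26: L = (di + do)/Tan(beta_m), with semi-diagonal apertures in mm."""
    di = 0.5 * math.hypot(*in_dims)                           # input semi-diagonal
    do = 0.5 * math.hypot(*out_dims)                          # output semi-diagonal
    beta_m = math.asin(math.sin(math.radians(beta_deg)) / n)  # output angle in dielectric
    return (di + do) / math.tan(beta_m)

L = transformer_length((4.5, 3.2), (13.25, 9.94), 22.5)
print(round(L, 1))  # about 41.6 mm, matching the text's 41.8 mm to within rounding
```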
Manufacturers of commercial light emitting diode packages often provide a simple spherical convex output lens surface option as part of the package, usually as a shaped extension of the encapsulating dielectric medium surrounding the emitting substrate or substrates. Doing so definitely increases the amount of usable output light yielded from the device as compared with the amount of light yielded from a flat (no-lens) output surface, but the efficiency of angle conversion is low. A simple one-surface lens is not able to handle as wide an angular input range as the diode emits, and because of this, a large fraction of the emission is transmitted outside the angular range desired. In addition, the beam profile produced by this simple lens is generally intense on axis, with falling flux density away from beam center.

6.2 Compactness of Non-Imaging Type LED Illuminators

The compactness of the angle transformation approaches embodied in the systems of FIGS. 17, 19-22, 24-33 and 35 is due to the fact that they have been made to operate in two sequential angle transformation stages, as generalized schematically in FIG. 36: a first stage 756 that deliberately converts +/-90 degree (generally 752) light (754) to +/- 22.5 degrees (or any angle substantially in that range, 758), followed as input 760 to a second stage 762 that then converts the +/-22.5 degree light (758) to the angle of use 766, which for the examples demonstrated so far has been +/-12 degrees in air (f/2.4). First stage transformation 756 includes the cylindrical lens pair method of FIG.
18 that was designed to achieve a different angular range in each meridian. Only by means of such a two-stage approach 770 can two different means of angle transformation 756 and 762 be used to achieve a large enough amount of angle transformation efficiently, and more compactly than with any single transformation stage.

The un-truncated length of first stage dielectric angle transformer 700 as it was used in the example of FIG. 35 is 41.8 mm (1.6"). Had the same dielectric angle transformer 700 been designed instead to perform the complete +/-90 to +/-12 degree angle transformation that has been required in all the above examples, its dielectric length alone would have to be 128 mm, which is about 5 inches. This same light source coupled dielectric angle transformer element 724 in FIG. 35, to be most effective, would have to be placed directly behind the transmissive LCDs to be so illuminated, as for example in FIGS. 24-25, and could not be used efficiently as an illuminator, for example, with the systems of FIGS. 26-27, which require a sufficient working distance that allows the tri-color cube to be placed in between the transformer output and the corresponding LCD aperture. Dielectric angle transformer 700 has no effective working distance, as light 744 begins diverging directly from output aperture 702. The further the device aperture to be illuminated is separated from transformer output aperture 702, the more that aperture will be inefficiently over-filled by a larger field of light. So, if the 5-inch long single stage monochromatic dielectric angle transformer 724 in FIG. 35 replaced folded transformation system 454 in the system of FIG. 24, the total system length in FIG. 24 would increase by more than 3 inches.
6.3 Imaging Type LED Illuminators

The classic means of angle transformation in any optical system is the imaging lens relay, wherein one or more lenses are employed to relay a sharply focused and magnified (expanded or contracted) image of an object to a displaced image plane. Such a finite imaging system could be used in place of the single aspheric lens of FIG. 18 or the two-lens system of FIGS. 32-33 to convert the +/-22.5 degree light produced by light source panel 288 to the +/-12 degree light needed at the LCD or DMD apertures, as in the above examples. When doing so, the light source panel aperture is placed at a suitable object plane and the magnified image is relayed to the corresponding image plane, depending on the system's design parameters, positioned to coincide with the aperture of the LCD or DMD. The principal drawbacks of this approach, compared to those used in the present inventions, are a comparative lack of compactness and of spatial uniformity. The reason for the relative lack of compactness is that efficient imaging systems require several lenses, with object and image separated from the lenses' focal planes by finite distances. The reason for the relative lack of uniformity is that the imaging system's image is a sharply focused replica of the object's uniformity. Any spatial brightness structure occurring within aperture 102 of illustrative light source panels 288 (as in FIG. 17) or 248 (as in FIG. 16) would be faithfully reproduced within the illuminated aperture of the LCD or DMD, which is not preferable. On the other hand, the pseudo-Kohler structures of second stage angle transformers 762 used in the systems of FIGS. 17-22 and 24-33 do not form sharp images, but rather allow the light at every point on the image to be a mixture of light from every point on the object. Because of this, point-to-point brightness variations on the object whose light is to be transformed in angle are not transferred to the resulting image.
7.0 General Lighting Applications

The same advantages of LED lighting that make it an attractive alternative to arc discharge lamps in video projectors lead to equally attractive alternatives to many types of conventional light bulbs in a broad range of general lighting applications. Specifically, the thin two-dimensional mono-colored LED array-based light source panels illustrated in FIG. 15 can also be used directly in such single color lighting applications as traffic signaling, warning flashers, and special effects lighting. These same panels can also be made to incorporate white LEDs or LED triads (one red, one green, and one blue) to provide RGB rather than mono-colored illumination. When white LEDs are incorporated in these cases, they may be either of the fluorescent phosphor coated type or the newer tri-color stacked LED design. And, the mono-colored panels can also be mixed together, as they were in the projection systems of FIGS. 17-22 and 24-33 using the dichroic principles of FIG. 16, to provide concentrated sources of multi-colored illumination for higher lumen lighting applications such as automotive head lighting, theatrical spot lighting, architectural luminaires, and LCD backlighting. After a more detailed description of the color mixing process as applied to direct illumination, each general lighting application is explored by way of a specific example.

7.1 Color Mixing for Efficient High Lumen Multi-Colored LED Illumination

Prior art dichroic prism cubes and Philips prism arrangements shown in FIG. 16 have been well described for purposes of separating a single free-air input beam of white light, as created by white light bulbs, into three primary-colored output beams. They have also been well described as a means for recombining such pre-separated primary-colored beams into a single output beam mixture. Their use in illumination with the mono-colored beams produced by the LED light source panels of FIG.
15, however, represents a special case, as the light sources and the light mixing entity are integrated as a single unit that generates the useful output illumination. Moreover, the instantaneous beam color of the output illumination is determined by the exact amount of electrical power applied to each of the three constituent light source panels attached to the prism surfaces.

7.1.1 Light Source Panel Integration with Color Mixer

The integration of light source panels with an efficient color-mixing element is depicted for the traditional prism cube structure in FIG. 37A, showing perspective view 862 as well as side and top views 774 and 776, and for the traditional Philips prism arrangement 301 of FIG. 16 in FIG. 37B. Either integration is referred to in general as light source cube 340. Light source cube 340, in one illustrative form, is composed of 4 substantially identical Porro (45 degree - 45 degree - 90 degree) prisms made of glass or plastic that are cemented together as a monolithic block. Prior to cementing the interior prism surfaces, dichroic coatings of type 278 and 280 previously described are applied to the faces of any two opposing prisms, as shown in detail 216 of FIG. 16. The result is color-mixing cube 772, which is then integrated with three mono-colored light source panels (i.e. green, 284; blue, 286; and red, 288) as shown in detail 862 of FIG. 37. The thin, monolithic light source panels are preferably glued directly to the three adjacent surfaces of cemented cube 772, with glue applied only to the light source panel's rectangular periphery, just outside its emitting aperture. As explained earlier, dichroic coatings 278 and 280 both transmit the light of light source panel 288. Dichroic coating 278 also transmits the light of light source panel 286 and reflects the light of light source panel 284. Similarly, dichroic coating 280 also transmits the light of light source panel 284 and reflects the light of light source panel 286.
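The transmit/reflect roles just described for coatings 278 and 280 can be checked with a small routing sketch. The dictionary encoding and function names below are hypothetical, introduced only to make the logic explicit:

```python
# Hypothetical encoding of the dichroic behavior described in the text:
# coating 278 transmits red (panel 288) and blue (panel 286), reflects
# green (panel 284); coating 280 transmits red and green, reflects blue.
BEHAVIOR = {
    ("278", "red"):   "transmit",
    ("278", "blue"):  "transmit",
    ("278", "green"): "reflect",
    ("280", "red"):   "transmit",
    ("280", "green"): "transmit",
    ("280", "blue"):  "reflect",
}

def reaches_output(color):
    """Red passes straight through both coatings; green is folded toward
    the output face by 278 and passed by 280; blue is folded by 280 and
    passed by 278. Either two transmissions, or exactly one reflection
    plus one transmission, delivers the ray to the output face."""
    folds = sum(1 for c in ("278", "280") if BEHAVIOR[(c, color)] == "reflect")
    passes = sum(1 for c in ("278", "280") if BEHAVIOR[(c, color)] == "transmit")
    return passes == 2 or (folds == 1 and passes == 1)

print(all(reaches_output(c) for c in ("red", "green", "blue")))  # True
```

The check confirms the design intent: every panel's color has exactly one path to the cube's output face, which is what makes the three mono-colored panels combine into a single mixed beam.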
For most visible light applications of interest, the three light source panels will each supply light of a primary color (i.e. red, green and blue). In some applications, light of any three distinctly different wavelength bands can be used, even in the infrared. Porro prisms are defined by their 2 equal 45-degree face angles and their one 90-degree apex angle. Optionally, thin-film coatings can be applied to each outer surface of cemented prisms 772: coatings 790, 792 and 794 on the surfaces containing each light source panel, coatings 796 and 798 on the side faces, and coating 800 on the cube's output face. Coatings 790, 792 and 794 can be applied either to the cube surface area outside the area of each light source panel, or as a continuous coating covering the entire cube face in between the cube surface and the light source panel coupled to it. When coatings 790, 792 and 794 surround the light source panel apertures, they may be made absorbing black or specularly reflective (metallic or dielectric). When coatings 790, 792 and 794 are made to underlie the light source panels, they must themselves be dichroic: transparent to the wavelength band of the light source panel just above them, and reflective to one or both of the two other light source panel colors. Side coatings 796 and 798 can be made either absorbing black or specularly reflective (metallic or dielectric). Front face coating 800 is a dielectric anti-reflection coating to reduce output Fresnel reflection loss. The front face may also be affixed with either an absorption polarizer or a reflective polarizer, as described earlier. Light source cube 340, coated on its outer surfaces or not, may be combined advantageously with any separate optical system 802, as shown schematically and generally in side view 774 and top view 776 in FIG. 37A.
In this case, the input aperture of optical system 802 receives light beam 780 directly from light source cube 340, generally in air, and then processes this light so as to output light beam 804, whose angle, polarization, color, and spatial uniformity may have been purposely altered. As a few examples of the many that are possible, optical system 802 may be a lens, a series of lenses, a mirror, a combination of lenses and mirrors, a transparent or reflective diffuser (bulk, surface or holographic), a polarizing system, or a facetted plate. In all previous application examples, optical system 802 is stage two angle transformer 762 (as in FIG. 36). Light source cube 340 in another illustrative form, shown schematically in FIG. 37B, is composed of three prisms made of glass or plastic, two of which are cemented together as a monolithic block, the third separated from the cemented pair by a small air gap 217. The purpose of air gap 217, as described earlier, is to allow total internal reflection of blue input ray 255 from integrated light source panel 286, and also of red input ray 251 from light source panel 288. Output rays 862 emit through the aperture of prism 281 in a beam equivalent to that of FIG. 37A.

7.1.2 Color Mixing Efficiency

The total number of lumens supplied within composite beam 780 from light source cube 340 is given by equation 27 as the sum of lumens from each monochromatic light source panel, wherein n_rx, n_ry, n_gx, n_gy, n_bx and n_by are the total numbers of LEDs as counted along each edge of the respective light source panels, L_r, L_g and L_b are the respective numbers of lumens generated at each light source panel aperture (after any and all path length and absorption inefficiencies, such as those associated with the multiplicity of reflections and refractions occurring within the layered structure of FIG.
16), and f_r, f_g, and f_b are the respective mixing fractions of each primary color component (f_r + f_g + f_b = 3) established by setting the electrical power applied to each light source panel, and thereby, to the constituent LEDs within. Nominally, f_r = f_g = f_b = 1.

L_w = n_rx n_ry L_r f_r + n_gx n_gy L_g f_g + n_bx n_by L_b f_b   (27)

Several examples of direct applications of light source cube 340 follow, without detailed descriptions of the optical systems 802 associated with them.

7.2 Example 1: Color-Mixed Automotive Head Lighting Based On FIG. 38

One direct lighting application example of light source cube 340 is as an alternative light source for use in the headlights, brake lights or backup lights of an automobile, bus, train, airplane or related vehicle currently using incandescent or halogen light bulbs surrounded by a reflector for that purpose, as illustrated generally in schematic representation 806 in FIG. 38. One miniature light source cube 340 is used with a clear-colored facetted lens (and possibly a diffuser) 811 to spread the light into the auto industry's standard viewing directions. The exact proportion of red, green and blue light is set by electronic power controller 818, which controls the lumens generated by each light source panel 284, 286 and 288 separately. Power controller 818 may include preset power ratios associated with head light color preferences set by the manufacturer to increase customer appeal or to improve driving visibility under specific driving conditions (e.g. standard white, blue-white, daytime driving, nighttime driving, dusk time driving, snow, rain, or fog) that may be activated automatically via microprocessor 820, or at the driver's command. Automatic activation of lighting cube 340's optimum brightness and color is controlled by microprocessor system 820 linked to power controller 818 and, optionally, to driving visibility detection system 822.
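The lumen sum of equation 27, with per-color mixing fractions of the kind a power controller such as 818 would set, can be sketched numerically. The panel sizes and per-pixel lumen values below are illustrative assumptions, not figures from the text:

```python
def total_lumens(panels, fractions):
    """Equation 27: L_w = sum over colors of n_x * n_y * L * f, where
    n_x and n_y count LEDs along each panel edge, L is the lumens
    yielded per illuminating pixel, and f is the mixing fraction set
    by the electrical power applied to that panel."""
    return sum(nx * ny * L * fractions[color]
               for color, (nx, ny, L) in panels.items())

# Illustrative numbers only: 8x8 panels, 10 lumens per illuminating
# pixel of each color, nominal mixing fractions f_r = f_g = f_b = 1.
panels = {"red": (8, 8, 10.0), "green": (8, 8, 10.0), "blue": (8, 8, 10.0)}
nominal = total_lumens(panels, {"red": 1.0, "green": 1.0, "blue": 1.0})
print(nominal)  # 1920.0
```

Shifting the fractions (keeping their sum at 3) re-colors the beam without changing this maximum white-point total only when the three panel products n_x·n_y·L are equal, as in this symmetric example.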
Visibility detector 822 is any optoelectronic system that samples and analyzes the air space through which the vehicle is passing as a means of determining best lighting conditions. A passenger side front view of the head lighting cluster in a modern automobile is shown schematically in detail 808, FIG. 38, indicating the right turn indicator system 810 (amber light), the low headlight beam 812 (white light), the high headlight beam system 814 (white light) and the surrounding housing structure 816. Typically, one incandescent or halogen bulb plus a lens or filter to set color and angular directions is used within each lighting system 810, 812, and 814. Conventional light bulbs used in such applications supply between 1500 and 2000 lumens of white light for head lighting, and less for the other lighting functions. Similar treatments exist at each side of the rear of the vehicle for turn signaling (amber), brake indication (red), and back up warning (white). With its multi-color capability, light source cube 340 potentially performs one or more lighting or warning functions using the same tri-color element. For example, the same light source cube can serve simultaneously as a brake light (red) and a backup light (white), or simultaneously as a headlight (white) and a fog light (amber). Other advantages of solid-state headlight system 806 would be the shape and brightness uniformity of its rectangular beam cross-section, its simplicity and compactness, and its 100,000-hour service life. It is generally difficult to engineer the beam shape and uniformity of conventional head light systems because of the amorphous size and shape of the incandescent filament or the halogen discharge. The result on the road is often a considerable compromise in both shape and uniformity.
On the other hand, beam shape and the resulting roadway lighting pattern are easy to engineer with light source cube 340 by simply changing the size and shape of its constituent light source panels 284, 286 and 288. Typically, the light emitted non-directionally by conventional light bulbs is partially collected by a concave and/or faceted specular reflector that redirects the emitted light rays into an output beam whose angular directions may be further influenced by a lens element, so that the result is an output beam having the spatial and angular characteristics specified for the task at hand, usually by designated governmental standards-setting organizations such as the Society of Automotive Engineers (SAE). Light source cube 340 is therefore not a direct replacement for conventional light bulbs in such conventionally designed headlights. Rather, and as depicted in detail 806, light source cube 340 is at the core of a new automotive head lighting system 806 designed to make best use of light source cube 340's +/-22.5 degree (+/-β degree) angular cone and rectangular beam cross-section, while simultaneously meeting the associated industry standards for roadway illumination.

7.3 Example 2: Color-Mixed Theatrical and Studio Lighting Based On FIG. 39

Another direct lighting application of light source cube 340 is as an alternative high power light source for the theatrical lighting systems used to spot light or flood light performance stages, studios, or remote locations. Similarly improved spot and flood light instruments are also useful for shorter throw distance lighting applications in hospital, doctor's office and dental office operating theaters. In both cases, large amounts of visible lumens (2000 to 30,000 and more) are delivered to a performance area with especially smooth brightness uniformity. The basic configuration 826 of light source cube 340 in such lighting applications is illustrated schematically in FIG.
39 for one of many possible light block 828 arrangements. In this particular illustration, a 3x3 array of light source cubes 340 is deployed to generate 9 times the lumens of any one light source cube 340. The cubes are mounted on interconnection board 830, which routes power to the individual red, green and blue light source panels on each constituent cube from electronic power controller 832, which can be further controlled by microprocessor 834 and optional remote control 836. The backing layers 838 of interconnect board 830 provide additional heat sinking, and a means of attachment to the lighting instrument housing 840. Light from the array is a composite beam having an overall beam cross-section matching that of the array itself. The angular characteristics are set by light source cube 340, +/-β degrees along each of the array axes (+/-22.5 degrees as in all previous examples). Lens or lens system 842, shown in FIG. 39 just for example as being plano-convex, may be one or more spherical, aspheric or Fresnel lenses, whose general purpose is to narrow or widen the intrinsic output angle of light source cube 340. The specific number of light source cubes to be applied in array 828 depends on the lumens yielded per cube, and the total number of lumens needed for the lighting task at hand. Total lumens can then be satisfied either by increasing the size (and lumen output) of a single light source cube (and its constituent light source panels), or, in such very high lumen applications, by increasing the number of cubes used in the array. The light beam delivered by each light source cube 340 is spatially uniform across its beam profile, and concentrated as a +/-22.5 degree (+/-β degree) angular cone, which is nearly ideal for flood lighting, depending on the distance between source and performance stage.
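The array-sizing rule just stated, that total lumens scale with the number of cubes, can be sketched as a small sizing helper. The function names and the 2000-lumen-per-cube figure are illustrative assumptions:

```python
import math

def cubes_needed(target_lumens, lumens_per_cube):
    """Number of light source cubes in an array (as in FIG. 39) needed
    to reach a target lumen output, assuming lumens add linearly
    across the array."""
    return math.ceil(target_lumens / lumens_per_cube)

# Illustrative: 2000-lumen cubes for a 30,000-lumen theatrical flood,
# the upper end of the range cited in the text.
n = cubes_needed(30000, 2000)
side = math.ceil(math.sqrt(n))  # smallest square array that holds them
print(n, side)  # 15 cubes, fitting within a 4x4 array
```

The same budget could instead be met by enlarging a single cube's light source panels, which is the other option the text names; the array route keeps each cube (and its thermal load) small.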
The narrow beam profile simplifies the additional optics that must be used within the instrument to provide further beam angle control, such as for spot lighting, and increases the optical efficiency, delivering more lumens to the spot area than with conventional lamps and optics. Spot lighting often requires beams as narrow as 10 degrees, and typically less than 20% of the lumens produced by the incandescent or halogen lamp are delivered in the spot. Aside from improved efficiency, the multi-color performance of light source cube 340 and lighting system 826 completely eliminates the need for the mechanical color wheels and gelatins needed in conventional lighting instruments 824 to provide color of choice. Filters and gelatins react unfavorably with the heat they absorb, and degrade in their purity over time. Color selection with the instant invention is electronic, exact, and instantaneous. In addition, and specifically in the case of theatrical lighting, the need is often to turn such lighting on and off instantly, repetitively, or fade to black, all of which is impossible without risking damage to standard high-wattage incandescent and halogen lamps. The light source panels used in each light source cube, however, can be instantly switched on and off, and dimmed completely to black without any change in beam color during the process, or any degrading effect on service life. Dimming conventional lamps by reducing their electrical power often changes the lamp's whiteness significantly, and thereby even the intensity of light that passes through the color filters being used. For both theatrical and medical uses, reliability of service is a particularly critical factor, as frequent changing of burnt out light bulbs is not an attractive option during either a theatrical performance or a medical procedure.
To avoid just such unwanted interruptions in service, conventional lamp usage is logged, and the conventional lamps replaced as a precaution well before the manufacturer's estimated end of life has been reached. Such lamp replacements are costly and time consuming, as well as, in the case of theatrical lighting, potentially dangerous, as the theatrical lamps are usually located high and at great distances from their point of use.

7.4 Example 3: Color-Mixed Outdoor Luminaires, Based On FIG. 40

Yet another direct lighting application of light source cube 340 is as an alternative light source system 846 for outdoor luminaires, as represented schematically in the illustrative roadway lighting example of FIG. 40. A wide variety of similar outdoor lighting applications for area luminaires, parking luminaires, and architectural luminaires, as a few examples, follow the same approach. Standard incandescent or halogen lamps are replaceable with one or more light source cubes 340 in compatible lighting units 846 that sit atop roadside or area lighting utility poles 848. System 846 shows a single light source cube 340, but in applications requiring larger numbers of lumens than can be generated by any single light source cube 340, arrays 828 of light source cubes 340, as were introduced in FIG. 39, can be installed. While the actual luminaire 850 can be made significantly more compact than the one shown in FIG. 40, with lens 864, lens cover or diffuser 854, support 836, light source cube 340 and electrical control connection wires 866, the example of 846 is made to resemble one popular housing shape (cobra) of present roadway lighting usage. A major performance difference between general lighting system 844 represented in FIG.
40 and conventional designs is that the roadway illuminating beam 852 from any luminaire 850 is directed generally downwards and toward the roadway (or area) to be illuminated within the specific angular cone of +/-β (+/-22.5 degrees) of light source cube (or cubes) 340. The advantage of such directed (spot) lighting is that fewer lumens (and watts) are required to provide the roadway (or area) brightness required, and far fewer lumens are wasted lighting areas falling significantly outside the illumination field of interest. While this cone 852 can be enlarged (or contracted) by supplemental luminaire optics 854 and 864, which may be a lens, a diffuser, or both, the pure output beam from light source cube 340 itself may provide sufficient ground coverage due to its height 856 above the ground, as shown in FIG. 40. If a single light source cube 340 using constituent red, green and blue light source panels having square cross-sections were used, the cube's square illumination footprint 862 on the roadway would contain substantially all lumens generated. For example, the +/-22.5 degree output beam from light source cube 340, elevated a height 856 above the area to be illuminated, deposits substantially all its generated lumens within a 25 foot by 25 foot footprint. Widening illumination cone 852 to +/-40 degrees using a secondary lens or diffuser spreads the footprint to 50 feet by 50 feet. Many roadway and area luminaires in use today purposely flood very large areas with general lighting as a means of enhancing personal security and as a way to provide a facsimile of daylight conditions.
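The footprint figures above follow from simple beam geometry. The sketch below assumes a roughly 30-foot mounting height, which is consistent with the 25-foot footprint quoted for a +/-22.5 degree beam but is not stated explicitly at this point in the text:

```python
import math

def footprint(height_ft, half_angle_deg):
    """Edge length of the square illuminated footprint directly below a
    luminaire of given mounting height and beam half-angle:
    edge = 2 * h * tan(half-angle)."""
    return 2.0 * height_ft * math.tan(math.radians(half_angle_deg))

# Assumed 30-ft pole height (height 856 is not given numerically here):
narrow = footprint(30.0, 22.5)
wide = footprint(30.0, 40.0)
print(round(narrow, 1))  # ~24.9 ft for the +/-22.5 degree cone
print(round(wide, 1))    # ~50.3 ft after widening to +/-40 degrees
```

Both results land within a fraction of a foot of the 25-foot and 50-foot footprints cited in the text, supporting the 30-foot height assumption.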
The cost of doing so in lighting applications that don't require such massive area coverage is that large amounts of energy are wasted, and large amounts of unused light contaminate the nighttime sky, an annoyance to night sky viewing in some areas of the country known as light pollution. As one example of this illumination wastage, consider one commercial 250-watt metal halide luminaire that generates 23,000 lumens. Photometric data provided by the manufacturer (McGraw Edison) indicates that for a 30 foot mounting height, 1.4 foot-candles of light are delivered within approximately a 30-foot square area beneath the luminaire. Since a foot-candle is the number of lumens deposited per square foot, this means only 1,260 lumens of the 23,000 generated are being utilized in the 900 square foot area directly under the luminaire, an efficiency of less than 6%. One virtue of flooding only a specifically limited target area is that very little wasted light is directed into the higher angle field of view of the oncoming roadway traffic, thereby potentially increasing automotive safety. Current luminaires shaped in the form of lens cover 854 deliver light from the lens's entire surface, a large portion of which on occasion interferes with driver visibility, especially during rainy, snowy or foggy weather conditions where light scattering can diffuse this high angle light and actually decrease roadway visibility. As in the case of using light source cube 340 in automotive head lighting, the illuminating color (and brightness) can be adjusted for optimum visibility as a function of weather conditions if desired. Instead of only the fixed white illumination color (temperature) of lighting elements using conventional light bulbs, light source cube 340 could be controlled electronically to provide the needed lumens and a color matched to the weather conditions via the microprocessor control system of FIG. 38 or 39.
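The utilization arithmetic in the McGraw Edison example above can be reproduced directly; the function name is illustrative:

```python
def utilization(foot_candles, area_sq_ft, lumens_generated):
    """Fraction of generated lumens actually landing in the target area.
    A foot-candle is one lumen per square foot, so delivered lumens are
    foot-candles times area."""
    return foot_candles * area_sq_ft / lumens_generated

# Figures from the text: 1.4 fc over a 30-foot square area, from a
# 250-watt metal halide luminaire generating 23,000 lumens.
delivered = 1.4 * 30 * 30      # 1260 lumens in the 900 sq-ft area
eff = utilization(1.4, 30 * 30, 23000)
print(delivered, round(100 * eff, 1))  # 1260.0 lumens, ~5.5%
```

The computed 5.5% matches the text's "less than 6%" utilization, which is the wastage the directed +/-22.5 degree beam is meant to avoid.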
In this case, each roadway lighting unit could contain its own local photo detector and microprocessor, or the settings for all lights in a region could be controlled remotely. And, when using multiple light source cubes 340 within the same luminaire, a central cube can be used to flood a specific target area under the luminaire, as in FIG. 40, but satellite cubes can be tilted away from the central cube to increase the size of the area illuminated. In such designs, the satellite cubes can be turned off in weather conditions where their light actually reduces driver safety.

7.5 Example 4: Color-Mixed Traffic Signals, Based On FIG. 41

Still another direct lighting application of light source cube 340 is as a compact light source for use in traffic lights, illustrated generally in FIG. 41. One light source cube 340 and one or more lenses 872 and/or diffuser elements 874 can be used in a slim package to replace the standard bulky three-light red, amber, green light bulb and colored lens systems 870 in common usage around the world. While three lamp systems have become standard, they are bulky, and their need for periodic light bulb replacements creates both maintenance difficulty and nuisance. Alternative structures using a single long-lived multi-colored light source cube element 340 or equivalent maximize convenience and compactness, while completely eliminating a dangerous optical effect known as sun alias, a phenomenon of late afternoon lighting caused by direct sunlight reflecting inside the reflective housings of conventional traffic signals in their off state. These sunlight reflections are at times strong enough that the signal appears on, confusing oncoming traffic and creating the potential for dangerous intersection collisions. Use of a single signal element as in 876, however efficient, eliminates the spatial separation between the separate lighting signal units in 870, a separation which may help color blind daytime motorists distinguish which signal is activated.
For traffic control systems requiring discrete spatial separation between red, green and yellow signals, light source panels 284 (green) and 288 (red) can be used, along with 878 (amber), mounted on a common interconnect board 880, along with dedicated lenses 872 and diffusers 874, as in detail 882 in FIG. 41. Whether arranged separately or about a single light source cube, power is supplied to each mono-color light source panel by power controller 832, triggered by the standard traffic light timing circuit 884.

7.6 Color Mixing Elements with Improved Compactness

The compactness of light source cube 340 depends on the geometry of the light source panels, and to some extent, on their angular output characteristics. A most general geometric relationship exists between the edge size X of light source cube 340 in any given system application using cube 340 and the total number of red + green + blue lumens, L_TOT, needed within the cube's output light beam when all LEDs in the constituent light source panel arrays are operated at (or near) their maximum allowable power. This general relationship assumes that the full output beam from each light source panel exits from the cube's output aperture without interference by reflection from any other outside cube surface. The relationship also depends quantitatively on whether the output beam is limited by a downstream etendue constraint, such as exists when using this beam to illuminate the spatially and angularly constrained apertures of LCD or DMD spatial light modulators in the above image projection system examples. There are many equally important lighting applications, such as those of FIGS. 38-41, where the lighting systems impose no such constraint and use beams of particular rectangular cross-section (a by b) with edge angles β.
For these unconstrained cases, the edge size, X, of light source cube 340 in the plane perpendicular to its dichroic-coated reflecting planes, is given in equations 28 and 29 below, an alternative form of equation 13. The corresponding light source panel edge, u_x, is given in equation 30 in terms of square illuminating pixel size Δ, desired rectangular output beam aspect ratio descriptors a (along the x axis) and b (along the y axis), and the number of output lumens yielded per illuminating pixel, L_r, L_g, and L_b (with L_pt = L_r + L_g + L_b). When there is a system-level etendue constraint on u_x, the constrained value of u_x is substituted for the value determined by equation 30. The out-of-plane thickness of cube 340 is determined by applying these same equations to the smaller light source panel dimension, u_y, as in equation 31, with the distinction that equation 28 is used only in the dimensions of side view 774 in FIG. 37 and that X" = u_i + z in top view 776, with u_i made the light source dimension u_x or u_y as appropriate.

X = u_x (1 + z) / (1 - z^2)   (28)
z = 2 Tan β   (29)
u_x = Δ √[(a/b)(L_TOT/L_pt)]   (30)
u_y = Δ √[(b/a)(L_TOT/L_pt)]   (31)

As one of many possible examples of cube sizing equations 28-31, consider the light source cube size needed to supply 2000 lumens in a square beam (a = b = 1). Suppose that the basic illuminating pixel is also square and 1.5 mm on a side, 10 lumens are yielded per illuminating pixel (whether red, green or blue), and the beam angle β along each edge is +/-22.5 degrees in air (14.8 degrees in the cube) as in all above examples. From equation 30, u_x is 12.25 mm. Then from equations 28 and 29, X is 26 mm, and the complete cube is 26 mm x 26 mm x 26 mm.
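The worked example can be reproduced directly from equations 28-31. This is a numerical sketch of those relations as reconstructed here, using the in-cube beam angle of 14.8 degrees for β in equation 29, which is the reading consistent with the 26 mm result in the text:

```python
import math

def cube_size(delta_mm, a, b, L_tot, L_pt, beta_medium_deg):
    """Equations 28-31: light source panel edges u_x, u_y and cube edge
    X for an unconstrained output beam of aspect ratio a:b, pixel size
    delta_mm, and L_pt = L_r + L_g + L_b lumens per illuminating pixel."""
    ux = delta_mm * math.sqrt((a / b) * (L_tot / L_pt))   # eq. 30
    uy = delta_mm * math.sqrt((b / a) * (L_tot / L_pt))   # eq. 31
    z = 2.0 * math.tan(math.radians(beta_medium_deg))     # eq. 29
    X = ux * (1.0 + z) / (1.0 - z * z)                    # eq. 28
    return ux, uy, X

# Worked example from the text: 2000 lumens, square beam (a = b = 1),
# 1.5-mm pixels, 10 lumens per pixel per color (L_pt = 30), and
# beta = 14.8 degrees inside the cube medium.
ux, uy, X = cube_size(1.5, 1, 1, 2000, 30, 14.8)
print(round(ux, 2), round(X))  # 12.25 mm panel edge, ~26 mm cube edge
```

Running the same function with a = 2, b = 1 reproduces the 2:1 head lighting example that follows (u_x of about 17.32 mm).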
If, as another example, the same output beam's aspect ratio were 2:1, which might be more of interest in automotive head lighting applications where beam sweep across the roadway to be illuminated is preferably wider than beam sweep between the roadway and sky, u_x becomes 17.32 mm, and the complete cube, 36.77 mm x 18.38 mm x 18.38 mm. Yet, cube-sizing equations 28-31 represent a general case, and do not identify the most compact cube sizing possible. Equations 28-31 relate to a dichroic cube 274 that is always made larger than the constituent light source panels 284, 286 and 288 used, so as to avoid reflective interference of non-axial light rays. Illustrative side 774 and top 776 views of light source cube 340 were shown schematically in FIG. 37. Top view 776 of FIG. 37 is enlarged as 890 in FIG. 42, showing adjacent light source panels 288 and 284, as well as the cube's optional sidewall reflectors 798 and 796. Extreme angle light ray 892 leaves light source panel 288 at point 896 at angle β, which immediately becomes βm inside the cube medium. When this ray reaches output cube face 900 at point 898 it continues outwards as output ray 894 at angle β. The cube dimension clearing ray 894 is then the distance between points 898 and 902, which as above is X" = u_i + z. A means for increasing cube compactness is shown immediately below 890 in detail 904 of FIG. 42, which in the limit reduces cube size to just that of the light source panel aperture length. In top view 904, cube edge 799 is truncated along line 916, thereby forming new cube edge face 916. The dotted region 918 represents the cube medium removed in doing so. With this foreshortening of cube 340, ray 892 leaving point 896 at angle βm strikes foreshortened cube edge 916 at point 906, making an angle βm with the face plane. When β is 22.5 degrees, βm is 14.8 degrees, and the corresponding angle with face normal 920 is 90-βm or 75.2 degrees, which is almost twice the critical angle.
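The total internal reflection argument can be checked numerically with Snell's law. The refractive index n = 1.5 used below is an assumption, chosen because it is the value consistent with 22.5 degrees in air refracting to 14.8 degrees in the cube medium:

```python
import math

n = 1.5  # assumed cube index: sin(22.5 deg)/sin(14.8 deg) is ~1.5

# Refraction of the +/-22.5 degree extreme ray into the cube medium:
beta_air = 22.5
beta_m = math.degrees(math.asin(math.sin(math.radians(beta_air)) / n))

# Incidence on the truncated edge face, measured from the face normal,
# is the complement of beta_m (the ray makes angle beta_m with the face):
incidence = 90.0 - beta_m

# Total internal reflection occurs above the critical angle asin(1/n):
critical = math.degrees(math.asin(1.0 / n))
print(round(beta_m, 1), round(incidence, 1), round(critical, 1))
print(incidence > critical)  # True: ray 892 cannot escape by refraction
```

The computed values (14.8, 75.2, and about 41.8 degrees) match the text's figures, and 75.2 degrees is indeed almost twice the critical angle, so the foreshortened face acts as a near-perfect mirror.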
Consequently, incoming ray 892 cannot escape by refraction at point 906 and is totally internally reflected as reflected ray 912, as if from a near perfect mirror. Reflected ray 912 is the mirror of original transmitting ray 892, but makes angle -βm with cube face 916 and is directed out the cube's output face 900 at point 908. Since this truncation is performed on both sides, any change in the flux density occurs on both sides of the output beam in this top view perspective, and actually restores the beam's original flux density to the one it had across the light source panel aperture edge between points 896 and 897 by folding the edge rays inwards. The same means of cube size reduction is applied in details 924 and 926 of FIG. 43 to the cube's side view perspective 774, as shown originally in FIG. 37. Detail 924 shows a truncation applied to the cube's original output face 900, foreshortening it to plane 928 including points E, C and G. Once again, the dotted portion 930 represents the cube medium removed in doing so. In this instance, illustrative ray 932 leaves point D on green mono-colored light source panel 284 and enters the cube medium at angle βm to cube face surface normal 934. Ray 932 travels along line D-E a short distance until reaching face point E at angle βm to the face surface. As above, the corresponding angle of incidence with the face normal significantly exceeds the critical angle for total internal reflection in the medium. Accordingly, ray 932 does not refract at point E as an output ray, and is reflected to point F on dichroic reflecting face 278. Since dichroic reflecting layer 278 is an efficient reflector of green light, ray 936 is reflected at F towards point G on output face 900 as ray 938. Ray 938 makes angle βm with output face normal 940, and as such, refracts into air at point G with angle βair.
The same behavior applies to rays emitted from symmetrically disposed blue light source panel 286, except these rays reflect from dichroic reflecting layer 280 rather than 278 which is transparent to them.Detail 926 in FIG. 43 shows the compacting effect of performing a second truncation to cube face 942 on which light source panel 288 is mounted. For-shortening cube face 942 to plane 944 and re-locating light source panel 288 from 942 to 944 has an equally beneficial effect to the one above, as illustrated by ray path H-I-J-K through the cube medium. Extreme ray 946 leaving light source panel 284 at point H ordinarily would have exited cube face 900 at point L, follows alternative path H-I-J-K and exits truncated cube face 928 at point K at an output angle from flipped in sign (direction) from + βmto - βmjust as was the case with ray 932 in detail 924. The effect of such a redistribution on spatial an angular uniformity across the output face 928 aperture is symmetrical and beneficial in that it concentrates the output flux to an output beam aperture that approaches that of the original aperture of each light source panel.Still another useful truncation to cube 340 is illustrated in FIG. 44 , for-shortening cube face 948 to plane of 950 and relocating light source panel 284 as shown. This improvement, first illustrated in detail 952 brings light source panels 284 and 288 into closer proximity at truncated cube corner point 954. The effect of doing so further consolidates the flux distribution of effected rays on truncated output cube face 928. Illustrative ray paths A-P-Q and D-S-T fall as expected between aperture ray lines 956 and 958 on output face 928. Detail 960 shows two illustrative extreme ray paths for light source panel 288. Ray path X-Y occurs in the final region of the original light source cube 340 left to be truncated.This final truncation of light source cube 340 is shown in detail 964 of FIG. 45 . 
A size comparison between the fully truncated version 966, of light source cube 340, and the originally over-sized version 340 is shown in detail 962. The maximum size improvement from the cube's side view perspective is found from equation 20 as the factor (1+z)/(1-z2) where z = 2Tanβ. When b is 14.8 degrees in the cube medium, the potential improvement factor becomes 2.12, which means that a cube ordinarily 2.12 times the edge size uiof the light source panels becomes about ui. From the perspective of cube area, the fully truncated cube is reduced in size by a factor of 4.In lighting applications requiring fewer total output beam lumens than those generated with the smallest practical illuminating pixel size D, the smallest permissible LED chip sizes LL ( 236 as in FIG. 14 ) and the maximum allowable electrical power per constituent light emitting diode, any one or all of the following measures can be taken to set the lower level needed: the electrical power applied can be reduced (dimming), the illuminating pixel size can be increased, and/or the LED chip size can be reduced. 7.7 Example 5: Multi-Colored Light Bars and LCD Backlighting Based On FIGS. 47-50 Light source cubes 340 (or 966 ) and light source panels 970 ( 284 , 286, or 288 ) can be combined advantageously with clear plastic light pipes to provide high lumen illuminators for a wide variety of efficient LCD backlights.Long, small diameter cold cathode fluorescent tubes (CCFTs) are the light source of choice in most of today LCD backlights. One or more of these tubes are coupled into one or more edge of a thin, rectangular light distribution system placed behind the LCD screen to be backlit. White light is emitted from the entire surface of the CCFT into every angular direction. 
A reflector is used to direct this light into the light distribution system, typical a clear plastic plate.Some approaches proposed for replacing fluorescent tubes with an array of LEDs would arrange the LEDs (either in RGB triads or as the new white LEDs) along a rectangular bar the same length and width of the edge entrance to the backlight's plastic plate, letting the light distribution system provide the needed color mixing and brightness homogenization. Since the entrance edges to any given backlight plate is fixed in area, there is a limit to the number of LEDs that can be so distributed.Some current 18.1" LCD backlights use two CCFTs along the top edge and two CCFTs along the bottom edge of a 10 mm thick light distribution plate in order to generate LCD screen brightness in excess of 200 Nits (61.6 FL). As it is preferable to dispense this brightness over all viewing angles, the total number of lumens that must flow through the LCD to do this is 1340. Then, compensating for losses in the light distribution system, between 1500 and 2000 lumens need be provided as input light. With 1 mm x 1 mm white LEDs delivering 10 lumens apiece, 200 such emitters could be arranged along the backlight's 367.8 mm edge in some distribution where they would be more than about 1 mm apart. With 3 mm x 3 mm RGB LED triads, each potentially yielding 60 RGB lumens, only 33 such units would be needed. Since 122 such units could be arranged in a line along the 367.8 mm edge, each triad unit would then have to be on 11 mm centers. In either case the input light would be far from uniformly mixed and have a point-lie character, which is not desirable.The alternative to this offered by the present invention is a means for coupling the LED's lumens into a long plastic rod, which disperses and mixes them, using the rod as a means of coupling to the same backlight plate's entrance edge. 
7.7.1 Color Mixing and Angle Expansion The basic means of light source panel coupling is illustrated schematically in FIG. 46 where the +/-β light source output beam couples directly to an angle transformer arranged to expand (rather than contract) the system's output angle to substantially +/- 90 degrees in air. While such angle expansion can be performed using a traditional condenser lens (spherical, aspheric or Fresnel) applied to or near the output face of the associated light source cube or adjacent to the output aperture of the associated light source panel, the ideal non-imaging angle transformer as described above in FIG. 8 with regard to sidewall shape 135 and with regard to angle reduction as in the system of FIG. 35 may be preferable in many of the backlighting applications to follow. Not only does the non-imaging concentrator perform with highest possible transformation efficiency, its physical shape and form simplifies the alignment needed for efficient input and output optical coupling. One example of this particular angle conversion sub-system is shown schematically as 970 in FIG. 46 wherein fully-truncated light source cube 966 is combined with tri-color light source panels 284, 286 and 288 as in all the examples above, and mated to the large aperture 972 of a non-imaging angle transformer 974 of square (or rectangular) cross-section. Output light from the light source cube section 964 immediately becomes input lighting within aperture 972 , and then by efficient reflections from sidewall boundaries 976 , output lighting 978 from transformer 974 's output aperture 980. The design of transformer 974 is such that output light 978 is substantially +/- 90 degrees into air. 
Alternative embodiment of this sub-system, 984 , combines transformer 974 with a single light source panel 976 that may be one of the mono-color elements 284, 286 or 288 or it may be a light source panel whose constituent illuminating pixels each contain triads of red, green and blue LEDs. 7.7.2 High Lumen Light Bars One beneficial use of the wide-angle light source systems of FIG. 46 is as the source of light for a backlighting illumination system. For this purpose, an extension element, 992 in FIG. 47 , is required to distribute output light 978 FIG. 46 within the context of backlighting, as in backlighting an LCD screen, a photographic transparency or an appliqué. This element is a transparent lightpipe illuminator 992 of length LPIPE 986 matched and attached to the transformer's output aperture 980, as shown schematically in FIG. 47 . Lightpipe illuminator 992 is any transparent, low optical loss material, including glass, acrylic, and polycarbonate as a few of the possible examples. Output light 978 as in FIG. 46 enters lightpipe illuminator 992 and is trapped by the total internal reflections illustrated in FIG. 47 , eventually escaping lightpipe 992 with minimum loss through its end face 987 as output rays 988. In order to facilitate reasonably uniform escape of the trapped light from lightpipe 992 through the long bounding faces running along the lightpipe length, surface scattering elements 998 are added along one or more of long faces 994 (or within the lightpipe volume) in the manner that has become commonplace in the so-called dot-pattern backlight plates used in the backlighting of almost all flat panel LCD screens. The escape mechanism is illustrated in detail 996 by total internally reflecting ray 1000 which encounters scattering element 998 on face 994 at point A, whereupon the ray is scattered backwards in approximately a Lambertian distribution of rays 1002 , with only a small energy fraction remaining in the specularly reflected ray 1004. 
All rays 1002, whose angles fall within about +/- 42 degrees from any lightpipe face surface normal are refracted outwards by Snell's Law into the air (or medium) surrounding lightpipe illuminator 992 . Light rays such as 1006 whose scattered angle with any surface normal exceeds about +/- 42 degrees remains trapped by total internal reflection until encountering additional scattering elements 998 at downstream location within lightpipe 992, where new chances for scattering and escape are presented.If light source system 982 in FIG. 47 couples substantially all its lumens into lightpipe illuminator 992, substantially all such lumens are radiated by the structure into air or the medium surrounding length 986 of lightpipe 992. As such, lightpipe illuminator 992 provides wide-angle output light in much the same light distribution, as does a fluorescent tube. Accordingly, to make best use of this widely dispersed light emission, a three-sided reflector system 1006, 1008 and 1010 is included, as is illustrated in FIG. 48 , to channel light emission from three faces of lightpipe illumination element 992 through one designated lightpipe output face 995. In one possible example of this compound reflector, each element is a plane white diffusely scattering sheet that may be in close proximity to lightpipe faces 994, 997, and 999. Density 986 ( FIG. 46 ) of scattering elements 998 deposited on one or more surfaces of lightpipe illuminator 992 (and optionally within the lightpipe medium itself) must be sufficient to extract substantially all light coupled into lightpipe 992 at aperture 980. When this ideal scattering cross-section is achieved, very few rays 988 exit lightpipe end face 987 and substantially all exit at face 995. Light extraction from lightpipe illuminator 992 occurs as it does in all prior art dot-pattern backlight plates, by the mechanisms illustrated in cross-section detail 1025. 
Illustrative lightpipe ray 1012 makes one of many total internal reflections at point A because ray 1012 reflects as ray 1014 from bottom surface 997 bounded by air without contacting a scattering element 998. Continuing ray 1014 strikes rear lightpipe surface at point B, reflecting as ray 1016, and also avoids a scattering event. Illustrative ray 1 016, however, on striking upper lightpipe surface 1001 at point C, does so on scattering element 1018 , and thereupon scatters in multiple directions. Scattered ray 1023 , for example, scatters into a direction eligible for escape through front face 995 , which in this illustration is covered with external output layers 1022 and 1024 , which may provide additional diffusive scattering ( 1024 ) and/or specular reflection as in the case of a reflective polarizer ( 1022 ). As another example, scattered ray 1020 scatters in a direction eligible for escape through face 999 , but in doing so strikes back-reflector sheet 1008 at point D, and is further scattered. One of the scattered rays at point D, ray 1027 , scatters in a direction eligible for escape through output face 995 , near point F on first output layer 1024. The resulting light source system of FIG. 48 is 1007, the sum of light source element sub-system 982, lightpipe 992 and reflectors 1006, 1008, and 1010, plus any output layers such as 1022 and 1024. One Illustrative application 1031 of light source system 1007 is illustrated schematically in FIG. 49 , with edge face 992 brought into close proximity with the corresponding edge face of dot-pattern backlight system 1026, itself composed as in most prior art descriptions of at least one lightpipe plate 1030 (containing scattering elements in its bulk or on its lower plane surface) lower reflector sheet 1032, and upper light diffuser 1028. Exit light from face 995 of light source system 1007 is coupled efficiently into lightpipe plate 1030 as total internally reflecting rays 1029 . 
These rays escape from backlight system 1026 by the same mechanisms described above as backlighting output rays 1033 , which back illuminate the correspondingly rectangular aperture of an LCD screen or any passive appliqué, pattern or film. One higher performance variation on system 1031, 1032, is also shown in FIG. 49 using two light source systems 962 , one at each end of lightpipe 992. 7.7.3 High Lumen Light Bars and LCD Backlighting The systems of FIG. 49 enable a very large number of lumens to be emitted through the output aperture of backlighting system 1026. Suppose as one of many possible examples, illustrative light source cube 966 (or 340 ) of previous examples is used in light source sub-system 982 with light source panels 284, 286 and 288 each arranged to provide 1300 lumens. The respective transmission efficiencies of dichroic combiner cube 274 and angle transforming element 974 are about 0.8 1 and 0.9 respectively. The respective light extraction efficiencies from lightpipe illuminator 992 and its surrounding reflectors, and from backlighting system 1026 are about 0.75 and 0.70. Hence, the total unpolarized RGB output light extracted in variation 1031 of FIG. 49 over the backlighting aperture is about 1,500 lumens. Adding polarization recycling means to layer 1022 or to the light source panels themselves, the polarized output becomes about 1,100 lumens. Then, if the backlighting system is for an 18.1" diagonal LCD screen (such as the LQ 181 manufactured by Sharp), screen brightness over all angles is about 110 FL (376 Nits). Using system 1032 of FIG. 49 under the same conditions, output brightness doubles to 750 Nits.A variation on the use of light source system 982 in backlighting applications is illustrated schematically in FIG. 50 . 
In this variation, the aspect ratio or shape of the constituent light source panels is changed dramatically from the nearly square (13.5 mm x 9.94 mm) implementations of panels 284, 286 and 288 to the bar-like shapes of 1032 (green), 1034 (blue) and 1036 (red) applied in FIG. 50 . Deployed in this instance more as a light source bar, the constituent mono-color light source panels are each composed of, for example, one row of illuminating pixels 1042. In this example, pixel structure 221 of FIG. 16 is used as the means 1037 to expand the light emitted from each constituent LED 70 into the four virtual image outputs 1048, 1050, 1052 and 1054 that comprise each illuminating pixel 1042 , as originally described in FIG. 4 . Optical layers 58 and 60 in 221 , as arranged all earlier examples, maintain output light within +/-β (+/- 22.5 degrees in air). Light bars 1032, 1034 and 1036 are arranged about the adjacent sides of an elongated version of dichroic combiner cube 274, 1040 , whose axial length has been made equal to that of the light source bars themselves. The resulting combiner bar 1033 is coupled to dielectric angle transformer 1038 whose aperture length has been extended to match the aperture length of the combiner bar, forming light bar sub-system 1056 in FIG. 50 . Light bar sub-system 1056 is coupled along one edge of backlighting system 1026, as in lighting system 1058 of FIG. 50 . The backlighting system variation of FIG. 50 provides a more compact means for distributing LEDs and their illuminating pixels than does the arrangements of FIG. 49 . System 1058 of FIG. 50 disposes illuminating elements 1042 in a row along an edge of the backlighting system, while light source system 1007 of FIG. 48 concentrates its illuminating pixels in an external block (or blocks) 964. 
If as one example of system 1058, illuminating pixels 1042 are so that each constituent light source bar contains the same number of elements ( 130 ), as did the more symmetrical light source panels ( 284, 286 and 288 ). For this to be possible, each illuminating pixel would be (375.76/130) or 2.8-mm.on each edge, which is not unreasonable. In this case, backlighting system 1058 yields about 2,100 unpolarized (1,600 polarized) RGB lumens and a resulting LCD brightness, over the full output aperture of about 155 FL (538 Nits).Of these two backlighting examples, the system of 1058 is preferred over the system of 1031 on the basis of its 1.4x higher brightness per LED chip (or brightness per watt). System 1058 is also preferred because of its efficient utilization of backlight system volume. For highest lumen delivery performance applications, one light source bar 1056 can be deployed on each of the four edges of backlighting system 1026 .Light source sub-systems 1007 ( FIG. 48 ) and 1056 ( FIG. 50 ) are used as the input light source for any standard (dot-pattern or facet-pattern) backlight (edge light), replacing the conventional fluorescent tube and its surrounding reflector, or any equivalent bar of LEDs and associated reflectors.Merely arranging LEDs in or along a long rectangular bar, and directly coupling the wide angle light from that bar to the edge of a dot-pattern backlight system 1026 is one approach, but it involves certain fundamental design limitations and that are minimized or avoided altogether using the preferable forms of FIGS. 49-50 . 7.7.4 Small Aperture Backlights Yet another potential backlighting example is presented by the small 1" -2" diagonal sized direct-view transmissive LCDs used to preview pictures taken with digital cameras. The backlight's power must be as low as possible to minimize drain on the camera's limited battery capacity. 
Yet, to assure high-contrast viewing in the most brightly lit of user environments, it would be desirable that the backlight be made strong enough so that the display brightness is on the order of 500 Nits, and if possible, over a full range of viewing directions. The LCD diagonal on the compact Olympus D-400 digital camera is 47 mm. The LCD's transparency to unpolarized light is about 5%, and to polarized light, about 9% (as in the above examples with Sharp's LQ181 18" diagonal LCD screen). The active display area of this much smaller display is 0.0114 ft2. Using unpolarized backlighting, the preferable backlight must then provide 2 lumens from the LCD or about 20 lumens from the backlight. Suppose the 47 mm diagonal conventional dot-pattern backlight delivers to the LCD 70% of the lumens coupled into a lightpipe single edge. This in turn means that about 30 lumens must be delivered by the source of white edge light.Since each of today's modern red, blue and green LED chips as introduced in the examples above deliver about 20 lumens apiece, it would be best to use only one LED chip of each color. If so, the three LEDs and a 50% coupling efficiency to the backlighting systems 37.6 mm input coupling edge would supply the 30 input lumens needed. Yet, with each chip approximately 0.5 mm on an edge and the coupling edge 37.5 mm in length, some mixing means must be involved to assure that the backlight system's output light 1033 is well mixed over the entire output aperture.In principal, either light source sub-systems 1007 or 1056 provide the means for color mixing and light distribution over the 37.6 mm coupling edge. Of the two approaches, system 1007 and its backlighting implementation 1031 of FIG. 50 is used as an example. 
In this case, each mono-colored light source panel used in sub-system 982 consists, in one example, of a single 4 mm x 4 mm illuminating pixel 1042 providing at least 10 lumens of unpolarized light to its corresponding 4 mm dichroic combiner cube. The total etendue of the combiner cube aperture (in the combiner media) is from equation 14-15 above, (16) Sin2(14.8) or about 1 mm2. This suggests that the preferable size of output aperture 980 of dielectric angle transformer 974 is about 1.5 mm square, which then becomes the preferable cross-section for lightpipe illuminator 992 , which is made 37.6 mm in length 986 ( FIG. 47 ). In this example, corresponding dot-pattern lightpipe plate 1030 is made 28.2 mm by 37.6 mm by 1.5 mm. Using the same transmission efficiencies in the examples above, the backlighting system's output light 1033 contains 11.5 unpolarized RGB lumens (8.6 lumens polarized). With 9% LCD transmission of polarized lumens through the LCD this small (0.0114 ft2) display panel exhibits 68 FL (or 235 Nits) of white-field image brightness. Then adding a second identical light source sub-system 982 as in backlighting system 1032 in FIG. 50 , a total of 6 LED chips can supply a viewable brightness of 470 Nits over all angles of view across the 28.2 mm by 36.7 mm display screen aperture. 7.8 Example 5: Task/Flood Lighting Based on FIG. 51 Numerous other general lighting and image display applications exist for the light source panel, and the light source cube. The light source panel itself can be used as an efficient general lighting element, wherein its +/-β (+/-22.5 degree) output beam is used for lighting a specified task area, with or without with secondary optics that spreads out or condenses the illuminated area differently than does the intrinsic illumination angle. In such task-lighting applications, the preferable light source panel embodiment is selected for the specific lighting task involved. 
One standard illustrative work surface area to be lighted is a 60" by 30" desk. A standard under-cabinet commercial illuminator housing is 52" long, 11" deep and 2" thick. The housing contains one 34 watt Philips fluorescent tube and is typically mounted approximately 17" above the surface to be lighted. Direct illuminance measurements made in foot-candles (lumens per square foot) for such a treatment, show 65 fc at the very center of the field illuminated, with intensities rising to a maximum of 85 fc along the back edge, dropping to 10-30 fc at field corners and toward field edges rather quickly. In order to supply 100 fc over the entire 12.5 sq. ft. surface to be lighted requires a source of 1250 white lumens. Achieving this coverage with no additional angular diffusion from the standard light source panel of FIG. 16 having a +/- 22.5 degree illumination cone, physically requires one centrally mounted 16" by 46" panel, or as one of many other possibilities, a single 16" x 31" array of 6 separate 1" light source panels each spaced from each other by 14" gaps as shown schematically in FIG. 51 . If using six 1" square light-source panels each would supply 208 lumens over their 1" square output apertures. Yielding 30 RGB lumens from each illuminating pixel implies that there are about seven illuminating pixels per panel. One possible format for the 1" square light source panels, involved would be sixteen tri-color illuminating pixels per panel in a 4 x 4 illuminating pixel array. Each illuminating pixel then is a 6.35 mm square and each triad of red, green and blue LEDs are contained within their associated 3.175 mm square reflecting cavities. Accordingly, each 1" square light source panel of these dimensions supplies 480 lumens when driven at about 0.25 watt per LED, which is roughly twice the lumens needed for the average 100 fc work surface luminosity sought. 
Operating the LEDs at 0.11 watt apiece achieves this 100 fc task lighting performance and does so over the entire surface. There are 288 LEDs (16x3x6), so the total operating power for 100-fc performance (assuming equal amounts of red, green and blue) is 28.8 watts, roughly the same as for the 34-watt fluorescent tube used commercially.This illustrative six-element task light spreads illumination evenly over the entire work surface, is several millimeters in thickness rather than several inches, and allows precise electronic control over the lighting color and color temperature. Moreover, it is dimmable to any lighting level, and provides a peak work surface illuminance of up to 230 fc everywhere when, operated at full (72 watt) power that is applicable to tasks demanding such : higher lighting levels. The 34-watt commercial housing with its fluorescent tube wastes considerable more than half the lumens it provides, is much bulkier, only supplies a one color temperature, is not dimmable, and creates uneven illumination.There is no clear standard in overhead lighting luminaires for offices and workspaces. Many of these lighting requirements are fulfilled by a wide variety of overhead proffers built into (or hanging from) the ceiling. Other common lighting treatments involve combinations of overhead flood and spot lamps having a wide variety of sizes, wattages and physical arrangements. Whatever their configuration, overhead luminaires are expected to convey sufficient light to task areas and less light as background illumination of the floors and walls. In one typical office environment, 80-125 fc of illuminance has been provided on key work areas and about la fc in the more peripheral areas. In one typical 17' x 17' x 8' conference room containing a central 10.5' x 6' table, lighting is provided by an array of twelve separate sealed-beam halogen flood lamps sited 66" above the table top. The maximum tabletop illuminance is measured as 75 fc. 
General illuminance away from the table is 10-15 fc. The lamps are 75-watt GE PAR 30/L Long Neck Wide Floodlight Indoor Light Bulbs, each supplying 1050 lumens over a useful life of 2000 hours.The same performance can be achieved using six 2" square light source panels, each panel an 8x8 array of illuminating pixels, supplying 1,920 lumens in the illustrative +/-25.4 degree angular range common to all examples thus far. These six light source panels are arranged the same way as in the task lighting example of FIG. 57 , this time along the edges of a 62" x 122" backing plate mounted to the ceiling and centered on the conference room table, one light source panel in each corner, and one on the center of each long edge. In this particular configuration, however, every 2" light source panel sits 58" from its neighbors. Each panel in this arrangement produces a 60" x 60" illumination footprint on the tabletop plane that contains a uniform distribution of 1,920 lumens. At the 58" spacing, these 60" square lighting patterns are contiguous with each other, just as shown in the example of FIG. 57 . The resulting tabletop illuminance from this configuration is 76.8 fc (6 panels x 1,920 lumens/panel divided by 150 square feet). Each of the six 2" square light source panels used contains a total of 192 LEDs driven by 48 watts (0.25 watt per LED). Total electrical power is therefore 288 watts, about one-third the conventional twelve-lamp usage, saving in this case, 612 watts.Yet another way to achieve the same 76.8 fc illuminance spread over a 10' x 15' area is to use a single light source panel 1096 or light source cube 1110 mounted on the ceiling in the center of the room in conjunction with an output lens 1092 or diffuser to increase the spread of the light from the illustrative +/-22.5 degrees 1102 to the larger angles 1108 needed to make the desired footprint. This straightforward principle is illustrated schematically in FIG. 
52 for a bulk plano-concave lens element 1092 and for a negative Fresnel lens 1093. Lenses 1092 and 1093 can be either spherical or cylindrical, depending on the illumination pattern 1106 sought. When light source panel 1096 contains tri-color illuminating pixels (separate red, green and blue LEDs within each pixel) as in the example above and has a substantially square output aperture, its intrinsic illumination pattern 1104 is transformed to a symmetrically enlarged illumination pattern 1106 by a spherical lens, or to an asymmetrically enlarged pattern 1106 by two crossed cylindrical lenses. The most compact arrangement is provided by the use of one spherical (or aspheric) Fresnel lens for symmetrical patterns and by two sequential cylindrical (linear) Fresnel lenses, axes crossed 90-degrees to each other, each Fresnel designed for the required angle in the direction it controls. In the conference room example, the required angles are approximately +/-53.75 degrees to spread the light 15' and +/-42.3 degrees to spread the light 10'. Since the sizes of the light source elements (panels or cubes) are so much smaller than the spreads to be created, the light source dimensions can be neglected. 7.9 Example 6: Direct View Image Display Thus far, all application examples of light source panels and light source cubes have involved their use as illuminators. Every mono-colored LED was operated collectively as a mono-colored group, each group having particularly even uniformity whatever its monochromatic or composite output color. When independent control is provided for the red, green and blue emitters within each separate illuminating pixel, the spatial uniformity is modulated rather than even, and the light source panel becomes a spatial light modulator capable of monochromatic or full color image display across the beam's aperture. 
This usage, however, is differentiated from most present image display technologies in that pixel sizes range from 1-2 mm on the low end to tens of millimeters and more on the high end. In this context, each illuminating pixel of the earlier examples becomes an image display pixel. As such, one large light source panel, or a two-dimensional array of separate light source panels can be deployed as one large pixel image display for outdoor use in stadiums, along highways, as electronic signage, or as display walls in large office workspaces.Operating a light source panel as an information (or image) display assumes that a means of interconnection is arranged beyond the one described in FIG. 14 which provides a common buss for each of the two diode contacts in the array that is interconnected. For display, each mono-colored diode has to be controlled separately, requiring a dynamic interconnect system analogous to those used with LCDs and DMDs.Assuming such interconnection means is implemented practically, display applications of the light source panel range from low information content alphanumeric characters and icons, to full-color, full-motion, high-information content video displays. Image display brightness is governed by the lumens generated by the individual pixels, the pixel aperture and the pixel's effective output angle. Potentially, each LED chip is capable of enormous direct viewing brightness, as it generates a relatively large number of lumens over a very small surface area. This capacity is evident by just staring directly at almost any fully powered LED, which often appears too bright to look at. That such pinpoint brightness is possible from an LED is not surprising as the highest performing chips release 20 viewable lumens over all angles from a surface area of about 0.3 mm2(3.2E-06 ft2). This corresponds to a Lambertian brightness of 6M FL. When used in display, these lumens have to be dispersed over a considerably larger pixel area. 
If, as in previous examples, the fully-powered illuminating pixel nets 30 RGB lumens over its illuminating aperture, the effective Lambertian brightness of a 20 mm square pixel is still about 7,000 FL (24,000 Nits), which, while much higher than that of common direct view LCD image displays, provides the necessary brightness to compete with direct sunlight in outdoor viewing situations such as stadium scoreboards, view screens and highway billboards. Yet, there are many possible lower brightness applications that become practical when operating the constituent LEDs at a fraction of their maximum levels and spreading their light over larger pixel areas. One example of this is a rolling message board with completely contiguous 8 mm square pixels. As just one example, a display module containing 100 pixels (300 LEDs) operating at 20 mW per diode draws a total of 6 watts (0.02 watts per diode, 0.06 watts per pixel), and pixel brightness is still about 3,500 FL (12,000 Nits). Visually more appealing than the dot-pattern-like appearance of traditional arrays or clusters of pre-packaged, plastic-encapsulated LEDs, the comparable light source panel displays described above would allow much more realistic font and image representations regardless of pixel size used.

8.0 Precise Control Over Source Images and Beams Using the Methods of Elevated Prism Sheets

Preferred embodiments of the multi-layer illuminator inventions of FIGS. 1-3, 7, and 10-15 share two distinguishing features: the emitting sources are separated from each other by non-emitting regions, and the illuminator's directional output light is made to appear continuous by the use of prism-like array sheets elevated within the illuminator a preferable distance above the emitting sources. The physical mechanisms by which elevated prism sheets convert discontinuous, non-directional input light into continuous and directional output light are both complicated and non-intuitive.
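The foot-Lambert figures above follow from the Lambertian relation: brightness in FL equals lumens emitted divided by emitting area in square feet (1 FL ≈ 3.426 nits). A minimal check of the 6M FL chip figure and the 7,000 FL / 24,000 Nit pixel figure:

```python
MM2_PER_FT2 = 304.8 ** 2  # square millimetres per square foot
FL_TO_NITS = 3.426        # one foot-Lambert in candela/m^2 (nits)

def lambertian_fl(lumens, area_mm2):
    """Brightness in foot-Lamberts of a Lambertian emitter of given area."""
    return lumens / (area_mm2 / MM2_PER_FT2)

chip_fl = lambertian_fl(20.0, 0.3)       # 20 lm from a 0.3 mm^2 chip
pixel_fl = lambertian_fl(30.0, 20.0**2)  # 30 lm over a 20 mm square pixel
print(chip_fl)                           # ≈ 6.2 million FL
print(pixel_fl, pixel_fl * FL_TO_NITS)   # ≈ 7,000 FL, ≈ 24,000 nits
```

The same relation, applied to the 8 mm message-board pixel at reduced drive, underlies the 3,500 FL figure quoted above.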
8.1 Brightness Enhancement Films and Their Use in Standard Backlights

Despite the fact that prism arrays have become common elements in practically all illuminators used to back light LCDs, their influence on the spatial uniformity of output light and the development of spatially uniform beams of light has been neither well-established nor productively exploited. The most common prism sheets used to enhance backlighting brightness are manufactured by the Minnesota Mining & Manufacturing Company under the trade name BEF, an acronym for brightness enhancement film. Such plastic prism films are generally composed of 50-micron wide micro prism grooves, each having 90 degree apex angles. Such films are commonly placed between a uniform wide angle fluorescent light source and an LCD screen, prism points towards the LCD, for the express purpose of brightening the output appearance of the display (hence their commercial description as brightness enhancement films). Display brightness increases through the use of such BEF sheets because the sheet's prismatic grooves concentrate lumens transmitted through the display into a narrower range of viewing angles than the un-modified illuminator would have otherwise developed on its own. Two BEF sheets, their prism axes crossed 90 degrees with respect to each other, are commonly used to achieve the highest possible LCD brightness enhancement. The standard brightness enhancement application is with the "dot pattern backlights" already described above. Within a "dot pattern backlight," substantially uniform light emitted by one or more cylindrically shaped fluorescent tubes is fed through one or more edges of a transparent lightpipe disrupted only by a distribution of scattering features (dots or facets) arranged to cause uniform escape of light through the lightpipe's large rectangular aperture and into the mating aperture of the LCD.
Diffuser sheets are used above and below the dot pattern lightpipe to make the backlight's spatial uniformity at the rear of the LCD featureless. No effort has ever been made to adjust or set the exact height of the BEF sheets above the lightpipe illuminator in a preferable manner. The magnitude of LCD brightness enhancement produced by the prism sheet is not affected by the prism sheet's height above the diffused lightpipe. A less common type of LCD backlight involves a parallel array of fluorescent tubes within a white diffusing box. In these higher brightness backlights, one or more diffuser plates are used between the diffusing box and the LCD to even out the illumination. Backlight brightness is usually high enough with multiple lamps that the expense and angle narrowing of BEF sheets is rarely warranted. There has only been one known LCD backlight application where the positioning of a BEF sheet has been used to modify the spatial uniformity of the backlight's output. This special purpose backlight involved two different types of light sources: one array of separated fluorescent tubes for high brightness daytime use, and one electro-luminescent source placed in the spaces between the fluorescent tubes, for low level night time use. In this case, the physical positioning of a single BEF sheet was used as a means to balance out the illumination provided by each source.

8.2 Prism Sheets and the Precise Effects of Their Elevation on Output Beam Uniformity

Successful practice of the present inventions depends on setting the spacing between sheets of prisms and the discrete emitting arrays beneath them, along with the characteristics of the prisms themselves. Preferably elevated, the prism sheets enable spatially discontinuous emitters to appear continuous, with the collective output illumination angularly directed within +/-β, the extreme angle β depending on the prism's geometry (β = 22.5 degrees when the prism apex angle is 90 degrees).
Prism sheet elevation above discontinuous emitting arrays serves to provide even beam uniformity while concentrating the angular cone of output illumination as compared to that of the original emission. Preferable practice of the multi-layered illuminator inventions described herein (FIGS. 1-3, 7, and 10-15) relies on disposing the prism sheets at a unique height above the light emitters, that unique height depending quantitatively on a variety of prism characteristics such as apex angle, base width, base angles, refractive index, the height of the prism base above the emitter's output plane, emitter size, spacing between emitters, and the brightness variation that exists within the emitter's boundaries. There are also two important external factors affecting this multi-layer illuminator's performance: how the illuminator's prisms are to be viewed in use (i.e. either directly by eye or indirectly, through one or more light scattering materials), and whether the prism's output beam, unviewed, is to be used to provide a source of general illumination.

8.2.1 Prism Sheets and Their Geometries

The basic prism sheet cross-section is represented schematically in FIG. 53 for triangular prism elements and in FIG. 54 for aspheric prism-like elements. Aspheric elements 1218 of FIG. 54 are quite unlike classical spherical lenticular lens structures, and behave more like prismatic lenses. The general prism form 1200 is shown in cross-section 1202. The apex angle θv 1204 is best in the range 35 degrees to 60 degrees half angle as shown, and preferably 45 degrees. Base angle α 1206 is 90-θv. Base width δW 1206 depends on the dimensional scale of the emitting elements they are to be used with and the method of prism sheet manufacture. For the LED emitters of FIGS. 14-16, as one example, base width δW is preferably 25 to 50 microns. For larger emitters, such as Corning's 12 mm wide fluorescent channels, there is latitude to use larger prisms.
For applications requiring maximum compactness it is advantageous to make the prisms as small as practical, which makes the prism sheet as thin as possible. Prism sheets are easily cast and cured, embossed or compression molded. Substrate layers may be a different material than the formed prisms, and can be, for example, polyester, polycarbonate or acrylic. When embossed (or molded), the prism material is melted, formed to a tool, and cooled. Various polymers and polymer composites are suitable for this process. Prism sheets can be laminated or bonded to thicker plastic or glass layers (for example 217, FIG. 16), to achieve the exact spacer height G1" in FIG. 16 that is required. Prism height H (1210 in FIG. 53) depends on prism angles and base width, with δW = 2H tan θv, or, as in FIG. 54, the polynomial expression given in equation 32, where k is the conic constant, R the radius of curvature, and a, b, c, and d the aspheric coefficients.

H(x) = (x²/R) / [1 + √(1 - (1 + k)(x/R)²)] + ax⁴ + bx⁶ + cx⁸ + dx¹⁰ + …

When the aspheric terms are adjusted, aspheric elements of the general shape illustrated in FIG. 54 are obtained which act in a prism-like manner. As one example, a radius of 0.135, a conic constant of -1, and a, b, c, and d coefficients of 2, 50, -4000 and 10,000 respectively develop, for a δW of 0.5 mm, an H that is about 0.2 mm. This design can be easily scaled to smaller dimensions. Often, for manufacturing tooling relief, a small gap or tool land 1212 is allowed between prism elements. Similarly, the apex may have a similarly small flat mesa. One unique aspect of elevated prism sheets 58 and 60 as used in all previous illumination examples is that they do not have to exhibit the extreme standards of cosmetic perfection that have been associated with 3M's BEF in direct view brightness enhancement applications. Cosmetic defects in 3M's BEF are directly viewable through the LCD display screens beneath which they are used.
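Equation 32 can be checked numerically against the worked example. A minimal sketch evaluating the sag at the element half-width x = δW/2 = 0.25 mm with the quoted parameters (R = 0.135, k = -1, a = 2, b = 50, c = -4000, d = 10,000):

```python
import math

def aspheric_sag(x, R, k, a, b, c, d):
    """Element height H(x) per equation 32: conic term plus aspheric terms."""
    conic = (x**2 / R) / (1 + math.sqrt(1 - (1 + k) * (x / R) ** 2))
    return conic + a * x**4 + b * x**6 + c * x**8 + d * x**10

# sag at the half-width of the 0.5 mm wide element
H = aspheric_sag(0.25, R=0.135, k=-1, a=2, b=50, c=-4000, d=10_000)
print(H)  # ≈ 0.20 mm, matching the quoted element height
```

The triangular-prism counterpart is simply H = δW / (2 tan θv), i.e. H = δW/2 for the preferred 45 degree half apex angle.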
And the LCD viewing standard is for zero viewing defects. Accordingly, extreme quality measures are taken during BEF's manufacturing, packaging and handling to prevent cosmetic damage to the prism substrate and the prism tips, which are extremely fragile. Preventive measures include discarding all damaged BEF sheets. No such costly measures are needed with present prism sheets 58 and 60. Cosmetic defects in prism sheets 58 and 60 cannot be directly viewed, and are therefore much less critical to function. Some degree of spatial mixing has been included above the prism sheets that blurs or totally homogenizes any visual defects local prism imperfections might contribute. Light source panels 248, 221 and 225 as in FIG. 16, for example, provide a diffuse scattering layer 28, which hides minor scratches and abrasions. The projection systems of FIGS. 17-22 and 24-32 each employ a Kohler-type angle transformation process intended to average out any spatial non-uniformity in the light source panels 284, 286 and 288 containing prism sheets 58 and 60.

8.2.2 Prism Sheets and Advantageous Virtual Image Formations

Whatever their origin, prism and prism-like structures develop virtual images of the light sources placed beneath them that displace as a function of the prism elevation and the prism geometry. Classic large prisms are well known for their ability to shift and displace well-collimated beams of light by means of refraction. When a well-collimated light source is viewed through such a prism, the light actually comes from a virtual representation of the source, and not the source itself. The virtual source is an image of the real source and has been shifted in position away from the real source location. When the prism apex is centered over the real source, and pointing away from the source, two virtual source images are so formed.
One source image and beam displacement is associated with each of the two oppositely tilted prism facets. These same two virtual source images are characteristic of arrays of prisms as well. The illumination system invention of FIG. 1 shows the two bilateral virtual images 26 and 27 of real emitting strip 24 being displaced by the action of prism sheet 7 with respect to one another as a result of prism apex angle 8 and prism sheet height 18. Provided the prisms used have substantially smaller base widths δW than width 42 of emitting object 24, the two virtual images overlap almost exactly when prism elevation 18 is made substantially zero (and the prisms are sufficiently small). These overlapping virtual images then separate from each other as prism sheet elevation 18 is deliberately increased. The illumination system inventions of FIGS. 3 and 7 show the more complicated set of quadrilaterally disposed virtual images resulting in two dimensions by virtue of the action of two crossed prism sheets 58 and 60. The set of four virtual images 108 of a single, square, emitting object 110 has been shown schematically in FIG. 4.

8.2.2.1 Single Prism Behavior

The virtual image shift with prism sheet elevation (or offset) is explained schematically for a single half-prism element relative to a very small diameter line emitter in FIG. 55. In this generalized schematic cross-section 1220, only the left hand side of the idealized prism element is shown (for emphasis). A single (paraxial) light ray 1222 is followed first in air as if leaving the narrow line emitter 1224 at a point P, located in this example a distance 1228, OFF, directly below the prism's base 1226, on a line 1230 drawn vertically downwards from the prism's apex 1232. This ray 1222 is shown to pass into the prism through its base 1226 as continuing ray 1238, whereupon it transmits through the prism material 1234 towards the hypotenuse edge 1236.
On reaching slanted output face 1236, ray 1238, depending on its incoming angular direction 1240, θ1, either transmits as ray 1244 into air at an angle 1242, θ3, if less than 90 degrees, or suffers total internal reflection (TIR), if 90 degrees or greater. The critical boundary ray 1246 is shown heading along prism face 1236. Illustrative output ray 1244 shown in FIG. 55 emerges directly upwards along what would be the standard direction of view or use 1248 of output light. As seen from FIG. 55, TIR prevents observation of light from a region of half width (S + S'), whose boundary at zero offset (S) is defined by the onset of TIR, as in equation 33, where H is the effective height of the prism above the object, α is half the prism's full apex angle, and θ2 is the angle made by the transmitting ray with the prism base's surface normal (90 - α - θ4).

S = [tan θ2 / (1 + tan θ2 / tan α)] H

When the source of light 1224 is offset a vertical distance 1228 (OFF) below the prism's base 1226, the boundary half width is increased by S' as in equation 34, where θ1 = sin⁻¹(n sin θ2), with θ2 = 90 - α - θ4, θ4 = sin⁻¹(sin θ3/n) and θ3 = 90 - α + φ (θ3 = 90 degrees for TIR).

S' = OFF tan θ1

The exit position of any output ray angle 1242, θ3, can also be calculated from these equations by using the desired output angle less than 90 degrees, rather than the 90 degrees needed as the pre-condition to TIR. For example, 45 degrees is the angle used to represent ray 1244 that is transmitted vertically upwards and directly towards the viewer along axis 1248. It is this vertical ray, under all conditions, that defines the center of one side of the prism's output angular distribution. Output ray 1244 as shown in FIG. 55 is unique in that it points directly in a major direction of use, prism sheet surface normal 1248, which is also the most common axis of view. Not all emergent rays from a given object point 1224 are as visible along axis 1248.
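Equations 33 and 34 chain together through the refraction angles θ4, θ2 and θ1. A minimal sketch of that chain for the TIR boundary (θ3 = 90 degrees), assuming a 90-degree apex prism (α = 45 degrees) and an acrylic-like index n = 1.49 (the index is an assumption, not stated in this passage):

```python
import math

def boundary_widths(theta3_deg, alpha_deg, n, H, OFF):
    """Evaluate equations 33 and 34: S is the half-width blocked within the
    prism, S' the additional half-width contributed by the air-gap offset."""
    alpha = math.radians(alpha_deg)
    theta4 = math.asin(math.sin(math.radians(theta3_deg)) / n)  # inside prism
    theta2 = math.pi / 2 - alpha - theta4                       # at prism base
    theta1 = math.asin(n * math.sin(theta2))                    # back in air
    S = math.tan(theta2) / (1.0 + math.tan(theta2) / math.tan(alpha)) * H
    S_prime = OFF * math.tan(theta1)
    return S, S_prime

# TIR boundary for unit prism height H and unit offset OFF:
S, Sp = boundary_widths(90.0, 45.0, 1.49, H=1.0, OFF=1.0)
print(S, Sp)  # S ≈ 0.047·H, S' ≈ 0.074·OFF
```

Passing a θ3 smaller than 90 degrees through the same chain gives the exit position of any other output ray, as the text describes for the vertically emerging 45-degree ray.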
Two illustrative sets of paraxial input rays 1254 and 1260, resulting in output rays that would not be perceived by a viewer positioned along axis 1248, are shown in FIG. 56. These rays are actually traced using commercial ray-tracing software ASAP™ (Breault Research Organization). Illustrative prism element 1258 in prism sheet section 1250, fed with oblique input rays 1254, returns practically horizontal output rays 1256, far outside a viewer's field of view along axis 1248. Illustrative prism element 1258 in prism sheet section 1252, fed with oblique input rays 1260, returns output rays 1262 that fall just outside a viewer's field of view along axis 1248. These rays 1262, while not viewable by eye, make up a key portion of the prism sheet's output beam. The unique set of rays that would be seen (imaged) by a viewer staring down axis 1248 is shown schematically in FIG. 57, which adds ray-tracing detail to the schematic representation of FIG. 55. Peripheral output rays 1270 and 1272 shown in FIG. 57 fall within +/-3 degrees of view axis 1248, and emanate from common object point 1224. The backward intersection of these rays, via dotted construction lines 1280 drawn along each ray in FIG. 57, determines the virtual image point 1274 and its focal plane depth 1276 for the corresponding object point 1224. The corresponding virtual image displacement 1278 is given by the lateral shift ΔX that transmission through the prism brings about for the prism sheet offset 1228 from object point 1224. Useful mathematical relationships for these displacements will be derived shortly.

8.2.2.2 Role of Neighboring Prisms

It is also necessary to understand that not all rays emitted by a light source array placed beneath a prism sheet are transmitted directly, and that neighboring prism elements in the sheet become involved in both the transmission and rejection process. Illustrative ray bundle 1285 is traced from object point 1284 in leftmost prism element 1302.
These rays undergo TIR at leftmost hypotenuse face 1294 of prism element 1302, and refract through the prism's opposing hypotenuse face 1296 at an angle that is not only far outside the viewer's field of vision, but on line with neighboring prism element 1304. The one ray 1288 that escapes capture by neighboring prism element 1304 practically runs along the plane of the prism sheet. The larger fraction of rays 1290 return as ray bundle 1286 to the light emitting objects from which they came below the prism sheet, by entering and reflecting from neighboring prism 1304. A small fraction of rays 1290 remains trapped within the prism sheet structure, as illustrated by ray 1292. Yet other rays, such as the practically vertical bundle 1314 traced in FIG. 59, undergo two total internal reflections within their initial prism element 1302, one on face 1294 and a second on face 1296, the combined action of which can be seen to return all this flux as rejected bundle 1316.

8.2.2.3 Internally Reflected Light and Its Recycling

These angle-specific total internal reflections, when combined with some type of reflective return mechanism, constitute the basis for the backlighting-specific brightness enhancement that has been the hallmark of 3M's commercial prism array film, BEF. Rejected photons that reflect or scatter back into angles of prism transmission increase viewable power within the directly transmitted range of angular output. While reuse of wasted photons is an admirable feature of prismatic structures in general, the reuse in and of itself does not influence the output uniformity in any appreciable manner, and is therefore not a critical feature for their use in the present invention. Photons are reflectively recycled randomly, and as such are equally apt to enhance bright regions of non-uniformity as they are to affect dark ones. The reflective recycling of the rejected ray fractions illustrated by bundles 1286 (in FIG. 58) and 1316 (in FIG.
59) is important to the present inventions only in that recycling efficiency increases the percentage of input light that becomes a usable part of the prism sheet's total light output.

8.2.2.4 Human Perception of Output Light From One Prism Array

Human perception of the prism sheet's angular viewing characteristics is affected by TIR processes within the prism sheet and by the limited acceptance angle of the human eye. The reason such effects are important to understand is that they influence how well one perceives the prism sheet to be working, as opposed to how the prism sheet actually works within the illuminator inventions presently described. Visual perceptions not critical to illuminator function may cause misinterpretations of the prism sheet's effectiveness. One example of this is given for traced rays in FIG. 60, which follows the wide range of ray angles emitted from infinitesimal line emitter 1320 placed in close contact with base plane 1322 of single prism element 1324. A viewer staring along axis of view 1248 sees the line emitter 1320 as two separated, sharply focused virtual line images 1338 and 1340 via output ray bundles 1326 and 1328. Other output ray bundles 1330, 1332 and 1334 are hidden from view by their angular directions. Yet, collecting all these output rays 1332, 1326, 1334, 1328, and 1330 on diffusion screen 1342 and looking at the screen, a very different result is perceived. Direct view of screen 1342 shows general illuminance from ray bundles 1332, 1334 and 1330, and concentrated illuminance from bundles 1326 and 1328, which might appear to be blurred representations of the line emitter 1320. There is, as indicated in FIG. 60, a much wider usable output field than perceived by a human viewer's eyes, when all paraxial ray directions are used as they are in each of the present inventions. The human eye sharply focuses light collected over only about +/-1 degree. Human perception outside this angular range falls off rapidly.
This difference is easily demonstrated when a real prism is used with a single, sharply ruled pencil line to approximate behavior of a line emitting element. A 14 mm high glass prism with a 90-degree apex and a 28 mm wide base is used as an example. A viewer standing over the prism apex sees two well-focused pencil lines displaced from each other about 6.5 mm. This is exactly the value given by the paraxial approximation of equation 33 with θ3 set to 45 degrees. The dichotomy between visual perception and full system behavior raises an important design issue that impacts preferable use of prism sheets in the present inventions. The critical elevation G1 of prism sheet 7 as in FIG. 1 (and FIG. 2), or G1' for prism sheets 58 and 60 in FIGS. 3, 7 and 15, can be set by visual judgment made directly through the prism sheet, or it can be made through diffusive layers 28 elevated above them. Critical elevations G1 and G1' can also be set by system-level mathematical calculation. Deciding which is the best approach depends on the way in which output light from the prism sheet or sheets is to be used. The same dichotomy between human viewing and system performance exists as well for micro prisms, as shown in FIG. 61, which is a schematic representation of the cross-section of a single micro-scale prism sheet 58 and its effect on light emitted from the 7.4 mm wide aperture of a single stripe emitter 1344 (equivalent to 24 in FIG. 1). Stripe and prism axes are arranged parallel. The smaller the micro prism 1346 height H, the smaller the internal image displacement distance S (as in FIG. 55). In the limit, the total image displacement with a sufficiently small micro prism array becomes approximately S', as given in equation 34.
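The 6.5 mm figure can be reproduced from equation 33. A minimal sketch, assuming n ≈ 1.5 for the glass prism (the index is not stated in the text): each of the two virtual line images shifts by S, so the viewer sees them 2S apart:

```python
import math

def image_shift_S(theta3_deg, alpha_deg, n, H_mm):
    """Lateral shift S from equation 33 for exit angle theta3."""
    theta4 = math.asin(math.sin(math.radians(theta3_deg)) / n)
    theta2 = math.radians(90.0 - alpha_deg) - theta4
    t2 = math.tan(theta2)
    return t2 / (1.0 + t2 / math.tan(math.radians(alpha_deg))) * H_mm

# 14 mm high, 90-degree apex glass prism; vertically emerging rays (theta3 = 45 deg)
S = image_shift_S(45.0, 45.0, 1.5, 14.0)
print(2 * S)  # ≈ 6.5 mm separation between the two virtual pencil-line images
```

The same function, scaled down to micron-sized prisms, shows why S becomes negligible for micro prism sheets and the displacement reduces to approximately S'.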
For convenience in depicting (and modeling) a dense micro array, a single 90-degree prism design has been scaled down from the illustrative 14 mm high prism described above, to a 145-unit array with 0.276 mm prism element base widths that is actually traced. Micro prism elements 1346 are further placed in optical contact with a thin (0.1 mm) planar support substrate 1348 made of the same optical material (e.g. acrylic, n = 1.49). This convenient depiction, though 5.5 times larger in scale than 3M's BEF, is actually its functional equivalent with regard to geometry and optical performance. Emitting stripe 1344 may also be thought of as a very dense array of parallel and infinitesimal emitting lines, each of which is separately split and displaced as in FIGS. 55, 57 and 60. The paraxial theory from the geometry of FIG. 61, represented in equation 35, predicts that when a uniform emitting stripe of width W is offset from the prism substrate by the equivalent distance 1348, W, a human observer sighting along axis 1248 sees two virtual stripe images 1350 and 1352 practically touching each other. The geometry implied by this is what is actually experienced in a real experiment. As one example of this, two sharp pencil lines are ruled parallel to each other and 8 mm apart on a sheet of white paper. For easier viewing, the stripe area between the pencil lines is colored orange. Standard 1 mm thick glass microscope slides are stacked as physical spacers between the plane of the paper and the plane of a single sheet of 3M's BEF, prism grooves aligned parallel to the ruled pencil lines. One stack of slides is placed on each side of the stripe to be observed so that the gap between BEF and paper is air rather than glass. With an 8 mm offset between prism array and stripe plane, the virtual images created by the prism sheet appear right beside each other, with about 1 mm (or less) of white space between them.
This suggests a small deviation between paraxial theory and the reality occurring when skew rays are taken into account. Perfect registration actually occurs when the offset is made slightly less than the paraxial approximation, which is confirmed both experimentally and by full ray tracing. Direct view along 1248 shows what appears to be a single orange stripe of width 2W.

OFF(paraxial) = W / (2 tan θ1)

In general, then, the ideal offset for the special case of on-axis viewing of a uniform stripe emitter directly through a single 90-degree micro prism sheet is just slightly less than the emitter's physical width, W, at least to a first approximation. And, when there is an array of identical stripe emitters, the ideal spacing between them for perfect virtual image registration is also about equal to the width, W, of the constituent emitter. The reason for this, as diagramed schematically in FIG. 62, is that the virtual image of a flat emitter has practically unity magnification. Consequently, each virtual image ideally displaces a distance equal to slightly more than half its width. This means that when the offset between the stripe plane and the prism sheet is just less than W mm, the image displaced W/2 mm to the right from one emitter and the image displaced W/2 mm to the left by the adjacent emitter can fit together with practically no overlap in the virtual empty space that exists between the two real emitters spaced W/2 + W/2 or W apart.

8.2.2.5 Human Perception of Output Light From Two Stacked and Crossed Prism Arrays

A similar analysis is made for two orthogonal prism sheets 58 and 60 placed above a two dimensional array of emitting squares (as in FIG. 4 and the multi-layer illuminator cases of FIGS. 3, 7 and 15). This is shown schematically in FIG.
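Equation 35 can be evaluated for the 8 mm stripe experiment. A minimal sketch, assuming an acrylic-like index n ≈ 1.49 (an assumption; the sheet's actual index is not stated here), with θ1 derived from the vertically exiting ray (θ3 = 45 degrees) via the single-prism relations given earlier:

```python
import math

def paraxial_offset(W_mm, alpha_deg=45.0, theta3_deg=45.0, n=1.49):
    """Equation 35: prism sheet offset that brings the two virtual stripe
    images of a W_mm wide emitter into edge-to-edge registration."""
    theta4 = math.asin(math.sin(math.radians(theta3_deg)) / n)
    theta2 = math.radians(90.0 - alpha_deg) - theta4
    theta1 = math.asin(n * math.sin(theta2))  # exit angle back in air
    return W_mm / (2.0 * math.tan(theta1))

print(paraxial_offset(8.0))  # ≈ 8.5 mm for the 8 mm wide stripe
```

The result is slightly more than W, consistent with the observation that perfect registration occurs at an offset slightly below the paraxial value, i.e. at roughly the emitter width W.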
63, including, for simplicity, just a single emitting square 110, its four shifted virtual images 1356, 1358, 1360 and 1362 (hidden), and the two prism sheets 58 and 60 elevated above the emitting plane the preferred height 1384 (G1'). Each prism sheet 58 and 60 has a passive substrate layer 1372 and 1376, and a layer 1374 and 1378 of parallel prisms. Prism-element geometry limits the angular extent of the output beam 1386 in the axis perpendicular to their respective groove axes. That is, prism sheet 58 limits output light to +/-βy 1392 and prism sheet 60 to +/-βx 1390. As such, output light 1386 appears to originate and project from each of the four virtual images 1356, 1358, 1360 and 1362 (hidden). This arrangement is fundamental to the LED arrays deployed in the inventions of FIG. 15, and all subsequent application examples. The prism sheets 58 and 60 give forth a directed beam whose cross-sectional uniformity is affected by maintaining proper spacing 1384 between the prism sheets and the emitters. Developing an exact analytical expression for spacing 1384 is complicated by the passage of paraxial and skew rays through both a lower 58 and an upper 60 prism sheet, which creates too many analytical possibilities for refraction and reflection. A simplistic approximation can be made for a skew ray that encounters both a lower and an upper prism element. This situation is described by placing another prism element just beneath element 1220 of FIG. 55. In this arrangement, the output of the first prism element becomes input for the second. Under these conditions, geometric relations for θ1, θ2, θ3, and θ4 as used above reveal that the lower prism sheet 58 should be spaced about W/2 above the emitting array for the contiguous virtual images shown in FIG. 63.
Actual experiment (as well as full ray trace analysis) applied, for example, to 8 mm wide emitting squares exactly 8 mm apart shows that the actual spacing is slightly more than W/2 and is closer to 5 mm, or 0.625W. The invention of FIG. 15, as represented functionally in FIG. 63, can be used as a source of illumination to be viewed directly (as, for example, in the backlight applications allowed by FIGS. 1-2, the traffic light applications of FIG. 41, and the potential taillight applications of FIG. 38), as a source of illumination to be viewed indirectly (as in the projection system examples of FIGS. 17-34 and the backlight applications of FIGS. 49 and 50), and as a source of illumination that provides illuminance on a viewed surface (as in the automotive headlight applications of FIG. 28, the theatrical lighting applications of FIG. 39, the roadway lighting application of FIG. 40, and the task lighting applications of FIGS. 51-52). When the light source panel illuminators or combinations are viewed indirectly, spatial uniformity is finalized by system elements placed between the viewer and the light source. In each of the projection system applications of FIGS. 17-34, for example, a second stage angle transformer provided output light, every spatial point of which represented an average of all points on the light source aperture. Consequently, any uniformity artifacts caused by the invention of FIG. 15 are diffused significantly by system behavior. When a viewer is able to see the light source panel illuminators (or combinations) directly, it is preferable to enhance spatial uniformity by their conjunction with conventional diffusers, as in the inventions of FIGS. 1-2. The amount of conventional diffusion used depends on the application.
8.2.3 Practical Example Where Virtual Image Beam Overlap is Necessary: Serpentine Fluorescent Backlight

All examples of the present invention thus far have concentrated exclusively on cases where the virtual image displacements brought about by prism sheets 58 and 60 were used to achieve a substantially contiguous or nearly contiguous pattern of images, as in FIG. 4 or FIG. 12. Spacing between emitters was made approximately the emitter's width, and the elevation of the prism sheets was then set for the contiguous or nearly contiguous condition. Not only is it not always possible to achieve sufficient spatial uniformity by the image displacement mechanism alone, but at times the emitting array used will not have emitter widths and spacings that can be made equal, or that it is preferable to make equal. Under either or both of these circumstances, beneficial results are still possible. One example of this situation is presented by the invention of FIG. 1 applied as an LCD backlight using a new flat fluorescent lamp developed by Corning, Inc. In LCD backlight applications, the viewer always looks directly through the LCD screen at the effective uniformity of the backlight providing the LCD's illumination. Featureless illumination is the performance standard by which most, if not all, backlit image display applications are judged. Meeting this standard typically requires a featureless backlight appearance. One preferable emitter for such backlight applications is a new, flat, serpentine fluorescent lamp shown schematically in FIG. 64. The lamp's perspective view 1396 shows a prototype 10.3" x 13.75" glass structure having a continuous hollow channel winding in a serpentine manner in sixteen parallel 12" sections from electrode 1398 to electrode 1400. If unwound as a single straight channel, the total running length, electrode-to-electrode, would be approximately 18 feet. The flat fluorescent lamp's cross-section 1402 in FIG. 64 is shown for 3 of the 16 parallel channels.
This unique cross-section is formed by Corning from a single layer of borosilicate glass that, while still molten, is folded in half so that the two halves, a molded surface 1404 and a relatively flat surface 1406, seal together cleanly and completely at all common joins 1408 without collapsing the molded structure. The result on cooling is the continuous hollow channel plate shown in perspective 1396. This hollow glass plate is transformed into a fluorescent lamp by coating the interior channel walls 1410 and 1412 with a standard phosphor, adding electrodes, an appropriate gas, and a getter, and then sealing under pressure. Matched with a ballast, power supply, and optional impedance conditioning conductors 48, the lamp emits white light through both glass surfaces 1406 and 1404 from its excited phosphor coating. Direct view of the lighted emitting plate is similar to what one would see looking at an array of parallel fluorescent tubes. The spaces 1416 between channels 1418 appear dark, and the overall emitter appears brightly striped. The illustrative emitting geometry is shown in cross-section 1402 (FIG. 64). The lamp thickness 1420, T1, is 7.25 mm. The phosphor-coated channel width 1422, W1, is about 12 mm. Flat section width 1424, W2, is about 8 mm. The horizontal distance between phosphor coatings 1424, W4, is about 3 mm. The basic repeat distance 1426, W3, for each channel is about 15 mm. The striped lamp's illustrative geometry 1402 has not been matched to the ideal geometry for the invention of FIG. 1, in which emitter widths and spacings are made equal. Corning, to generate the maximum lumens possible from the lamp's aperture, created channel separations of approximately 3 mm. Despite this tight emitter spacing, the multi-layer method of FIG. 1 can still be used advantageously to achieve the backlight illuminator performance required.
Elevation of prism sheet 58, and the associated emitter image displacements the elevation causes, are optimized for the minimum peak-to-valley brightness variation possible with the complex parallel channel emitter cross-sections involved. Then, associated diffusion layer 20 is elevated above prism sheet 58 the minimum distance 22, G2, that makes illuminator 1434 (FIG. 64) appear visually featureless at all angles of view. Uniformity optimization is possible because of the complex nature of the fluorescent emitting channel's actual brightness profile, which peaks at the center and tapers off across the rounded sections 1436 because of changes in plasma density and coating thickness. In addition, back reflector 50 recovers backside light output by the channels through surface 1406 (FIG. 64), and scatters it in all forward directions, including the dark spaces between channels. For these reasons, there is an optimum overlap of virtual channel images that can be set by varying the thickness 1430 of spacer layer 1428, as in backlight cross-section 1434 (FIG. 64). For this illustrative example, layers 34 and 20 are 60-degree x 60-degree holographic diffusers manufactured by Physical Optics Corporation (POC); layer 50 is a white diffuse reflector manufactured by Kimoto, Inc.; layer 58 is a 90-degree prism sheet with 50 micron wide prisms manufactured as BEF-50 by 3M; layer 1428 is a 2 mm thick acrylic plate; and gap spacer 1432 is made 8 mm in thickness. In addition, there is a 1 mm air gap between layer 50 and lamp 1402. In this arrangement, output light is observed as being visually featureless at all angles of view, across the LCD backlight's 15" diagonal aperture.
On-axis brightness measurements fall between 18,000 and 20,000 cd/m2 (nits), depending on lamp efficiency, for 12 volts dc and 2.8 amps dc (34 watts) applied to an optimized ballast circuit attached to electrodes 1398 and 1400. With the specific elements used in this example, high viewing brightness is observed over a wide range of vertical and horizontal viewing directions. Brightness exceeds 10,000 cd/m2 over a +/-40-degree range, and remains above 6,500 cd/m2 over a +/-75-degree range. Still other combinations can be arranged for progressively narrower viewing ranges, with associated increases in viewing brightness. The narrowest possible illumination range is achieved when single prism sheet layer 58 is replaced by two orthogonal prism sheet layers 58 and 60, as previously described, with thickness 1430 of spacer 1428 reduced accordingly, and output diffuser 20 changed to a narrower range of scattering, such as for example a 30-degree x 30-degree or 20-degree x 20-degree holographic diffuser made by POC. Featurelessness is characterized by the degree of brightness variation that occurs spatially, both over large distances (i.e., about 10 mm to 100 mm) and over small distances (i.e., about 0.5 mm to 10 mm). When there are no visible hot spots, cold spots or shadows discernible anywhere within the viewing field, the result is considered to be featureless. Human vision responds to each scale of brightness variation differently, and featurelessness requires acceptable performance in both regimes. Judgment of featurelessness is a human response best made directly, either by direct viewing through neutral density filters or by a filtered CCD camera. The height 22 of output diffuser 20 is adjusted until stripe visibility vanishes. In the present example, this occurs when height 22 is about 8 mm.
When spacer layer 1428 and prism sheet layer 58 are removed, a similar degree of featurelessness is achieved with a total gap spacing of 25 mm, which is about twice the comparable thickness in the present example.

8.2.4 Prism Design and Preferable Illuminator Performance

Prism and prism-like arrays develop virtual source images, and beams emanating from them, whose degree of overlap depends on the prism's elevation above the emitters, which in turn depends on prism geometry and refractive index. As described above, prism sheets with more steeply angled prisms exhibit more image and beam displacement for a fixed prism elevation. Prism sheets with more gradually angled prisms exhibit less image and beam displacement for a fixed prism elevation. As one example of this, for 90-degree prisms and centered output along direction of view 1248, the critical input angle, θ1, is 25.3 degrees; with θ3, 45 degrees; θ4, 28.33 degrees; and θ2, 16.67 degrees. When the prism's apex angle is reduced (or increased) from 90 degrees, all angles change accordingly. For example, when the apex angle is reduced to 80 degrees, θ1 increases to 29.1 degrees, with the effect that there is a larger image displacement at any given prism-film offset than there was with 90-degree prisms. This means that images seen viewing an 8 mm wide stripe through 80-degree prisms register perfectly at a smaller offset than they would with 90-degree prisms. The actual offset needed is 7.14 mm, a gap reduction of 0.86 mm or 12%, important in applications where the greatest possible compactness is sought. Preferable performance of the present illuminator inventions depends on more than the relationship between virtual image displacement and prism elevation, which can be determined for any prism design. Properly elevated for the prism design used, the directional output beam that results must also have a uniform cross-section and an included power that represents a significant fraction of the input light emitted.
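The quoted angles can be cross-checked by applying Snell's law at the prism sheet's flat base and at the exit facet for a ray that leaves the sheet along the axis of view. The sketch below (illustrative Python; the function name is ours, and an acrylic-like refractive index of 1.49 is assumed) reproduces the values stated above for 90-degree and 80-degree apex angles:

```python
import math

def critical_input_angle(apex_deg, n=1.49):
    """Paraxial input angle theta1 (measured from the viewing axis) for a ray
    that exits an elevated prism sheet parallel to the axis of view.  Each
    facet makes (180 - apex)/2 degrees with the sheet's base plane."""
    facet = math.radians((180.0 - apex_deg) / 2.0)
    theta4 = math.asin(math.sin(facet) / n)   # refraction at the exit facet (theta4)
    theta2 = facet - theta4                   # ray angle from vertical inside the prism (theta2)
    return math.degrees(math.asin(n * math.sin(theta2)))  # refraction at the flat base

print(round(critical_input_angle(90), 1))  # 25.3 (90-degree prisms)
print(round(critical_input_angle(80), 1))  # 29.1 (80-degree prisms)
```

For the 90-degree case this also yields θ4 = 28.33 degrees and θ2 = 16.67 degrees, matching the values above.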
In most cases of practical interest, beam power needs to be well confined within the beam's effective angular range, nominally +/-β in both meridians, as described above. The less light transmitted outside this range the better, except in some limited flood and task lighting circumstances, when a small amount of wide-angle fill light is often tolerable. In video projector applications in particular, any beam power conveyed outside the maximum permissible illumination angular range is completely wasted. Prism sheet design variables affecting the illumination beam include the shape and angular inclination of the prism or prism-like facets that make up the prism sheets used, the refractive index of the prism medium, and the efficiency of the recycling mechanism used to make a portion of the un-transmittable light transmittable. It turns out that the preferable prism geometry for the present inventions is the symmetrical (45-degree - 90-degree - 45-degree) Porro prism with 90-degree apex angle. Other geometric variations show certain deficiencies in either or both the distribution of output light with angle and the total delivered output power transmitted. Narrower apex angles used with symmetric side angles 23 result in a slightly narrower beam, but also much more significant light transmission at higher angles. Wider apex angles used with symmetric side angles 23 generally widen the beam angle. All geometrical asymmetries achieved with unequal side angles lead to wider, more diffuse beam angles. Similarly, any changes in facet curvature, such as the preferable prism-like facets of FIG. 53, widen the beam's angular range and soften its angular fall-off. Beyond this, the prism's refractive index does not show a particularly significant effect on performance. The refractive index of acrylic is about 1.49. Raising the prism's refractive index much beyond this is impractical, as it restricts the amount of output light.
8.2.5 Elevated Prism Sheets and Tubular Emitting Arrays

The present elevated prism sheet inventions are intended primarily for use with planar or nearly planar emitting arrays such as LEDs and flattened serpentine fluorescent channels. The inventions also apply to arrays of tubular emitters (e.g., standard fluorescent tubes) as a special case. The virtual images of tubular sources, however, develop curved fields, which must be considered properly in their best use. Because of this, the ideal prism sheet elevation differs substantially from the examples with planar emitters. For tubular emitters of diameter W, the comparable image splitting seen by an on-axis viewer is achieved when the emitting surfaces are separated from each other by a distance that is at least approximately equal to their emitting diameters, W, and when the 90-degree prism sheet is elevated above the closest point on the emitting surface by about W/2, rather than by the full emitting width W, as was the case with stripes. The paraxial ray geometry of this curved-surface configuration is examined more carefully in FIG. 65 and FIG. 66. A single prism sheet 58 is shown schematically as elevated distance GT 1440 above the tangent plane 1442 to the circumference of three substantially identical tubular emitting sources 1444 in the cross-sectional view of FIG. 65. Tubular emitters 1444 of diameter W emit light from every point on their circumference and every point along their length, in a Lambertian or near-Lambertian manner. Diffusely reflecting back plane 50 forms the bottom of a box-like container surrounding the emitters 1444, so as to scatter light emitted from the bottom half of each tubular emitter generally towards the gaps between emitters and back through the emitters themselves.
Prism sheet 58, its prism grooves running parallel to the emitting tube axes, is elevated preferably a distance equal to W/2 above tangent plane 1442 (W above emitter centerline 1448), so that the boundaries between emitted output beams 1500 and 1502 are substantially contiguous as perceived along axis of view 1446. The way elevated prism sheet 58 develops left-side output beam 1500 is described more rigorously, isolating the central emitting tube of FIG. 65, in the cross-section of FIG. 66. In this case, the emitting cross-section 1444 in FIG. 66 is centered initially at point F. The 90-degree prism sheet's cross-section is oriented as shown, and situated above the emitter in the plane of line I-J a distance W/2 (W being the emitting width or diameter). The axis of view is along lines parallel to H'-H. The axis of incidence for paraxial rays exiting the prism film along the axis of view is parallel to line K-B and, as developed above, makes an angle, θ1, with the axis of view, which in this illustrative case of 90-degree prisms is approximately 25.3 degrees. The portion of the emitting surface contributing visible rays to the left-hand virtual image, at least theoretically, is highlighted with the thick black line running between surface points A-B-C-D-E, a section covering exactly half of the emitting surface. Effective rays from emitting point A cannot reach a viewer without passing through the emitter's interior and crossing a visible part of the emitting surface A-B-C-D-E. Accordingly, the visible portion of the emitter as seen through the prism sheet is not the upper half of the emitter that would be seen under normal circumstances, but rather the portion A-B-C-D-E that is rotated counterclockwise θ1 from surface B-C-D-E-A. This means that by sighting through the prism sheet, the viewer is seeing effectively around the emitter's horizon point B.
The projected width (M-L) of this emitting section, A-B, is (W/2) tan θ1 sin θ1, or about 0.86 mm for an 8.5 mm diameter cylinder. The virtual image's entire projected width is W, presuming visually effective paraxial light rays from the entire surface are received. Yet notice that at the starting offset between prisms and emitter, which is W/2, rays from extreme point E on the emitting surface do not reach the emitter's center line, F-H. The implication of this is that there would be an incomplete separation (or overlap) between the left and right side virtual images. The correction for this deficiency would be to increase the offset between the cylinder's vertex point, C, and the nearest prism sheet point, H. This raises a critical design issue regarding specifications for the best output brightness uniformity. If rays E-N, D-H, C-K, B-L and A-M each presented comparable output flux to the axial viewer, then the conditions for perfect left and right virtual image registration at point H would require shifting the emitter's vertex point C downwards and away from the prism sheet an additional distance (W/2) sin θ1. If, however, considerably lower flux reaches the viewer from the sections A-B and D-E, where emitted rays approach the points of emitter tangency, A and E, then making this correction could result in an apparent gap between the otherwise adjacent images. This design choice underscores the importance, in general, of understanding the emitter's intrinsic brightness uniformity, including as a function of the angle of view.

8.2.5.1 Experimental Confirmation With Tubes

A few simple visual experiments using 3M's standard 90-degree prism sheet, BEF, illustrate the importance of understanding, and then correctly compensating for, a tubular emitter's emitting characteristics. First, a common 8.5 mm diameter artist's crayon is used as an illustrative cylindrical emitter.
Any cylindrical object can be used for this purpose, but the readily available crayon is a particularly convenient and graphic one. In this case, 1 mm glass spacers are placed on top of several adjacent dummy crayons so they will accurately set the air-gap between the 3M BEF substrate and the top of the crayon under observation, the BEF prism grooves running parallel to the edges of the crayon beneath. When the BEF sheet is suspended directly by the spacer crayons (i.e., with no glass spacers), the smooth side of the BEF substrate rests exactly on the periphery of the crayon under view (constituting the condition of zero offset). In this case, the image of the viewed crayon appears to have enlarged to a width of about 14.5 mm as a result of its sub-division into two overlapping crayon images having an intermediate 4 mm wide region of apparent overlap symmetric about the line of contact. This central portion of the image appears considerably brighter (and sharper) than the displaced portions. When five 1 mm spacers are added on each side, the prism offset increases to 5 mm. At this spacing, the crayon images appear to be approximately 0.5 mm apart along the centerline. An underlying sheet of white or colored paper is used to provide best contrast with the crayon images. Careful observation suggests that the crayon surface-images actually tilt downwards at approximately 45 degrees with respect to each other from their contact along the centerline, and appear as if the originally circular cross-section has been flattened into what looks like a slab. Because of this, the printing on the crayon's paper wrapper can be read as if from a nearly flat rather than curved surface. The visual width of each image, however, appears to have magnified slightly, from the original 8.5 mm diameter to a "flattened slab" of 10.5 mm width. This 1.23x magnification is not predicted by the paraxial ray geometry of FIG. 66, which suggests no magnification.
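Several of the paraxial quantities used in this tube example can be checked with a few lines of arithmetic. The sketch below (illustrative Python; variable names are ours, and θ1 = 25.3 degrees is taken from the 90-degree prism derivation above) evaluates the projected width of section A-B, the registration correction (W/2) sin θ1, and the half-circumference between tangency points A and E:

```python
import math

THETA1 = math.radians(25.3)  # paraxial input angle for 90-degree prisms
W = 8.5                      # emitting diameter of the illustrative cylinder, mm

# Projected width (M-L) of emitting section A-B: (W/2) tan(theta1) sin(theta1)
projected_ab = (W / 2) * math.tan(THETA1) * math.sin(THETA1)

# Extra offset for perfect left/right virtual image registration: (W/2) sin(theta1)
correction = (W / 2) * math.sin(THETA1)

# Arc length of the half circumference between tangency points A and E
arc_a_to_e = math.pi * W / 2

print(round(projected_ab, 2), round(correction, 2), round(arc_a_to_e, 2))
# prints: 0.86 1.82 13.35
```

The 0.86 mm and 13.35 mm figures match the values quoted in this section; the 1.82 mm correction is the (W/2) sin θ1 vertex shift discussed with FIG. 66.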
Magnification, and the associated image blurring, cannot be explained by the normal 2-degree angular acceptance of the human eye, and must be due to the behavior of skew rays. Aside from the apparent 1.23x image magnification factor, closer examination reveals still more about the nature of the curved image. A paper strip with 1 mm marking lines is taped to the crayon wrapper's circumference. With the addition of this scale, and the physical highlighting of the crayon's vertex point, we are able to view the resulting image organization more critically. The paraxial equations predict a vertex point shift of 1.9 mm at an offset of 4 mm. We actually observe, within the accuracy of observation, a shift of about 2.5 mm, which is close to the predicted shift multiplied by the apparent magnification factor. To the right and left of the vertex marking line we can see, respectively, four and nine of the 1 mm circumference marking lines taped to the crayon surface, with the last marking line on each side very difficult to visualize without the aid of a magnifying glass. The actual crayon circumference between points A and E in FIG. 66 is 13.35 mm. Hence, at least in this case, we seem to be seeing light from practically the entire 180 degrees designated by points A-B-C-D-E in FIG. 66.

8.2.5.2 Experimental Confirmation with Stripes

The same experiment is performed, for purposes of comparison, with a planar stripe. In this case, the stripe is sharply ruled onto a white sheet of paper using two parallel lines 8.5 mm apart. This sheet of paper is then placed on a flat surface and positioned so that the lines run vertically. For clarity, the region between the lines is colored orange. Again, 1 mm glass slides are placed in two equally high stacks, each an inch or so to the left and right of the colored stripe, and both well out of the field of view. The BEF sheet is laid, as before, smooth side down, so as to be suspended with prism points facing the viewer.
The film's grooves are oriented so they run parallel to the edges of the underlying stripe. When no glass spacers are used, one sees a single stripe as clearly as if the corrugated prism film were completely transparent. As the number of glass spacers used on both sides of the stripe is increased, the stripe, as indicated by its total width edge-to-edge, appears progressively wider. When the stacks are both 8 mm high, the stripe width appears to have almost exactly doubled. Visually, one sees two adjacent stripe images with only a thin (0.5 mm) white gap between them, suggesting that the offset used was just slightly larger than that which would have achieved perfect visual image registration. The brightness of each stripe image, as with the crayon images, appears to be approximately half that of the original stripe. Visual measurements are compared with paraxial calculations based on equations 1-3, and with the results predicted by a faithful computer ray trace model, in Table 1 for a set of offset distances.

Table 1. The total image width of an 8.5 mm colored stripe as viewed from above through a 90-degree 3M BEF prism sheet suspended above the stripe by various amounts (Air-Gap): as viewed by eye (Visual), by computer model (ASAP), and by paraxial calculation (Paraxial).

Air-Gap (mm)   Visual (mm)   ASAP (mm)   Paraxial (mm)
0              8.5           8.5         8.5
2              11.0          10.3        10.4
4              13.3          12.6        12.3
6              15.5          14.5        14.2
8              17.5          17.1        16.1
10             19.5          19.5        18.0
12             22.0          22.0        19.9

The commercial ray trace software ASAP™, developed and supported by the Breault Research Organization of Tucson, Arizona, was used to create a dynamic system model comprising one or more wide-angle stripe emitters, a functionally realistic prism sheet, and a viewing condition made to approximate that of the human eye. The results in Table 1 show excellent agreement between the computer model and the visual measurements. The deviation is largest for the smaller offset distances, but never exceeds about 6%.
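The paraxial column of Table 1 is consistent with a simple model in which each of the two virtual stripe images shifts laterally by the air-gap times tan θ1, widening the combined image by twice that amount. This linear-shift reading is our inference (equations 1-3 themselves are not reproduced in this section), but it reproduces the paraxial column to within about 0.1 mm:

```python
import math

THETA1 = math.radians(25.3)  # paraxial input angle for 90-degree BEF prisms

def paraxial_total_width(stripe_w_mm, air_gap_mm):
    # Left and right virtual images each shift by air_gap * tan(theta1),
    # so the edge-to-edge width grows by twice that amount.
    return stripe_w_mm + 2.0 * air_gap_mm * math.tan(THETA1)

for gap in (0, 2, 4, 6, 8, 10, 12):
    print(gap, round(paraxial_total_width(8.5, gap), 1))
```

Running this yields 8.5, 10.4, 12.3, 14.2, 16.1, 18.0 and about 19.8-19.9 mm, tracking the Paraxial column of Table 1.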
Even the paraxial calculations are quite reasonable up to an offset of 8 mm, and the deviations beyond that never exceed 5-6%.

8.2.5.3 Visual Differences Between Tubes and Stripes

One reason why the emitting tube is visualized differently from the emitting stripe when viewing it directly through prism sheet 58 is that the offset between any emitting line along the periphery of the tube and the prism array varies with position, as shown schematically in FIGS. 66 and 67. It is seen in FIG. 66 that physical emission from the emitter's vertex point C appears to have come from spatial point K on prism sheet 58. And with reference to FIG. 67, the origin of such emission appears to come from a curved plane 1520 whose curvature is quite different from that of the physical emitter 1444 itself, and where the distance shifted depends on where on the cylinder the emission actually started. (Note: With the flat emitting stripes, every parallel line within the stripe shifts an identical distance to the left and right for any given air gap.) The triangular (Δ) points shown on cylinder periphery 1444 of FIG. 67 represent the locations of selected emitting lines 1506, 1508, 1510, and 1514 along the surface of the emitter. The exact sag 1516 (ΔS) of each point below the cylinder's vertex point D' is given by the standard mathematical expression reproduced for reference in equation 34 for a surface having a circular cross-section of radius R. In this expression, x is the axis parallel to the plane of the prism sheet base, and R for the cylindrical emitting case is W/2. The points designated by squares (□) represent the focal plane depth of the displaced virtual source image and its spatial shift corresponding to each original emitting point. The curve 1520 drawn through these points should be considered the effective focal plane for the left-hand directly viewed virtual image created under this condition.
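Equation 34, referenced above, is the standard sag of a circular cross-section, ΔS = (x²/R) / (1 + √(1 − (x/R)²)). A quick numerical check (illustrative Python; the function name is ours) confirms that this form agrees with the direct expression R − √(R² − x²) for the cylindrical emitting case, where R = W/2 = 4.25 mm:

```python
import math

def sag(x, R):
    """Sag of a circular cross-section of radius R at lateral position x
    (equation 34's form, numerically stable near x = 0)."""
    return (x * x / R) / (1.0 + math.sqrt(1.0 - (x / R) ** 2))

R = 4.25  # W/2 for the 8.5 mm diameter cylindrical emitter, mm
for x in (0.0, 1.0, 2.0, 3.0, 4.0):
    direct = R - math.sqrt(R * R - x * x)  # algebraically equivalent direct form
    assert abs(sag(x, R) - direct) < 1e-12
```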
Emitting lines closer to the edge of the cylinder are shifted a larger distance than are those lines nearer to the cylinder's vertex. Hence, the way a prism sheet is elevated to homogenize the emitting channel's brightness non-uniformity depends strongly on the cross-sectional shape of the emitter, and then on any brightness non-uniformity over the emitting surface. Standard commercial fluorescent tubes are spatially uniform Lambertian emitters. Some other fluorescent sources, such as the serpentine flattened channels depicted in FIG. 64, show considerable center-to-edge brightness roll-off.

ΔS = (x²/R) / (1 + √(1 − (x/R)²))     (34)

A similar representation for the experimentally evaluated 8.5 mm wide emitting stripe is presented in FIG. 68. With prism sheet 58 elevated exactly 8.5 mm above emitting plane 1524, virtual image plane 1522 is formed a little more than 6.5 mm below the prism sheet and a little less than 2 mm above the emitting plane.

8.2.6 Effects of Prism Sheet Elevation on Directly Viewed Output Uniformity

The primary reason virtual image separation mechanisms are so important to understand, and correctly set, is that wrongly elevated prism sheets, even those mis-elevated by a relatively small percentage, can introduce brightness variations that become as visually distracting to a viewer as the discontinuously emitting light source array viewed directly by itself. Optimally elevated, however, the prism sheets alone can significantly improve visual appearance along their axis of view by a combination of changes to the illumination, including magnification, image shifting, image blurring and image overlap, such that the composite effect shows the minimum possible difference between peak brightness and minimum brightness across the viewing aperture for the conditions involved. Some applications of directly viewed illuminators, however, require what may be termed featureless uniformity across the illuminator's output aperture.
While there is a best overlap of virtual source images created in the present invention by means of exactly elevating the prism sheets above the source, the prism sheet elevation process alone may not produce sufficiently featureless uniformity on its own. There may still be visible brightness variations at the displaced image boundaries. While the image displacements themselves significantly improve output uniformity, still better results are obtained by filtering output light through one or more additional diffusion mechanisms. Rather than viewing the virtual image's focal plane directly through the transparent prism-like layers, direct viewing is preferably accomplished indirectly by looking through one or more diffusively scattering layers that have been elevated above the elevated prism sheet (or sheets), as shown schematically and idealistically in FIG. 69. In the generalized schematic cross-section 1542 of FIG. 69, single prism-like layer 58 is elevated a distance G1 above the discontinuously emitting light source 1530, followed by two standard diffusers 1532 and 1534: the first, 1532, elevated a distance G2 above the prism sheet; the second, 1534, a distance G3 above the first. This sequential and multi-layered combination of prism sheet and conventional diffusion mechanisms develops better brightness uniformity for a given total diffuser thickness than can be obtained using any conventional diffuser by itself, as shown in cross-sectional detail 1544, where a standard light-scattering diffuser (or diffusers) is elevated a distance G above the same discontinuously emitting light source 1530. In each case a plane back reflector is placed beneath the emitter, so as to return any backward emission from the emitters through the gaps between emitters and through the emitting channels themselves, establishing the emitting array's intrinsic brightness variation as one between BMAX and BMIN, rather than between BMAX and zero.
It will be seen that the closer the ratio between BMAX and BMIN is to unity, the better the overall brightness uniformity achieved by the system of diffusers. Almost any brightness non-uniformity can be minimized to the point of near featurelessness by the standard elevated diffuser method of detail 1544, provided the offset distance, G, between diffuser 1546 and underlying discontinuous emitter 1530 is made large enough, and the diffuser is made strongly enough scattering. Under such circumstances, the larger the gap, the more nearly equalized the number of rays reaching any two small regions on the diffusion plane becomes. The present invention's multi-layer combination of elevated prism sheet(s) and conventional diffusion, however, achieves the same visual result, but in a smaller total thickness. The reason this improvement is possible is the smoothing action of the elevated prism sheet(s), whose displaced virtual images significantly reduce the peak-to-valley brightness variations to be further minimized by standard diffusion, and do so with an elevation that is small compared to the elevation that would have been required using standard diffusion alone.

8.2.6.1 Experimental Comparisons Between Standard Diffuser Performance and Standard Diffusers Combined with Elevated Prism Sheets

Standard diffuser 1546 is elevated a distance G above a discontinuously emitting reference source 1530, as in cross-sectional detail 1544 of FIG. 69, until the discontinuous emission is observed to have become visually featureless. As a simple experiment, the discontinuous source is a series of emitting stripes separated by non-emissive regions equally as wide as the stripes themselves. Although featurelessness is a qualitative judgment, distinctions as to when a striped pattern becomes invisible can be made with acceptable reliability.
Instrument measurements of featurelessness are extremely difficult, as the peak-to-valley differences in brightness involved can be significantly less than 1%. At the large offset distance G needed to establish featureless image brightness, the brightnesses at the maximum and minimum points on the standard diffuser's surface, BMAX and BMIN, are each considerably lower than half the undiffused stripe brightness, and the ratio BMAX/BMIN is reduced from a very large value at G=0 to less than 1.05 at G=G1. Specifically, with 8 mm wide windows, 8 mm apart, cut into a thin, opaque, white, diffusely reflecting backlit sheet, we find that the intrinsic BMAX/BMIN is about 15-20 and that in this example BMAX is about 3030 FL everywhere within the stripes. Such an emitter was formed by placing the pattern over a very bright and uniform wide-area fluorescent source to generate even emission through the stripes. Stacks of 1 mm thick spacers were used to offset a thin holographic diffuser from this illustrative emitting surface. The output was viewed by eye, and by means of a CCD camera. Brightness measurements were made with a Spectra Scan Model P-640 radiometer. At an elevation of 8 mm, the emitting stripes were clearly visible, and the gaps between them visibly darker. As the diffuser's elevation increased, the differences between bright and dark regions reduced, but did not disappear. Only when G exceeded 26 mm in this case did the stripes appear to fade to near invisibility. No visual contrast at all was discerned at a diffuser spacing of 28 mm. When two rather than one holographic sheets were stacked together, the diffuser elevation for featurelessness was achieved at 24 mm. All other traditional bulk diffuser materials tried required elevations as large as 32 mm for comparable results. When using a single diffuser layer, the average brightness was 1300 FL.
When using two layers, the brightness was 1100 FL. A multi-level prism sheet system in accordance with the present invention was constructed in accordance with cross-sectional detail 1542 of FIG. 69, using only a single standard diffuser layer 1534. Prism sheet 58 was first elevated to the height G1 above the striped reference source at which the ratio BMAX/BMIN was minimized, which for this example was 7.9 mm. Then G3 was progressively increased until the observed output uniformity appeared similar to that obtained with just a standard diffuser. The best result was about 8 mm less than that of the single holographic diffuser layer described above, or about 21 mm in total. This final thickness varied +/-3 mm with the precision used to set the prism sheet's initial spacing. Such an improvement in overall thickness can be significant when extended to many of today's flat panel display applications, where package slimness is considered an important aspect of market appeal. Output brightness in this case was about the same as with the standard diffuser, as no effort was made to recover any of the prism sheet's back-reflected light. The output uniformity of a preferably elevated prism sheet is best for on-axis viewing, which in some cases reduces the system's hiding power in off-axis viewing directions. This tendency for reduced off-axis hiding power was effectively eliminated by adding a relatively weak diffusing layer 1532 (FIG. 69) just above the prism points. Without this extra diffusing layer, the single diffuser elevation had to be increased 4 mm before off-axis viewing was as featureless as on-axis viewing. Yet, with a weak diffusing sheet just above the prism points, no extra thickness was needed. Another perspective on the multi-layer diffuser's advantage over the standard approach was gained by removing the prism sheet from an otherwise well-adjusted configuration. When this was done, the bright and dark bands of non-uniformity reappeared quite strongly.
A stack of three standard diffusers was needed at this 13-14 mm spacing to restore featureless viewing. With the above discussions in mind, we already know enough to reason, qualitatively at least, why in principle the sum W+G2 for this hypothetical multi-level system can be less than the corresponding spacing, G1, for the standard diffusing approach alone -- with the brightness, B2, of the multi-level approach potentially higher as well. Suppose, for purposes of argument, that the emitter spacing in each case is made exactly equal to the emitting width. We know qualitatively that for a standard diffuser, the smaller the emitter's intrinsic peak-to-valley brightness ratio, the smaller the spacing G that is needed to smooth out the brightness variations to the level desired. We also know that the 90-degree prism sheet, acting by itself, reduces the worst brightness variations by filling in the dark spaces between the emitting regions with symmetrically disposed virtual images of the emitters themselves. And, we know that the closer these virtual images come to registering with each other without any gap or overlap, the better the resulting brightness ratio will be compared with what it was before this transformation. FIG. 68 has already established the focal depth of the virtual images of an 8.5 mm wide stripe as being approximately 7.3 mm below the prism plane, or about 0.86 times the emitting width. The spacing of the standard diffuser above the prism sheet, G2, then must only be enough so that, when added to the prism sheet's focal depth, FD, there is sufficient overall optical standoff to improve the already-improved brightness distribution to the standard adopted above.
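The thickness comparison above can be restated as simple arithmetic using the measurements quoted earlier in this section (28 mm single-diffuser elevation for complete featurelessness, 7.9 mm prism elevation, 21 mm multi-level total, and the roughly 0.86x focal-depth relation for a stripe emitter). This is a sketch restating the source's measured numbers, not general constants; the variable names are illustrative:

```python
# Measured values quoted above for the striped test emitter.
G1_single_diffuser = 28.0   # mm: single-diffuser elevation with no visual contrast
prism_elevation = 7.9       # mm: prism-sheet height minimizing BMAX/BMIN
multilevel_total = 21.0     # mm: measured total thickness of the multi-level stack

# Focal depth of the virtual images scales as ~0.86x the emitting width
# (from the 8.5 mm stripe example: ~7.3 mm below the prism plane).
emitting_width = 8.5        # mm
focal_depth = 0.86 * emitting_width
print(f"virtual-image focal depth ~ {focal_depth:.1f} mm")

# The diffuser standoff needed above the prism sheet is the remainder:
G2 = multilevel_total - prism_elevation
print(f"diffuser standoff above prism ~ {G2:.1f} mm")
print(f"thickness saved vs single diffuser ~ {G1_single_diffuser - multilevel_total:.1f} mm")
```

The ~13 mm standoff that falls out of this arithmetic matches the 13-14 mm spacing mentioned above.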
Since the prism sheet, properly elevated, achieves the same (or better) peak-to-valley brightness ratio as a standard diffuser, but does so at a significantly smaller physical standoff (W<<G), it follows that we can expect the total thickness (W+G2) of the multi-level system to be substantially less than the standard thickness G1. 8.2.6.2 Suppression of Variations in Virtual Image Overlap with Changes in Viewing Direction in Direct View Applications Generally, the optimum viewing axis for the illumination provided by elevated prism sheets is perpendicular to the base plane of the prisms. When the viewing direction shifts away from this axis, the preferable virtual image registration achieved with the prism elevation above the source plane deviates from what was intended. Such directionally sensitive viewing may have little or no importance in some illuminator applications of the present inventions, but can be visually distracting in other applications. A simple design adjustment is introduced with regard to emitter spacing that can be used in applications where the constancy of the illuminator's visual appearance is important, and where the inefficiency and thickness increases caused by adding additional diffusion layers are neither practical nor preferable. One particularly severe example of this directional variability occurs when the emitter spacing has been made exactly equal to the emitter width, and when the prism sheet elevation has been set for perfectly contiguous on-axis virtual image registration. As the viewing direction shifts further and further off-axis, the conditions of perfect image registration are compromised, and associated variations in brightness uniformity are introduced, as illustrated in the spatial brightness plot of FIG. 70 . In fact, this same kind of spatial brightness instability can be induced when the prism sheet is mispositioned, whether by human error, material tolerances, or thermal expansions due to changes in ambient temperature.
In any of these unintended circumstances, the maximum-to-minimum brightness ratio presented to the output diffuser may become no better than the intrinsic emitter's BMAX/BMIN ratio itself. In such cases, the elevated prism sheet method offers little uniformity advantage over a standard single-level diffuser. There is, however, a practical means for limiting the severity of such effects. The spacing between emitters is chosen deliberately to minimize the effects of image overlap, whatever the mitigating cause. The most effective spacings range ideally from those less than the emitter width down to those less than half the emitter width. The output uniformity with emitter spacing at half the emitter width is illustrated in FIG. 71 . It is the difference in brightness ratio between the examples of FIG. 70 and FIG. 71 that establishes a preference for narrower emitter spacings in applications calling for more directionally independent illuminator viewing. Other applications, such as the video projection systems of FIGS. 17-33 , or the indirect LCD backlighting applications of FIGS. 46-50 , both of which provide internal means of brightness homogenization and a single axis of output illumination, can use the elevated prism sheet illumination method without having to take any such extreme measures to stabilize off-axis viewing. This is equally true of general illumination applications in automotive head lighting ( FIG. 38 ), theatrical spot lighting ( FIG. 39 ), roadside lighting ( FIG. 40 ), and interior flood/task lighting ( FIGS. 51-52 ). Yet, for applications like classical LCD backlighting, traffic signals ( FIG. 41 ) and other directly viewed illuminators, including high mount stop lamps on the rear end of automobiles ( FIG.
10 ), the illuminator's visual appearance from any angle of view is considered an important aspect of its performance. By moving the emitters closer together than their physical width W, the virtual emitter images can be made to overlap in the gaps between emitters over some finite distance, which provides an effective tolerance against unintended shifts in image displacement, provided those shifts are not so extreme that the overlap is eliminated. Under such conditions, the brightness ratio, 2BMAX/(BMAX+BMIN), as in the example of FIG. 71 , remains constant despite even moderate shifts in the degree of image overlap (and viewing direction), such shifts affecting only the apparent spatial frequency of the overlap peaks. A conventional diffuser's hiding power is relatively insensitive to changes in spatial frequency, provided, as in this case, the brightness ratio remains invariant. For such situations, the hiding power of the multi-level system's output diffuser is made just sufficient to handle the reduced brightness variation, which will always be more than half that of the intrinsic emitter. This emitter spacing illustration also points out the importance of providing as much illumination between the intrinsic emitters as possible, such as, for example, by back reflector 50 in FIGS. 1 and 64 -66 , by layer 1531 in FIG. 69 , or by means of the fundamental recycling mechanism in prism sheets 58 and 60 and the diffusely reflective materials placed beneath them. The larger BMIN, the greater the prism sheet's effectiveness in reducing the intrinsic brightness variation. In the worst case, where the emitter has no effective emitting thickness and very little if any light is contributed between the emitters, BMIN will approach zero, and the resulting brightness ratio will approach 2:1.
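As a quick check of the limiting behavior just described, the overlap-region brightness ratio 2BMAX/(BMAX+BMIN) can be evaluated directly (a minimal sketch; the function name is illustrative):

```python
def overlap_brightness_ratio(b_max: float, b_min: float) -> float:
    """Peak-to-valley ratio after prism-sheet overlap: peaks occur where two
    displaced virtual images coincide; valleys sit at the filled-in gap level."""
    return 2.0 * b_max / (b_max + b_min)

# Worst case: no light between emitters (BMIN -> 0) gives the 2:1 limit.
print(overlap_brightness_ratio(1.0, 0.0))               # -> 2.0
# Appreciable inter-emitter emission pulls the ratio well below 2:1.
print(round(overlap_brightness_ratio(1.0, 0.5), 3))     # -> 1.333
```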
In most practical emitting arrays, cross coupling between individual emitters, and the reflectors below them, will generate appreciable emission between emitting areas, and the resulting brightness ratio internal to the multi-level diffuser will be considerably smaller than 2:1, leading to even more appealing applications. Provided the emitting array's intrinsic brightness ratio is substantially greater than 2:1, the prism-based multi-level configuration adds value over standard diffusers alone, and does so over a wide range of viewing directions. 8.2.6.3 Preferable Prism Sheet Elevation When Using Emitters Having Non-Uniform Emitting Apertures So far, the illustrations have centered on emitters (be they LED chips, LED cavities or emitting stripes and tubes) that provide (or can be made to provide) substantially uniform illumination over their entire emitting surface. Very few real-life emitters behave quite this ideally. When each emitter element in the array displays a significant brightness variation, such as might exist from its brighter center to its dimmer edges, that variation can have a strong effect on the uniformity established by the prism sheet, and must be accounted for in the choice of prism elevation and the associated image overlap it creates. As an example of this, consider the same intrinsic (one-dimensional) emitter array envisioned in FIG. 71 , but with brightness falloff at the edges instead of the previously assumed uniform emission. For illustrative purposes, it is assumed that the brightness varies as a sinusoid: B = B0 Sin K, where K is the appropriate spatial coordinate. Many real emitters actually show such reduced brightness towards their edges, where less flux is emitted for a variety of reasons, including the effects of total internal reflection and longer optical path lengths. The brightness variation that results from such non-uniformity is idealized in FIG.
72 , which shows the conceptual cross-section of two emitting stripes 1600 and 1602 , the emitters' brightness profiles 1604 and 1606 , and the four overlapping virtual images 1608 , 1610 , 1612 and 1614 created by elevating prism sheet 58 from the emitting surface. As above, the associated image displacements depend on and are controlled by the exact amount of prism elevation, as well as by the direction of view. Also shown in FIG. 72 is the composite brightness variation 1616 that results from the sum of these particular overlapping virtual images. This function has two critical brightness values 1618 and 1620 whose ratio determines the amount of hiding power that must be associated with the output diffuser (i.e. 1534 in FIG. 69 or 28 in FIG. 1 ). As the image displacement changes, for whatever reason, the composite function changes in response, causing corresponding changes to the effective brightness ratio. This sinusoidal situation can be represented analytically and generalized to any emitting array characterized by emitters of width W and spacing S, where, as the result of an offset prism sheet, there has been a peak-to-peak image displacement of Δ. The peak brightness of each virtual emitter image, B0, before any transmission loss (or gain) in the prism sheet, is half the brightness of the emitting object, BE, times a normalization factor K (B0 = KBE/2). For this example, which concerns a brightness ratio, we will not attempt to calculate the actual brightness levels resulting from K, as it will cancel out. Keeping this in mind, there are two image brightness functions and three key brightness levels in the prism-shifted composite brightness distribution. The two virtual image brightness functions, BVIM- and BVIM+, are given in equations 34 and 35 , where the parameter Δ is the center-to-center shift in millimeters between the two symmetric virtual images associated with each emitter.
The origin (x = 0) for these functions is taken as the left-hand edge of the left-hand emitter. One key brightness level is of course the peak brightness of any given virtual image, which is KBE/2. The two other key levels, BC and BG, are calculated respectively at the center point of one emitter channel (i.e. x = W/2) and at the center point of the gap between adjacent emitter channels (i.e. x = W + S/2). These two values are then calculated from equations 34 and 35 , as equations 36 and 37 (with the sine arguments expressed in degrees):

BVIM- = B0 Sin [ 180 ( x + Δ/2 ) / W ]     (34)

BVIM+ = B0 Sin [ 180 ( x - Δ/2 ) / W ]     (35)

BC = 2 B0 Sin [ 90 ( W + Δ ) / W ]     (36)

BG = 2 B0 Sin [ 90 ( 2W + S - Δ ) / W ]     (37)

The factor of two in equations 36 and 37 results because the brightness at any point on the composite brightness distribution is the sum of the two overlapping virtual image brightnesses at the same spatial coordinate x. Given this, the operative brightness ratio for any specific setting of W and S is the maximum value of B0, BC and BG divided by the minimum value of B0, BC and BG. Using these expressions, the brightness ratio can be explored as a function of the degree of virtual image separation, Δ, for selected sets of W and S. The reason for examining these relationships analytically is to demonstrate a basis establishing the importance of the specific emitter width-to-spacing ratio in obtaining preferable performance with a multi-level diffuser system of the present invention when direct viewing appearance is important. To illustrate this, we first let W = 8 mm and examine the corresponding brightness ratio versus image shift Δ via equations 34-37 for several emitter spacings S, as in the plots of FIG. 73 . For reference, dotted line 1622 is drawn on FIG. 73 corresponding to the brightness ratio of 2.0, which is taken as one possible standard for the minimum improvement contributed by the elevated prism sheet illuminating system.
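The three key levels and the resulting ratio can also be evaluated numerically, by summing the displaced sinusoidal virtual images directly rather than relying on the closed forms of equations 34-37. This is a sketch under the section's stated assumptions (sinusoidal emitter profile, images displaced by +/- Δ/2, brightness normalized so B0 = 1); all names are illustrative:

```python
import math

def image_brightness(x, center, W):
    """One virtual emitter image: sinusoidal over its width W, zero outside."""
    left = center - W / 2.0
    if left <= x <= left + W:
        return math.sin(math.pi * (x - left) / W)   # peak normalized to B0 = 1
    return 0.0

def composite(x, W, S, delta, n_emitters=5):
    """Sum of the two displaced virtual images (centers at +/- delta/2 about
    each emitter center) for a linear array of pitch W + S."""
    pitch = W + S
    total = 0.0
    for k in range(n_emitters):
        c = k * pitch + W / 2.0                     # k-th emitter center
        total += image_brightness(x, c - delta / 2.0, W)
        total += image_brightness(x, c + delta / 2.0, W)
    return total

def brightness_ratio(W, S, delta):
    """Max/min over the text's key points for an interior emitter:
    a displaced image peak, the channel center (BC), and the gap center (BG)."""
    pitch = W + S
    samples = [
        composite(pitch + W / 2.0 - delta / 2.0, W, S, delta),  # image peak
        composite(pitch + W / 2.0, W, S, delta),                # BC
        composite(pitch + W + S / 2.0, W, S, delta),            # BG
    ]
    lo = min(samples)
    return max(samples) / lo if lo > 1e-9 else float("inf")

# Illustrative sweep: 8 mm wide emitters, 4 mm gaps, several image shifts.
# The ratio is smallest for an intermediate shift and grows on either side.
for delta in (5.0, 6.0, 7.0):
    print(f"delta = {delta} mm -> ratio = {brightness_ratio(8.0, 4.0, delta):.2f}")
```

Because the ratio is sampled only at the three key points named in the text, this mirrors the analytic treatment rather than a full scan of the composite curve.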
In doing this, we also presume that only a single conventional output diffuser 1534 has been spaced G2+G3 mm from the prism sheet 58 , as in FIG. 69 , so that its view is featureless when the prism's brightness ratio is 2.0 or less. Then, when the emitter spacing is 4 mm (half the emitter's width), we see that the brightness ratio remains below 2.0 despite any change in image shift of up to 1.1 mm, or +/- 0.55 mm from the center point. If the emitter separation were only 0.5 mm more, 4.5 mm, the range of stability drops to 0.75 mm, or +/- 0.375 mm, almost a 50% reduction. Yet, when the emitter spacing is reduced by the same amount, to 3.5 mm, the associated range of stability rises about 30% to 1.45 mm, or +/- 0.725 mm. When the emitter spacing is halved to 2 mm, the range of stability rises to 2.3 mm, a gain of more than 100%. Reference to FIG. 75 shows that for W = 8 mm, the variation of the range of stability with emitter spacing is a roughly linear one. The same analysis is summarized for a 12 mm wide emitter in FIG. 74 . These results are compared with those of FIG. 73 in FIG. 75 , where the range of stability is summarized for both emitter widths as a function of the common ratio of emitter width to emitter spacing. This comparison shows that multi-level diffuser systems become more stable both as the emitting element is widened and as the gap between emitters is narrowed. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 illustrates a side view of a multi-layered light source panel optical system for a one-dimensional array of discrete surface emitting light channels (rods, tubes, plane surfaces) including an elevated prismatic light directing film and an elevated light scattering film arranged to provide uniform illumination for the image formed by a directly viewed spatial light modulator or other transparent image element. FIG.
2 illustrates a side view of a multi-layered light source panel optical system for a one-dimensional array of discrete surface emitting light channels (rods, tubes, plane surfaces) including elevated prismatic light directing films and light scattering films arranged on both sides of the light emitting array to provide uniform illumination to directly-viewed images formed on either side of the illuminator. FIG. 3 illustrates a side view of a multi-layered light source panel optical system for a two-dimensional array of discrete light emitting apertures separated from one another by diffusely reflecting, non-emitting regions beneath two elevated layers of prismatic light directing film oriented with orthogonal groove axes, covered by an elevated lens array having one lens element per emitting region and a light scattering layer. FIG. 4 illustrates a perspective view of a two-dimensional plane of emitting apertures and how the two properly-elevated prismatic light directing layers above them, as in FIG. 3 , form a virtual image plane with four nearly-contiguous virtual images related to any given emitting aperture. FIG. 5 illustrates a perspective view of a two-dimensional plane of emitting apertures and an example of the contiguous structure of a compartmentalized spacer layer between the emitters of FIG. 3 and the elevated light directing films placed above them, each compartment surrounding an emitting aperture with diffusely reflecting tapered sidewalls. FIG. 6 illustrates a perspective view of the forming tool used to form the compartmentalized spacer layer of FIG. 5 having tapered sidewalls. FIG.
7 illustrates a side view of a multi-layered light source panel optical system for a two-dimensional array of discrete light emitting apertures separated from one another by diffusely reflecting, non-emitting regions formed as a spacer layer of specific thickness beneath two so-elevated layers of prismatic light directing film oriented with their groove axes orthogonal, and covered by a light scattering layer. FIG. 8 illustrates side and perspective views of a multi-layered optical system for a two-dimensional array of discrete light emitting apertures separated from one another by non-emitting regions, the emitting regions aligned with input openings in a compartmentalized spacer layer having specularly-reflecting mathematically shaped sidewalls, this layer covered with polarization-selective multi-layers, including a quarter wave phase retardation film, a reflective polarizer film and a light scattering film. FIG. 9 illustrates, with reference to FIG. 8 , a side view of the sidewall shape of a hyperbolically sloping spacer layer and its mathematical relation to the light directing polarization selective layers covering its aperture, including the path taken by an illustrative input ray. FIG. 10 illustrates several perspective views of a practical usage for a two-dimensional array of commercially packaged light emitting diodes including one overlaying diffusive element per package, a transparent spacer layer of unique thickness, and two crossed prismatic light directing layers. FIG. 11 illustrates in several perspective views the form of a distributed multi-layer manufacturing process wherein a large number of multi-layered light source panel optical systems related to FIGS. 3 , 7 and 8 have been constructed repetitively over a very large area and continuous array that then can be sectioned into individually usable units, themselves combinable in a variety of useful ways. FIG.
12 illustrates, for two adjacent emitting regions, each set of four corresponding virtual emitter images and the geometric mathematical relationships allowing for their contiguous arrangement. FIG. 13 illustrates a cross-sectional side view and a representational bottom view of the packaging layers for a two-dimensional array of light emitting diode chips having two electrical contacts on the same side of the chip, the packaging layers including provision for inter-digital electrical interconnection, separate diffusely reflecting compartments surrounding each diode chip, a clear dielectric encapsulant within each compartment, and a light scattering over-layer. FIG. 14 illustrates a cross-sectional side view of packaging similar to that of FIG. 13 , but for light emitting diode chips requiring electrical interconnection to both sides of the chip, or to contacts on one side of the chip, that side having to face towards the primary direction of light emission. FIG. 15 illustrates a series of related cross-sectional side views of completed two-dimensional multi-layer light source panel illuminator packaging structures for the light emitting diode chip interconnections related to FIG. 13 and the elevated prismatic light directing layers of FIG. 7 . FIG. 16 illustrates two prior art dichroic film coated prism methods for the mixing of three uniquely colored light beams into a composite beam. FIG. 17 illustrates a side view of a practical integration of three separate mono-colored light source panel illuminators formed as in FIG. 15 , in a compact video projection system using three reflective LCDs by means of three reflective non-imaging angle transformers and a single dichroic color mixing prism cube. FIG. 18 A illustrates a detailed side view of a reflective single-colored non-imaging angle transformer for a reflective LCD that includes, in addition to the light source panel illuminator of FIG.
15 and its illustrated cross-section, a reflective LCD, a reflective polarizer film, a wideband quarter wave phase retardation film and a concave metallic reflecting surface, as well as a few illustrative ray paths. FIG. 18 B illustrates a detailed side view of a tandem single LCD variation of FIG. 18 A using one light source panel illuminator and two reflective non-imaging angle transformer stages with the first stage's output arranged as the second stage's input. FIG. 18 C illustrates schematic side views of a novel lens pair arrangement that transforms a light source panel's angular output in one meridian and does so in a form compatible with the reflective non-imaging angle transformer means of FIG. 18 A . FIG. 18 D illustrates short side and long side image meridian views of the single colored reflective non-imaging angle transformer shown in FIG. 18 B with the angle modifying means of FIG. 18 C integrated as a single cylindrical negative lens element and a biconic-shaped concave metallic reflecting surface. FIG. 19 illustrates a side view of a practical integration of three separate mono-colored light source panel illuminators formed as in FIG. 15 , in a compact video projection system using three reflective LCDs, the light source panels arranged on the input faces of a set of dichroic color mixing prisms, the reflective LCDs arranged on the input faces of a second set of dichroic color mixing prisms, the two color mixing system outputs arranged as adjacent inputs to a reflective non-imaging angle transformer as described in FIGS. 17-18 . FIG. 20 illustrates a side view of a variation on the practical video projection system of FIG. 17 using three reflective non-imaging angle transformers that each include a refractive lens element rather than a concave reflecting surface. FIG. 21 illustrates a side view of a variation on the reflective non-imaging angle transformer shown in FIG. 20 using two reflective LCDs rather than one. FIG.
22 illustrates a side view of a variation on the reflective non-imaging angle transformer of FIG. 21 using two light source panel illuminators rather than one, and both a positive lens and a concave reflecting surface. FIG. 23 illustrates a perspective view of the image inversion that takes place on the faces of a polarizing beam splitter such as is used in the reflective non-imaging angle transformers of FIGS. 17-22 . FIG. 24 illustrates a side view of a variation on the practical video projection systems of FIGS. 17 and 20 whose three single-colored reflective non-imaging angle transformers use transmissive LCDs rather than reflective ones. FIG. 25 illustrates a side view of a variation on the practical three transmissive LCD video projection system of FIG. 24 where the three single-colored non-imaging angle transformers each use a purely transmissive design, each featuring a positive lens between the light source panel and the transmissive LCD. FIG. 26 illustrates a side view of a single transmissive LCD variation on the three transmissive LCD video projection system of FIG. 24 that positions the single transmissive LCD on the output face of a set of color mixing prisms, arranged so that its input faces receive output light from three single-colored reflective non-imaging angle transformers made in the form of FIGS. 17-18 , but with a focal length extended so that the distances between the LCDs and light source panels are equalized. FIG. 27 illustrates a side view of a more compact variation on the single transmissive LCD video projection system of FIG. 26 . FIG. 28 illustrates a side view of a compact single transmissive LCD video projector system using the dichroic prism arrangement of single-colored light source panel illuminators of FIG. 19 as input to the reflective non-imaging angle transformer arrangement used in FIG. 27 . FIG.
29 illustrates a side view of a compact single transmissive LCD video projector system using three purely transmissive non-imaging angle transformers of the form used in FIG. 25 . FIG. 30 illustrates a side view of a compact single transmissive LCD video projector system using the dichroic prism arrangement of single-colored light source panel illuminators of FIG. 19 as input to a purely transmissive non-imaging angle transformer of the form used in FIGS. 25 and 29 . FIG. 31 illustrates a side view of a compact video projection system using a single DMD illuminated by the dichroic prism arrangement of single-colored light source panel illuminators shown in FIG. 19 , whose output is then input to a transmissive non-imaging angle transformer consisting of a positive lens and two transparent 90-degree prisms, one for input, another for output, that are coupled through an air gap between their hypotenuse faces. FIG. 32 illustrates a side view of a variation on the compact video projection system using a single DMD shown in FIG. 31 , wherein the reflecting plane of the DMD and the focal plane of the transmissive non-imaging angle transformer are made parallel by a tilt applied to the system's positive lens element. FIG. 33 illustrates a side view of one of the three single-colored dichroic prism-coupled light source panels in the video projection system of FIG. 32 and the tilted bi-convex lens pair used as part of the non-imaging angle transformer as a means of tilting the output focal plane. FIG. 34 illustrates a more detailed side view of the illustrative ray paths and geometric relations involved in the operation of the video projection system of FIG. 32 . FIG. 35 illustrates side and perspective views of a single-colored LED emitter and non-imaging angle transformer package, including its coupling to the input faces of a set of dichroic color mixing prisms. FIG. 36 is a conceptual generalization of the two-stage angle transforming systems used in the systems of FIGS. 17-35 . FIG.
37 A illustrates side, top and perspective views of the basic red, green and blue single-colored light source panel integrations with a cubic set of four dichroic color mixing prisms and the composite-color output beam so created. FIG. 37 B illustrates side and perspective views of the integration of red, green and blue single-colored light source panels with a three dichroic Philips prism color mixing arrangement. FIG. 38 illustrates perspective views of practical lighting applications of red, green and blue single-colored light source panel illuminators integrated with a color mixing system as shown in FIGS. 37 A and 37 B , plus a facetted output lens, a microprocessor and an electronic power controller to perform multiple automotive lighting functions. FIG. 39 illustrates perspective views of a practical theatrical or studio spot or flood lighting system based on an output lens and an array of color mixing elements, each containing separate red, green and blue light source panel illuminators, and their means of independent power control. FIG. 40 illustrates a variation on the color mixed light source panel lighting elements of FIGS. 38 and 39 within luminaires used for roadway and architectural illumination. FIG. 41 illustrates several perspective views of the use of three adjacent single-colored light source panels on a common mounting board, and a color mixed set, for the purpose of providing traffic signaling sources. FIG. 42 illustrates two side views of three single-colored light source panels integrated with a color mixing system as shown in FIGS. 37 A and 37 B , including a means of miniaturization. FIG. 43 illustrates two side views of three single-colored light source panels integrated with a color mixing system as shown in FIGS. 37 A and 37 B , including additional means of miniaturization. FIG. 44 illustrates two side views of three single-colored light source panels integrated with a color mixing system as shown in FIGS.
37 A and 37 B , including yet another means of miniaturization. FIG. 45 illustrates side views of a completely miniaturized version of the optical system of FIGS. 42-44 . FIG. 46 illustrates perspective views of the integration of various light source panels, with or without a color mixing system, and a non-imaging angle transformer element so as to emit a beam of widely diverging light. FIG. 47 illustrates perspective views of a combination of the illumination system of FIG. 46 with a long clear lightpipe element made with light scattering dots imprinted on at least one and no more than three of its four long faces. FIG. 48 illustrates perspective and cross-sectional views of the illuminator of FIG. 47 with its lightpipe section wrapped with a three sided reflector so as to output light from the lightpipe's long output face. FIG. 49 illustrates perspective views of the illumination system of FIG. 48 providing input light to an edge of a traditional dot-pattern backlight. FIG. 50 illustrates perspective and cross-sectional side views of a linear variation on the illumination system of FIG. 46 as a means to input uniformly mixed light to the edge of a traditional dot pattern backlight. FIG. 51 illustrates perspective and layout views of the practical use of six multi-colored light source panel illuminators of a form shown in FIG. 15 , arranged on the periphery of a rectangular ceiling support so as to provide uniform and efficient task or flood lighting to a work surface or workspace. FIG. 52 illustrates side and perspective views of several output lens variations on the light source panels as used with the task or flood lighting applications of FIG. 51 and also with the color mixed illuminators of FIGS. 37 A, 37 B, 38 , 40 , 41 , and 45 that expand the angular field coverage. FIG. 53 illustrates perspective and cross-sectional side views of a generalized prism array sheet, showing the prism element's geometrical relations. FIG.
54 illustrates perspective and cross-sectional side views of a generalized lenticular-like aspheric lens array sheet, showing the aspheric element's geometrical relations. FIG. 55 illustrates a side view of the left half of a prism element's cross-section, along with the trajectory of a single light ray emitted from a narrow emitting line P positioned beneath the prism's apex, a distance OFF from the prism base. FIG. 56 illustrates a cross-sectional view of four 90-degree micro prisms with paraxial rays from an underlying emitter, each of which would be transmitted as output but not seen by a viewer positioned directly above. FIG. 57 illustrates a cross-sectional view of a single 90-degree micro prism and a set of selected paraxial rays from one point P on an underlying line emitter that would be transmitted and seen by a viewer positioned directly above. FIG. 58 illustrates a cross-sectional view of four adjacent 90-degree micro prisms with a set of selected paraxial rays that undergo total internal reflection twice, once within the starting prism and then again within a neighboring prism. FIG. 59 illustrates a cross-sectional view of four adjacent 90-degree micro prisms with a set of selected paraxial rays that undergo total internal reflection twice, both times within the starting prism. FIG. 60 illustrates a cross-sectional view of a single 14 mm high by 28 mm wide 90-degree prism element and the set of paraxial rays that spread out from a narrow line emitter located just beneath the prism's base on a line with its apex and that pass through the prism material to an output plane placed just above the prism's apex in air. FIG. 61 illustrates cross-sectional and perspective views of the idealized virtual image separation that occurs when a single uniform emitting stripe of width W is viewed directly through a 90-degree prism array sheet elevated above the stripe a distance W. FIG.
62 illustrates an idealized cross-sectional view of the virtual image separations that occur when uniformly bright stripes of width W are viewed directly through a 90-degree prism array sheet, elevated above the plane of the stripes, as in FIG. 61 , a distance W. FIG. 63 illustrates a perspective view of the idealized virtual image formations and separations that occur, and the output beam that results, when two 90-degree prism sheets arranged with grooves running 90 degrees to each other are elevated above a square emitting aperture. FIG. 64 illustrates perspective and cross-sectional side views of a representative flat monolithic serpentine fluorescent lamp developed by Corning Inc., applied within the multi-layered elevated prism sheet configuration of FIG. 1 . FIG. 65 illustrates a cross-sectional view of the idealized virtual image separation that occurs when uniformly bright emitting cylinders of width W are viewed directly through the prism points of a 90-degree prism array sheet. FIG. 66 provides a more detailed cross-sectional analysis of the viewable paraxial rays that emit from the surface of a cylindrical source when viewed directly through the prism points of a 90-degree prism array sheet. FIG. 67 illustrates a cross-sectional view of the virtual image separation and focal plane depth in millimeters calculated for paraxial rays when a uniformly bright 8.5 mm emitting cylinder is viewed directly through the prism points of a 90-degree prism array sheet elevated 4.25 mm above the cylinder's vertex point. FIG. 68 illustrates a cross-sectional view of the virtual image separation and focal plane depth calculated for paraxial rays when a uniformly bright 8.5 mm wide stripe is viewed directly through the prism points of a 90-degree prism array sheet elevated 8.5 mm above the stripe's center-point. FIG.
69 illustrates generalized cross-sectional views that show the differences between the multi-layer diffuser system application of FIG 1 and its elevated prism-like layer viewed indirectly through one or more diffusely-scattering layers and a conventional elevated diffuser system.FIG. 70 illustrates one possible off axis brightness uniformity that arises in the multi-layer illumination systems of FIG. 1 when the prism sheet is elevated above the emitting plane a distance exactly equaling the width of emitters.FIG. 71 illustrates the general type of brightness uniformity created by the prism array in a multi-level illumination system of FIG. 1 when emitter separations are made about half the emitting width W, and prism elevation above the emitters is adjusted for image displacements of less than W/2.FIG. 72 illustrates the general type of brightness uniformity with the conditions of FIG. 71 , but where there is also an intrinsic brightness fall off near the emitting element edges.FIG. 73 shows the maximum to minimum brightness ratio as a function of the virtual image shift, Δ, associated with prism elevation for an array of 8 mm wide emitters, each showing a sinusoidal brightness falloff from center to edge, as a function of the spacing between emitters in the array.FIG. 74 shows the maximum to minimum brightness ratio as a function of virtual image shift, Δ, for an array of 12 mm wide emitters, each showing a sinusoidal brightness falloff from center to edge.Fig. 75 shows the range of stability in millimeters for output brightness smoothness in a multi-level illumination system with 8 mm and 12 mm wide emitters at various emitter width-to-spacing ratios between 1.5 and 5.
A radiation sensing structure includes red, green and blue photodiodes stacked above an infrared radiation sensing photodiode.
WHAT IS CLAIMED IS:

1. A sensing structure, comprising: a first junction formed at a first depth in a semiconductor substrate to sense infrared radiation; and a second junction formed at a second depth in the semiconductor substrate to sense visible radiation, the second depth being less deep than the first depth.

2. The sensing structure of claim 1, wherein the second junction at least partially overlies the first junction.

3. The sensing structure of claim 1, wherein the second junction is positioned to sense a red light component; and further comprising: a third junction formed at a third depth in the semiconductor substrate to sense a green light component; and a fourth junction formed at a fourth depth in the semiconductor substrate to sense a blue light component; the third depth being less deep than the second depth, and the fourth depth being less deep than the third depth.

4. The sensing structure of claim 3, wherein the third junction at least partially overlies the second junction, and the fourth junction at least partially overlies the third junction.

5. The sensing structure of claim 3, further comprising an infrared pass filter positioned above the first junction and below the second junction; the infrared pass filter having a filter characteristic to substantially attenuate visible radiation and to substantially pass infrared radiation.

6. The sensing structure of claim 5, further comprising an infrared-notch filter positioned above the fourth junction, and having a filter characteristic to substantially pass visible radiation and to substantially attenuate infrared radiation that has a wavelength that differs from a notch wavelength, and to substantially pass infrared radiation that has the notch wavelength.

7. The sensing structure of claim 6, wherein the notch wavelength is selected from the group consisting of 830 nm, 880 nm and 940 nm.

8. The sensing structure of claim 3, further comprising an infrared-notch filter positioned above the fourth junction, and having a filter characteristic to substantially pass visible radiation and to substantially attenuate infrared radiation that has a wavelength that differs from a notch wavelength, and to substantially pass infrared radiation that has the notch wavelength.

9. The sensing structure of claim 8, wherein the notch wavelength is selected from the group consisting of 830 nm, 880 nm and 940 nm.

10. The sensing structure of claim 1, wherein the semiconductor substrate includes silicon.

11. A pixel sensor, comprising: an infrared photodiode; a red photodiode at least partially superimposed on the infrared photodiode; a green photodiode at least partially superimposed on the red photodiode; and a blue photodiode at least partially superimposed on the green photodiode.

12. The pixel sensor of claim 11, further comprising an infrared-pass filter interposed between the infrared photodiode and the red photodiode.

13. The pixel sensor of claim 12, further comprising an infrared-notch filter superimposed on the blue photodiode.

14. The pixel sensor of claim 11, further comprising an infrared-notch filter superimposed on the blue photodiode.

15. A pixel sensor, comprising: a first infrared photodiode to detect infrared radiation of a first wavelength; a second infrared photodiode, at least partially superimposed on the first infrared photodiode, to detect infrared radiation of a second wavelength that is shorter than the first wavelength; a red photodiode at least partially superimposed on the second infrared photodiode; a green photodiode at least partially superimposed on the red photodiode; and a blue photodiode at least partially superimposed on the green photodiode.

16. The pixel sensor of claim 15, further comprising an infrared-pass filter interposed between the second infrared photodiode and the red photodiode.

17. The pixel sensor of claim 15, further comprising a dual-infrared-notch filter superimposed on the blue photodiode.

18. A pixel imaging array, comprising a matrix of rows and columns of sensor structures, each sensor structure including: an infrared photodiode; a red photodiode at least partially superimposed on the infrared photodiode; a green photodiode at least partially superimposed on the red photodiode; and a blue photodiode at least partially superimposed on the green photodiode.

19. The pixel imaging array of claim 18, further comprising an infrared-notch filter superimposed on the matrix of sensor structures.

20. The pixel imaging array of claim 18, further comprising a read circuit to generate respective electrical signals from each of the photodiodes.

21. A color and depth information camera, comprising: a pixel imaging array that includes a matrix of rows and columns of sensor structures, each sensor structure including: an infrared photodiode; a red photodiode at least partially superimposed on the infrared photodiode; a green photodiode at least partially superimposed on the red photodiode; and a blue photodiode at least partially superimposed on the green photodiode; an optical system to form an image on the pixel imaging array; a control circuit coupled to the pixel imaging array; and an infrared radiation source coupled to the control circuit.

22. The camera of claim 21, wherein the control circuit is operative to cause the infrared radiation source to emit an infrared pulse; and read the infrared photodiodes in timed relation to the infrared pulse to obtain depth information from a scene illuminated by the infrared pulse.

23. The camera of claim 21, further comprising a shutter coupled to the control circuit and disposed in front of the pixel imaging array; the control circuit operative to cause the infrared radiation source to emit an infrared pulse; open and close the shutter in timed relation to the infrared pulse; and read the infrared photodiodes to obtain depth information from a scene illuminated by the infrared pulse.

24. The camera of claim 21, wherein: the pixel imaging array includes an infrared-notch filter superimposed on the blue photodiodes, the filter having a characteristic to pass infrared radiation at a notch wavelength; and the infrared radiation source is operative to emit infrared radiation at the notch wavelength.

25. The camera of claim 24, wherein the notch wavelength is selected from the group consisting of 830 nm, 880 nm and 940 nm.
STACKED SEMICONDUCTOR RADIATION SENSORS HAVING COLOR COMPONENT AND INFRARED SENSING CAPABILITY

BACKGROUND

A variety of video special effects can be enhanced or made possible by using a video camera that captures depth information in addition to color component information. The "ZCam"™ product available from 3DV Systems, Santa Clara, California, is a module that can be added on to a conventional studio video camera to provide depth information for objects in a scene captured by the video camera. The ZCam add-on relies on a sensing array that is separate from the color sensing circuit and thus entails a high cost. It could also be contemplated to integrate depth information pixels in arrays of red, green and blue pixels, but this also entails additional costs. That is, a camera which included an array of red, green, blue and depth pixels would have lower pixel density, and hence lower resolution and higher cost, for a given number of color pixels. In addition, the alignment problem that is generally encountered with R, G, B arrays is exacerbated because the interpolation routine for turning spatially separate R, G, B pixels into a single "RGB" pixel must also contend with an additional pixel for depth information.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic cross-sectional view of a visible and infrared radiation sensing structure according to some embodiments.

FIG. 2 is a schematic cross-sectional view of a visible and infrared radiation sensing structure according to some other embodiments.

FIG. 3 is a schematic cross-sectional view of a visible and infrared radiation sensing structure according to still some other embodiments.

FIG. 4 is a diagram illustrating a pixel imaging array according to some embodiments.

FIG. 5 is a diagram illustrating a color and depth information camera according to some embodiments.

DETAILED DESCRIPTION

FIG.
1 is a schematic cross-sectional view of a sensing structure, in particular a stacked-diode visible and infrared radiation sensor, provided according to some embodiments. The sensing structure 10 is formed on a semiconductor substrate 12 of a first conductivity type, such as a silicon substrate of P-type conductivity. The substrate region 12 shown in FIG. 1 may, but need not, be a doped epi-layer on a silicon substrate. Overlying the region 12 is a region or layer 14 of second conductivity type, such as an N-doped region, to form a junction 16. The junction 16 is at a depth in the structure 10 that substantially corresponds to a peak absorption depth for infrared (IR) radiation in the structure 10. An IR photodiode 18 is accordingly formed at the junction 16.

A thin film optical filter 20 may be provided above the region 14. Filter 20 may be configured to substantially block or attenuate visible light wavelengths, while substantially passing at least some IR wavelengths. The filter 20 may therefore be referred to as an IR pass filter.

Overlying the filter 20, if present, is a region or layer 22 of the first conductivity type (e.g., a P-doped region), to form a junction 24. The junction 24 is at a depth in the structure 10 that substantially corresponds to a peak absorption depth for red light in the structure 10. A red photodiode 26 is accordingly formed at the junction 24. Overlying the region 22 is a region or layer 28 of the second conductivity type (e.g., an N-doped region), to form a junction 30. The junction 30 is at a depth in the structure 10 that substantially corresponds to a peak absorption depth for green light in the structure 10. A green photodiode 32 is accordingly formed at the junction 30. Overlying the region 28 is a region or layer 34 of the first conductivity type (e.g., a P-doped region), to form a junction 36. The junction 36 is at a depth in the structure 10 that substantially corresponds to a peak absorption depth for blue light in the structure 10. A blue photodiode 38 is accordingly formed at the junction 36.

In some embodiments, a thin film optical filter 40 may be provided above the region 34. The filter 40 may be configured to substantially pass light wavelengths in the visible band while substantially blocking or attenuating most or all IR radiation except for IR at and/or near a certain wavelength (the "notch wavelength"). The filter 40 substantially passes IR radiation that is at and/or near the notch wavelength, and may therefore be referred to as an IR notch filter.

The sensing structure thus includes R, G and B photodiodes stacked above an IR photodiode. The sensing structure shown in FIG. 1 may correspond to a single pixel in a pixel imaging array that, in some embodiments, may be used to capture both color and depth information from an image formed on the pixel imaging array. In some embodiments, the junction 36 of the blue photodiode 38 may be at a depth in the range of about 0.2 to 0.5 microns (e.g., at about 0.2 microns), the junction 30 of the green photodiode 32 may be at a depth in the range of about 0.5 to 1.5 microns (e.g., at about 0.6 microns), the junction 24 of the red photodiode 26 may be at a depth in the range of about 1.5 to 3.0 microns (e.g., at about 2.0 microns), and the junction 16 of the IR photodiode 18 may be at any suitable depth for capturing IR radiation. From the foregoing, it will be appreciated that FIG. 1 (like other similar drawings to follow) is not necessarily drawn to scale.

Instead of forming the regions 14, 22, 28 and 34 as, respectively, N-, P-, N- and P-doped regions on a P-substrate, the sensing structure may alternatively be formed by a stack of P-, N-, P-, and N-doped regions on an N substrate. As another alternative, schematically illustrated in FIG.
2, the substrate 12 may be of P-type and an additional N-doped region or layer 42 may be present, below the IR sensitive region 14, which may be P-doped. In such embodiments, the R, G, B sensitive regions 22, 28, 34 may respectively be N-, P- and N-doped to provide a stacked RGB sensor of the kind disclosed in U.S. Patent No. 5,965,875. An advantage of such an embodiment may be use of known triple well fabrication techniques of the kind described in the '875 patent in the formation of the stacked RGB sensor. In this embodiment, the junction 44 formed by the regions 42 and 14 may be allowed to remain inactive.

In other alternative embodiments, schematically illustrated in FIG. 3, a sensing structure 10a includes two IR sensitive photodiodes 18a and 46 stacked below the RGB photodiodes 26, 32, 38. More specifically, on a substrate of a first conductivity type, a layer or region 48 of a second conductivity type is formed to produce a junction 50 at a depth in the structure 10a that substantially corresponds to a peak absorption depth of a first IR wavelength in the structure 10a. The first IR photodiode 46 is accordingly formed at the junction 50. The next region 14a is of the first conductivity type to form a junction 16a at a depth in the structure 10a that substantially corresponds to a peak absorption depth of a second IR wavelength in the structure 10a, with the second IR wavelength being shorter than the first IR wavelength. Consequently, the second IR photodiode 18a is formed at the junction 16a. The regions 22, 28 and 34 may then be of the second conductivity type, the first conductivity type and the second conductivity type, respectively. For example, the substrate 12 may be P-type, the region 48 may be N-doped, the region 14a may be P-doped, and the regions 22, 28 and 34 may be N-doped, P-doped and N-doped respectively, as in the embodiment shown in FIG. 2.

It will be recognized that the second IR photodiode 18a is at least partially superimposed on the first IR photodiode 46, the red photodiode 26 is at least partially superimposed on the second IR photodiode 18a, the green photodiode 32 is at least partially superimposed on the red photodiode 26, and the blue photodiode 38 is at least partially superimposed on the green photodiode 32. Any one of the sensing structures illustrated in FIGS. 1-3 may be employed as a pixel sensor in a pixel imaging array as described below. In embodiments according to FIG. 3, the optical filter 40a superimposed at the top of the sensing structure 10a may be arranged as a "dual IR notch" filter. That is, the filter 40a may pass visible radiation and IR radiation at two notch wavelengths, while substantially blocking or attenuating other IR radiation.

FIG. 4 is a schematic plan view of a pixel imaging array 52 provided according to some embodiments. The array 52 includes pixel sensors 54 arranged in rows and columns. The pixel sensors 54 may each include a sensing structure of the type described above in connection with FIG. 1, or of the type described above in connection with FIG. 2, or of the type described above in connection with FIG. 3. Although the pixel sensors 54 are shown for the sake of illustration as forming only four rows and eight columns, it should be understood that, in some embodiments, a pixel imaging array may include hundreds or thousands of rows and hundreds or thousands of columns of pixel sensors. The pixel imaging array 52 also includes a read circuit 56 which is associated with the pixel sensors 54 to generate and read out color and depth signals from the respective sensing structures of the pixel sensors.
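The stacking order described above can be captured in a short sketch. This is an illustrative model only, not code from any actual sensor design: the depths in microns follow the example values given for the FIG. 1 structure, and the IR depth is an assumed placeholder (the text specifies only "any suitable depth").

```python
# Illustrative model of the stacked-junction ordering described above.
# Depths (microns) follow the example values given for FIG. 1; the
# infrared depth of 3.5 is an assumption for illustration only.
STACK = [
    ("blue", 0.2),
    ("green", 0.6),
    ("red", 2.0),
    ("infrared", 3.5),  # assumed value; "any suitable depth" in the text
]

def is_properly_stacked(stack):
    """Check that each longer-wavelength junction lies deeper than the
    one above it, mirroring absorption depth increasing with wavelength."""
    depths = [depth for _, depth in stack]
    return all(upper < lower for upper, lower in zip(depths, depths[1:]))
```

For the FIG. 3 variant, a second, deeper IR entry would simply be appended to the list, and the same monotonic-depth check would apply.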
Although the read circuit 56 is shown as separate from the pixel sensors 54, it should be understood that in accordance with conventional practices portions of the read circuit 56 may be intermingled with the pixel sensors 54 to form so-called "active pixels". Each active pixel may comprise red, green, blue and IR photodiodes and transistors or other circuit elements (not shown) that are associated with each of the photodiodes and were formed on the substrate at the same time as the diodes. Examples of active RGB pixels are shown in U.S. Patent No. 5,965,875, which was mentioned above.

FIG. 5 is a diagram that schematically illustrates a camera 58 according to some embodiments. The camera 58 incorporates a pixel imaging array 52 that may be of the type shown in FIG. 4. The camera 58 may also include a housing 59. The pixel imaging array 52 may be mounted in the housing 59, which may support an optical system 60 configured to form an image of visible and IR radiation on the pixel imaging array 52. The camera 58 may also include a control/read circuit 62 which is coupled to the pixel imaging array 52. The control/read circuit 62 may be considered to include the read circuit 56 (FIG. 4) associated with the pixel imaging array 52.

Continuing to refer to FIG. 5, the camera 58 also includes an IR emitter 64 which is coupled to the control/read circuit 62 and serves as an IR radiation source. The IR emitter 64 may include, for example, one or more IR LEDs, which are not separately shown. In some embodiments, the IR emitter may be selected to emit IR radiation at a single wavelength such as 830 nm, 880 nm or 940 nm. These wavelengths of IR radiation tend to be absent from operating environments because of absorption by ambient water vapor in the atmosphere, and therefore are suitable for use in IR communication and other applications in which it is desirable to avoid interference from ambient IR radiation. Emitting devices which operate at one of these wavelengths are widely commercially available. The particular wavelength emitted by the IR emitter 64 may correspond to a notch wavelength of an IR notch filter 40 (FIG. 1) which is part of the pixel imaging array 52 of the camera 58.

In some embodiments, the camera 58 may also include a shutter 66 (shown in phantom), such as a gallium arsenide shutter, disposed within the housing 59 in the optical axis of the camera 58 between the optical system 60 and the pixel imaging array 52. (Although not indicated in the drawing, the optical system 60 may also be coupled to and under the control of the control/read circuit 62.)

The control/read circuit 62 operates to control the camera 58, and particularly the pixel imaging array 52 and the IR emitter 64, to generate a color video signal as well as depth information. The color video signal may be generated in the form of frames at regular frame intervals such as once every 1/30 of a second. The color video signal may be generated by reading the RGB photodiodes of the pixel sensors at the frame intervals. In the time periods in between generation of the color video frames, the control/read circuit 62 may control the IR emitter 64 to emit one or more pulses of single wavelength IR radiation to illuminate a scene captured by the optical system 60. The control/read circuit 62 may also control and read the pixel imaging array 52 (and may also control the shutter 66, if present) in timed relation with the pulses emitted from the IR emitter to generate depth information for the scene based on stimulation of the IR photodiodes of the pixel imaging array 52 by single wavelength IR radiation reflected by the scene from the IR pulses emitted by the IR emitter 64.
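The interleaving of color frames with IR pulses in the gaps between frames can be sketched as a simple timeline generator. This is a hedged illustration only: the frame interval, the number of pulses per gap, and all function names are assumptions for the sketch, not part of any actual camera firmware.

```python
def capture_timeline(num_frames, frame_interval_s=1 / 30, ir_pulses_per_gap=1):
    """Return a list of ("color", t) and ("ir_pulse", t) events.

    A color frame is read at each frame boundary; IR pulses are placed
    evenly inside the gap before the next frame, as described above.
    """
    events = []
    for n in range(num_frames):
        t_frame = n * frame_interval_s
        events.append(("color", t_frame))
        # IR depth pulses fall between this color frame and the next one
        for k in range(ir_pulses_per_gap):
            t_pulse = t_frame + (k + 1) * frame_interval_s / (ir_pulses_per_gap + 1)
            events.append(("ir_pulse", t_pulse))
    return events
```

With the defaults, each 1/30 s frame interval carries one color read followed by one IR pulse halfway to the next frame.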
The operation of the camera 58 to generate depth information may be generally in accordance with conventional principles such as are employed in the "ZCam" product (or similar camera that senses depth through active lighting) referred to above, although the camera 58 differs from the ZCam by collecting reflected IR radiation from the scene by using IR photodiodes stacked with color photodiodes as described in connection with FIGS. 1-3. By contrast, the ZCam utilizes an IR sensing array that is separate from the color sensing array of a camera on which the ZCam is installed. In any event, operation of the camera 58 to generate depth information will now be briefly described. (Although the ZCam is discussed herein for the sake of concreteness, it will be understood by those who are skilled in the art that the stacked color and depth sensing structure disclosed herein may be applied in any camera that employs active lighting to sense depth.)

Operation of the camera to generate depth data in accordance with some embodiments relies on precise detection of the timing at which an IR pulse is reflected from the scene to the pixel imaging array. The length (elapsed time) of the pulse may be precisely controlled such that the distance interval for which depth information is to be found corresponds to half the distance traveled by the illuminating radiation (i.e., the single wavelength IR radiation) during the duration of the pulse. The result is a "radiation wall" that has double the thickness of the distance interval for which depth information is to be found. The distance interval may be considered to be defined between a near distance and a far distance. Depth information is to be generated based on IR radiation from the pulse that is reflected from the scene to the camera. The reflected IR radiation is collected by the IR photodiodes of the pixel imaging array 52 during a "reading window".

The timing of the reading window is defined either by operation of the shutter 66 (if present) or through electronic control of the timing of the reading process via the control/read circuit 62. If the reading window is to be defined by electronic control of the reading process, there may be associated with each IR photodiode suitable circuitry to allow charges generated by the IR photodiode to be shunted through another diode to a storage area. The latter method of defining the reading window may employ high speed switching but may be more sensitive to noise than controlling a shutter to define the reading window.

Given the near distance for the distance interval and the length of the IR pulse emitted by the IR emitter 64, the starting point in time for the reading window may be defined as occurring at the point at which the leading edge of the emitted IR pulse could have returned to the camera if reflected at the near distance, and the duration of the reading window may be defined as half the duration of the emitted IR pulse. Reading of the IR photodiodes, whether controlled by shutter or by electronic switching, occurs only during the reading window.

Depth information is obtained for each pixel by comparing an amount of current integrated at the pixel based on the received IR radiation with a normalization amount for the pixel. Normalization is required to account for differences in absorption/reflection of the illuminating radiation among various portions of the scene. The normalization amount for each pixel may be obtained from a prior or subsequent IR pulse for which there is full integration (e.g., over a reading window of at least double the duration of the previously described reading window) of the currents from the IR photodiodes. Numerical depth data for each pixel may be generated by analog-to-digital converting the integrated and normalized depth information signal obtained for the pixel.
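The timing arithmetic described above can be sketched directly: the reading window opens after the round-trip time to the near distance, lasts half the pulse duration, and the probed distance interval is half the distance the radiation travels during one pulse. This is a minimal sketch; the function names are illustrative assumptions, and the speed of light in air is approximated by the vacuum value.

```python
C = 299_792_458.0  # speed of light, m/s (vacuum value used as approximation)

def reading_window(near_distance_m, pulse_duration_s):
    """Reading window opens when the pulse's leading edge could have
    returned from the near distance; it lasts half the pulse duration."""
    start = 2.0 * near_distance_m / C  # round-trip time to near distance
    return start, pulse_duration_s / 2.0

def interval_thickness(pulse_duration_s):
    """The probed distance interval is half the distance traveled during
    one pulse; the "radiation wall" is twice this thickness."""
    return C * pulse_duration_s / 2.0

def normalized_depth(integrated_current, full_integration_current):
    """Per-pixel normalization against a fully integrated reference
    pulse; higher values correspond to nearer objects."""
    if full_integration_current == 0.0:
        return 0.0
    return integrated_current / full_integration_current
```

For example, a 20 ns pulse probes an interval roughly 3 m thick, and for a near distance of 3 m the window opens about 20 ns after pulse emission.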
With this process, higher depth data values are obtained for pixels that correspond to nearer objects in the distance interval. The depth information may be displayed as a gray scale image in which nearer objects appear brighter than more distant objects. The depth information may be employed for depth-keying to allow for image segmentation, object isolation and insertion, and similar special effects, as is conventionally done utilizing the ZCam product. However, because the image sensors described herein include an integrated IR sensing capability, a combined color-depth camera may be provided at lower cost than the conventional combination of a studio video camera with ZCam add-on. Similarly, the stacked color and depth information sensor disclosed herein is cheaper and more accurate than other cameras that employ active lighting to sense depth, because adding separate depth detection pixels to an R, G, B array spatially degrades the existing R, G, B pixel pattern. Stacking the depth pixel with the red, green and blue pixels as disclosed herein saves space, and hence cost, and also avoids the alignment problems that arise in an array of separate R, G, B plus depth pixels.

The image sensors illustrated in FIGS. 1-3 therefore may facilitate integration of depth sensing into low cost consumer and amateur camcorders or other low cost video camera devices. For example, an image sensor with IR capture capability according to one of FIGS. 1-3 may be included, with a suitable IR emitter, in a low cost video camera employed as an input device for a personal computer, to aid the personal computer in performing functions such as audio-visual speech recognition and/or gesture recognition. As another application of such a video camera, gesture recognition may be employed for control of a video game. In the gaming environment, a camera like that of FIG. 5 may also be used for so-called "avatar skinning" in which the player's facial expressions are mapped to a character in the virtual game world.

Depth data provided by a camera like that shown in FIG. 5 may also be employed to improve image data compression in camcorders or for video conferencing. For example, the depth information may be used to distinguish between foreground objects (e.g., people) and background, so that the foreground objects may be coded with high fidelity and the background may be coded with low fidelity, or even omitted from transmission in the case of video conferencing. In biometric applications, a camera like that illustrated in FIG. 5 may be used to implement face recognition.

The sensors illustrated in FIGS. 1-3 may be modified to replace the RGB photodiodes stacked above the IR photodiode or photodiodes with a single visible radiation photodiode stacked above the IR photodiode or photodiodes. Such sensors could be used to produce a visible radiation gray-scale image plus depth information. According to another possible modification of the sensors of FIGS. 1-3, the IR photodiodes need not have the same spatial resolution as the RGB photodiodes. For example, the IR photodiodes may have a lower spatial resolution than the RGB photodiodes by having the area of one IR photodiode generally correspond to the area of a two-by-two subarray of RGB photodiodes.

If the sensor structure shown in FIG. 3 (having two IR photodiodes stacked below RGB photodiodes) were utilized in the camera of FIG. 5, the IR emitter 64 may be modified so as to emit two different IR wavelengths in respective pulses. The two IR wavelengths emitted by the IR emitter may be selected to correspond to respective IR wavelengths to which the two IR photodiodes are sensitive. For example, the IR photodiodes 46 and 18a (referring to FIG. 3) may be respectively sensitive to 940 nm IR radiation and to 830 nm IR radiation.
In that case, the IR emitter 64 may be arranged to emit pulses of 940 nm IR radiation and also to emit pulses of 830 nm IR radiation. Using the 940 nm IR pulses and the photodiodes 46, a suitable pulse length and reading window may be used to detect depth information in a first distance interval. Using the 830 nm IR pulses and the photodiodes 18a, a suitable pulse length and reading window may be used to detect depth information in a second distance interval that is different from the first distance interval. For example, the second distance interval may adjoin the first distance interval immediately behind the first distance interval. It will be understood that respective normalization procedures may be carried out for both the 940 nm and 830 nm depth detection functions.

In previous discussions herein of depth detection operations, it was assumed that depth information was gathered during a single reading window, subject to normalization. Alternatively, signals may be integrated over two or more reading windows (each window being defined after a respective pulse), to increase the dynamic range of the depth detection function of the camera. In other alternative embodiments, depth detection may be performed for different distance intervals using pulses of a single IR wavelength. For this purpose different reading windows may be defined after respective pulses, which may differ in pulse length. Depth detection for two or more different distance intervals may be performed in a single interval between capture of succeeding color information frames, whether a single IR wavelength, or respective pulses of different wavelengths, are employed for depth detection in the different distance intervals.

In some embodiments, pixel imaging arrays having RGB photodiodes stacked above each IR photodiode may be distributed without the IR notch filter 40. A suitable thin film optical filter having a desired notch wavelength may be formed on the pixel imaging arrays after distribution when it is determined what illuminating IR wavelength is to be used. In other words, subsequent to initial distribution, pixel imaging arrays may be customized with a suitable IR notch filter which matches the wavelength of IR illumination selected to be used with the pixel imaging arrays.

Although the camera of FIG. 5 has been described in connection with IR illumination using only one or two IR wavelengths, it is alternatively possible to use broader band IR illumination, particularly in indoor or other controlled environments in which ambient IR radiation is minimized or is not likely to interfere with depth detection and color imaging operation of the camera. In such cases the IR notch filter may be omitted. In other alternative embodiments of the sensor structures of FIGS. 1-3, the IR pass filter 20 may be omitted if a sufficient proportion of visible radiation is absorbed in the layers 34, 28 and 22 such that not enough visible radiation reaches the IR sensing layer or layers to hinder accurate IR sensing.

The several embodiments described herein are solely for the purpose of illustration. The various features described herein need not all be used together, and any one or more of those features may be incorporated in a single embodiment. Therefore, persons skilled in the art will recognize from this description that other embodiments may be practiced with various modifications and alterations.
A transistor architecture utilizes a raised source and drain region to reduce the adverse effects of germanium on silicide regions. Epitaxial growth can form a semiconductor region above the source and drain that is subsequently silicided. The protocol can utilize any number of silicidation processes. The protocol allows better silicidation in SMOS devices.
1. A method of manufacturing an integrated circuit, the method comprising:providing a gate structure between a first source location and a first drain location above a semiconductor substrate, the substrate including a strained layer provided above a germanium-containing layer; providing an anti-reflective material above the gate structure; etching the substrate to remove the strained layer at the first source location and the first drain location to form a recessed source location and a recessed drain location, the strained layer remaining beneath the gate structure, the anti-reflective material remaining above the gate structure during the step of etching the substrate; selectively providing a semiconductor material above a top surface of the substrate above the recessed source location and the recessed drain location; and siliciding the semiconductor material. 2. The method of claim 1, further comprising:providing the semiconductor material above a gate conductor provided in the gate structure. 3. The method of claim 2, further comprising:siliciding the semiconductor material above the gate conductor. 4. The method of claim 1, wherein the step of siliciding the semiconductor material comprises depositing a layer of material, the layer of material comprising at least one of nickel, cobalt, and tungsten.5. The method of claim 3, wherein the selectively providing step is a silicon epitaxial growth step.6. The method of claim 1, wherein the semiconductor material is 200-400 Angstroms thick.7. The method of claim 6, further comprising:providing an anti-reflective coating layer above a gate conductor of the gate structure before the providing a gate structure step. 8. The method of claim 7, further comprising:removing the anti-reflective coating after the etching step. 9. The method of claim 1, wherein the etching step uses a KOH etching process.10. The method of claim 1, wherein the strained layer is above a silicon germanium layer.11. 
A method of manufacturing an ultra-large scale integrated circuit including a transistor, the method comprising steps of:forming at least part of a gate structure on a top surface of a semiconductor substrate, the semiconductor substrate including a strained semiconductor layer above a silicon germanium layer, the gate structure including a bottom anti-reflective coating above a polysilicon gate conductor; removing exposed portions of the strained semiconductor layer to reach the silicon germanium layer while the bottom anti-reflective coating is above the polysilicon gate conductor; removing the bottom anti-reflective coating after the step of removing the exposed portions of the strained semiconductor layer; growing a semiconductor layer above the silicon germanium layer in the location of the removed exposed portions and above the gate conductor; and siliciding the semiconductor layer. 12. The method of claim 11, wherein the siliciding comprises providing a metal layer above the semiconductor layer and annealing.13. The method of claim 11, wherein the strained semiconductor layer is approximately 200 Angstroms thick.14. The method of claim 11, wherein the semiconductor layer is grown to a thickness of 200-400 Angstroms.15. The method of claim 14, wherein the gate structure includes spacers.16. The method of claim 15, wherein the step of removing exposed portions of the strained semiconductor layer uses a wet etch.17. 
A process of forming a transistor with a strained channel, an elevated source region, and an elevated drain region, the process comprising steps of:forming a gate structure on a substrate including a strained layer provided above a silicon germanium layer; forming a BARC layer above the gate structure; removing the strained layer from a source location and a drain location while the BARC layer remains above the gate structure, thereby leaving the strained channel underneath the gate structure; using selective epitaxial growth to provide semiconductor material at the source location and the drain location to form the elevated source region and the elevated drain region; and siliciding the elevated source region and the elevated drain region. 18. The process of claim 17, wherein the semiconductor material includes silicon.19. The process of claim 18, further comprising:removing the BARC layer after the step of removing the strained layer. 20. The process of claim 18, further comprising:providing the semiconductor material above the gate structure. 21. The method of claim 1, wherein the step of selectively providing a semiconductor material comprises growing the semiconductor material by epitaxy.22. The method of claim 11, wherein the step of growing a semiconductor layer above the silicon germanium layer comprises selective epitaxial growth of single crystal silicon.
FIELD OF INVENTIONThe present invention relates generally to integrated circuit (IC) fabrication. More particularly, the present invention relates to a design for and a method of improving silicidation of an IC substrate containing germanium.BACKGROUND OF THE INVENTIONStrained-silicon MOS (SMOS) processes are utilized to increase transistor (MOSFET) performance by increasing the carrier mobility of silicon, thereby reducing resistance and power consumption and increasing drive current, frequency response and operating speed. Strained silicon is typically formed by growing a layer of silicon on a silicon germanium substrate or layer. Germanium can also be implanted, deposited, or otherwise provided to silicon layers to change the lattice structure of the silicon and increase carrier mobility.The silicon germanium lattice associated with the germanium substrate is generally more widely spaced than a pure silicon lattice, with spacing becoming wider with a higher percentage of germanium. Because the silicon lattice aligns with the larger silicon germanium lattice, a tensile strain is created in the silicon layer. The silicon atoms are essentially pulled apart from one another. Relaxed silicon has a conduction band that contains six equal valence bands. The application of tensile strain to the silicon causes four of the valence bands to increase in energy and two of the valence bands to decrease in energy. As a result of quantum effects, electrons effectively weigh 30 percent less when passing through the lower energy bands. Thus, lower energy bands offer less resistance to electron flow.In addition, electrons meet with less vibrational energy from the nucleus of the silicon atom, which causes them to scatter at a rate of 500 to 1,000 times less than in relaxed silicon. As a result, carrier mobility is dramatically increased in strained silicon compared to relaxed silicon, providing an increase in mobility of 80 percent or more for electrons and 20 percent or more for holes. 
The increase in mobility has been found to persist for current fields up to 1.5 megavolts/centimeter. These factors are believed to enable a device speed increase of 35 percent without further reduction of device size, or a 25 percent reduction in power consumption without reduction in performance.High levels of germanium at the surface of a wafer can adversely affect the formation of silicide layers. In particular, a high concentration of germanium in a top surface of a substrate can adversely affect the formation of silicide layers above the source and drain regions. The germanium concentration at the top surface can be exacerbated by the processing associated with source and drain region and gate structure formation.Silicidation of strained silicon or germanium-containing layers can be difficult. For example, the presence of germanium in a silicon layer can cause germanosilicides to form during the silicidation process. Germanosilicides negatively impact the formation of a silicide region.After pre-cleaning native oxides from a top surface of the wafer, a metal can be deposited. The metal layer can be reacted with the semiconductor surface of the wafer to form a metal silicide (MexSiy) region such as a titanium silicide layer, a nickel silicide layer, a cobalt silicide layer, etc. The pre-cleaning process can cause germanium contamination due to resputtering.Thus, there is a need for an efficient process for forming silicide regions on a wafer surface in an SMOS process. Further, there is a need for a system and a method which reduce germanium contamination of silicide regions. Even further, there is a need for a method of siliciding and a transistor architecture which avoid germanosilicides. Yet further, there is a need for a process which reduces the adverse effects of germanium on silicidation processes.SUMMARY OF THE INVENTIONAn exemplary embodiment relates to a method of manufacturing an integrated circuit. 
The method includes providing a gate structure between a first source location and a first drain location above a semiconductor substrate. The substrate includes a strained layer. The method also includes etching the substrate to remove the strained layer at the first source location and at the first drain location to form a recessed source location and a recessed drain location. The strained layer remains beneath the gate structure. The method also includes selectively providing a semiconductor material above a top surface of the substrate above the recessed source location and the recessed drain location, and siliciding the semiconductor material.Another exemplary embodiment relates to a method of manufacturing an ultra-large scale integrated circuit including a transistor. The method includes steps of forming at least part of a gate structure on a top surface of a semiconductor substrate. The semiconductor substrate includes a strained silicon layer above a silicon germanium layer. The gate structure includes a bottom anti-reflective coating above a polysilicon gate conductor. The method also includes steps of removing exposed portions of the strained silicon layer to reach the silicon germanium layer, removing the bottom anti-reflective coating, growing a silicon layer above the exposed portions and above the gate conductor, and siliciding the silicon layer.Yet another exemplary embodiment relates to a process of forming a transistor with a strained channel, an elevated source region and an elevated drain region. The process includes steps of forming a gate structure on a substrate including a strained layer and removing the strained layer from a source location and a drain location, thereby leaving the strained channel underneath the gate structure. 
The process further includes steps of using selective epitaxial growth to provide silicon material at the source location and the drain location to form the elevated source region and the elevated drain region and siliciding the elevated source region and the elevated drain region.BRIEF DESCRIPTION OF THE DRAWINGSExemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and:FIG. 1 is a flow diagram showing a fabrication process for a germanium-containing substrate in accordance with an exemplary embodiment;FIG. 2 is a schematic cross-sectional view of a portion of the substrate used in the process illustrated in FIG. 1, the IC substrate including a lithographic feature provided above a gate stack that is above a strained silicon layer and a silicon germanium substrate;FIG. 3 is a cross-sectional view of the portion illustrated in FIG. 2, showing a gate structure formation step;FIG. 4 is a cross-sectional view of the portion illustrated in FIG. 3, showing an etching step;FIG. 5 is a cross-sectional view of the portion illustrated in FIG. 4, showing a coating removal step;FIG. 6 is a cross-sectional view of the portion illustrated in FIG. 5, showing a selective epitaxy step;FIG. 7 is a cross-sectional view of the portion illustrated in FIG. 6, showing a metal deposition; andFIG. 8 is a cross-sectional view of the portion illustrated in FIG. 7, showing a silicidation step.DETAILED DESCRIPTION OF PREFERRED AND EXEMPLARY EMBODIMENTSFIGS. 1 through 8 illustrate a method of manufacturing an integrated circuit (IC) in accordance with an exemplary embodiment. The method and IC structure illustrated in FIGS. 1 through 8 reduce the formation of germanosilicides during silicidation. The process includes at least one epitaxial step and can be used as a part of any process requiring silicidation. 
Advantageously, germanium associated with silicon germanium substrates and strained silicon layers does not significantly degrade the formation of silicide regions on the IC substrate.Referring to FIGS. 2 through 8, a cross-sectional view of a portion 12 of an integrated circuit (IC) is illustrated. Portion 12 (FIG. 2) is subjected to process 100 (FIG. 1) to form an IC. The IC can include a transistor with a gate structure and a silicided source and drain region as explained below. Germanium contamination of silicides can be reduced through an advantageous process and transistor architecture. The architecture uses raised or elevated source and drain regions to prevent germanium from adversely affecting silicidation.In FIG. 2, portion 12 includes a strained silicon layer 16 provided over a semiconductor substrate 14 or a germanium-containing layer or substrate. Substrate 14 can be provided above a substrate 13.Substrate 13 is optional, and portion 12 can be provided with substrate 14 as the bottom-most layer. Substrate 13 can be the same material or a different material than substrate 14. In one embodiment, substrate 13 is a semiconductor substrate such as a silicon substrate upon which silicon germanium substrate 14 has been grown. In another embodiment, substrates 13 and 14 are not included and the substrate is comprised of layer 16. In such an embodiment, layer 16 can be a silicon germanium substrate or a strained silicon substrate.Portion 12 can be any type of semiconductor device, or portion thereof, made from any of the various semiconductor processes such as a complementary metal oxide semiconductor (CMOS) process, a bipolar process, or another semiconductor process. Portion 12 may be an entire IC or a portion of an IC including a multitude of electronic component portions.Substrate 14 is preferably silicon germanium or another semiconductor material including germanium, and can be doped with P-type dopants or N-type dopants. 
Substrate 14 can be an epitaxial layer provided on a semiconductor or an insulative base, such as substrate 13. Furthermore, substrate 14 is preferably a composition of silicon germanium (Si1-xGex, where x is approximately 0.2 and is more generally in the range of 0.1-0.4). Substrate 14 can be grown or deposited.In one embodiment, substrate 14 is grown above substrate 13 by chemical vapor deposition (CVD) using disilane (Si2H6) and germane (GeH4) as source gases with a substrate temperature of approximately 650° C., a disilane partial pressure of approximately 30 mPa and a germane partial pressure of approximately 60 mPa. Growth of silicon germanium material may be initiated using these ratios, or, alternatively, the partial pressure of germane may be gradually increased beginning from a lower pressure or zero pressure to form a gradient composition. Alternatively, a silicon layer can be doped by ion implantation with germanium, or other processes can be utilized to form substrate 14. Preferably, substrate 14 is grown by epitaxy to a thickness of less than approximately 5000 Å (and preferably between approximately 1500 Å and 4000 Å).A strained silicon layer 16 is formed above substrate 14 by an epitaxial process. Preferably, layer 16 is grown by CVD at a temperature of approximately 600° C. Layer 16 can be a pure silicon layer and may have a thickness of between approximately 50 and 150 Å.The substrate for portion 12 can be a semiconductor substrate such as silicon, gallium arsenide, germanium, or another substrate material. The substrate can include one or more layers of material and/or features such as lines, interconnects, vias, doped portions, etc., and can further include devices such as transistors, microactuators, microsensors, capacitors, resistors, diodes, etc. The substrate can be an entire IC wafer or part of an IC wafer. 
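For a rough sense of the strain magnitudes implied by the Si1-xGex compositions above, the following sketch applies Vegard's law (a linear interpolation of the alloy lattice constant, an approximation that is not part of this disclosure) to estimate the tensile strain in a pseudomorphic silicon layer grown on relaxed Si1-xGex:

```python
# Illustrative estimate only; room-temperature lattice constants from
# standard references, not from the patent text.
A_SI = 5.431  # Angstroms, relaxed silicon
A_GE = 5.658  # Angstroms, relaxed germanium

def si_strain_on_sige(x):
    """In-plane tensile strain of a thin Si layer grown pseudomorphically
    on relaxed Si(1-x)Ge(x), using Vegard's law for the alloy lattice."""
    a_alloy = A_SI + x * (A_GE - A_SI)  # linear interpolation
    return (a_alloy - A_SI) / A_SI

strain = si_strain_on_sige(0.2)  # roughly 0.8% tensile at x = 0.2
```

At the stated x of approximately 0.2, the silicon atoms in layer 16 are thus stretched by under one percent, which is consistent with the qualitative picture of atoms being "pulled apart" described in the background section.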
The substrate can be part of an integrated circuit such as a memory, a processing unit, an input/output device, etc.In process 100 (FIG. 1) at step 52, gate structures are formed above substrate 14. In FIG. 2, gate structures are formed by providing a gate stack including a gate dielectric layer 18 above a top surface 46 of layer 16, a gate conductor 22, and a bottom anti-reflective coating (BARC) layer 26. Top surface 46 can be considered a top surface of the substrate or wafer associated with portion 12, even though surface 46 corresponds to the top surface of layer 16 in FIG. 2.Gate dielectric layer 18 can be a 5-30 Å thick layer of thermally grown silicon dioxide. Alternatively, layer 18 can be deposited. Alternative materials for layer 18 include high-k dielectric layers, medium-k dielectric layers, silicon nitride, and other insulative materials.Gate conductor 22 is preferably a polysilicon layer having a thickness of 700-2000 Å. Gate conductor 22 can be deposited as a P-doped or N-doped layer. Alternatively, conductor 22 can be a metal layer such as a refractory metal layer deposited by chemical vapor deposition (CVD) or sputtering.Layer 26 is preferably an anti-reflective coating material such as silicon oxynitride (SiON) or silicon nitride (Si3N4). Alternative materials for layer 26 can also be utilized. Layer 26 serves a dual purpose of providing anti-reflective properties (e.g., as a BARC layer) as well as protecting gate conductor 22 during etching steps. Layer 26 is preferably deposited as a 250-1000 Å thick layer above gate conductor 22 by chemical vapor deposition (CVD). Alternatively, layer 26 can be thermally grown.Photoresist feature 24 is formed above layer 26. Preferably, photoresist feature 24 is lithographically patterned to form a gate structure from gate conductor 22 and layer 18.In FIG. 3, layers 26 and 18 and gate conductor 22 are etched in a conventional process to leave gate structure 38 (step 52 of process 100). 
Gate structure 38 can include spacers 23 formed in a deposition and etch back process. In one embodiment, spacers 23 are silicon dioxide or silicon nitride. Substrate 14 and layer 16 can be doped to provide appropriate regions such as halo regions, channel regions, and source and drain regions in step 52.In FIG. 4, after gate structure 38 is formed, layer 16 is etched at a drain location and a source location in accordance with step 54 of process 100. Preferably, layer 16 is etched until substrate 14 beneath layer 16 is reached, thereby removing all of layer 16 at the source location and the drain location. Preferably, layer 16 is etched in an etching process selective to layer 16 with respect to spacers 23 and layer 26. Layer 16 can be etched to leave original top surface 46 of layer 16 approximately 200 Å above top surface 32 of substrate 14.The channel region underneath gate structure 38 includes strained silicon layer 16, thereby achieving the advantages of SMOS processes. A top surface 27 of substrate 14 in the channel region is the original level associated with the deposition or formation of layer 16 and substrate 14 in FIG. 2. Preferably, layer 16 is selectively etched using a hydrogen bromide (HBr) chemistry or a potassium hydroxide (KOH) wet etch technique. Preferably, the etching step removes at least 200 Å of layer 16 in accordance with step 54 of process 100.In FIG. 5, in accordance with step 54 of process 100, bottom anti-reflective coating (BARC) layer 26 can be removed from gate conductor 22. BARC layer 26 is preferably removed for appropriate silicidation of gate conductor 22. In one embodiment, portions of spacers 23 are also removed so that a top surface of spacer 23 is planar with a top surface of gate conductor 22. BARC layer 26 can be removed in a standard etching process, as is known in the art.In FIG. 6, a semiconductor layer 47 is provided at the source location and the drain location. 
Layer 47 is preferably single crystal silicon grown above substrate 14. Semiconductor layer 47 is preferably grown by selective epitaxy. In one embodiment, selective epitaxial growth (SEG) is applied to grow layer 47 to a thickness of 200-400 Å. In an exemplary embodiment, SEG of layer 47 may be accomplished using a dichlorosilane, hydrochloric acid, and hydrogen gas mixture. The flow rates of the gases in the mixture are approximately 0.2 standard liters per minute dichlorosilane, 0.1 standard liters per minute hydrochloric acid gas, and 20 standard liters per minute hydrogen. The gases are introduced at a pressure of approximately 20 millitorr and a temperature of between approximately 700 and 850° C. (and preferably approximately 750° C.). In other embodiments, different flow rates, pressures and/or temperatures may be used. In one embodiment, layer 47 grows from surface 32 to a level higher than the original top surface 46 of layer 16.In an alternative embodiment, layer 47 can be a doped SEG deposited silicon layer. The doped layer can include arsenic dopants for NMOS transistors. The arsenic dopants do not have enhanced lateral diffusion because they are located in layer 47 rather than layer 16, which is strained. Preferably, layer 47 is doped with arsenic dopants to a concentration of between approximately 1×10^19 and 1×10^20 dopants per cubic centimeter. In such an embodiment, layer 16 is preferably removed to at least the depth of an extension associated with the transistor on portion 12.In FIG. 6, a semiconductor layer 49 (e.g., silicon, etc.) is also preferably grown above gate conductor 22. Layer 49 can be grown in the same process step used to grow layer 47. Alternatively, separate steps can be utilized. Layers 47 and 49 are preferably formed in accordance with step 56.In FIG. 7, a metal layer 48 is deposited above layers 47 and 49 in accordance with step 58 of process 100 (FIG. 1). 
Preferably, metal layer 48 is a 100 Å thick layer of nickel for use in a silicide process. Layer 48 can be deposited by sputtering or by chemical vapor deposition (CVD). In other exemplary embodiments, layer 48 comprises a 120 Å thick layer of cobalt or a 150 Å thick layer of tungsten, or another layer of material appropriate for silicidation.In accordance with step 58 of process 100, portion 12 is subjected to a rapid thermal anneal at a temperature of between approximately 320° and 420° C. in a nitrogen atmosphere, selectively etched, and subjected to a second rapid thermal anneal at a temperature of between approximately 400° and 600° C. in nitrogen to form silicide regions 64 and 62 (FIG. 8). Preferably, silicide regions 64 are provided in raised source and drain regions associated with layer 47. Where different metals are used for layer 48, different annealing temperatures may be utilized. For example, where cobalt is used, the first rapid thermal anneal may be performed at a temperature of approximately 500° C. and the second rapid thermal anneal at approximately 700° C. Use of nickel for layer 48 results in the formation of nickel monosilicide, while the use of cobalt results in the formation of cobalt disilicide.Alternative silicidation processes can be utilized. Due to the architecture associated with transistor portion 12 and the process steps in process 100, germanium is advantageously not present in layer 47, thereby reducing the effects of germanium in the silicidation process. Layers 47 and 49 can be approximately 250 Å thick and may consume approximately 230 Å of the underlying semiconductor layers. 
The use of layer 47 allows larger or thicker silicide layers to be formed due to the raised nature of the source location and the drain location.It is understood that although the detailed drawings, specific examples, and particular values given provide exemplary embodiments of the present invention, the exemplary embodiments are for the purpose of illustration only. The method and apparatus in the aforementioned embodiments are not limited to the precise details and descriptions disclosed. For example, although particular silicide techniques are described, other types of silicide processes can also be utilized. Various changes may be made to the details disclosed without departing from the scope of the invention which is defined by the following claims.
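The thickness budget noted in the process above (a roughly 250 Å epitaxial layer consuming roughly 230 Å during silicidation) can be checked with commonly cited metal-to-silicon consumption ratios. The sketch below is a hedged illustration: the ratios are typical literature values for nickel monosilicide and cobalt disilicide, not figures from this disclosure, and the helper names are invented.

```python
# Typical literature consumption ratios (assumed, not from the patent):
# per 1 A of deposited metal, (A of Si consumed, A of silicide formed).
RATIOS = {
    "NiSi":  (1.83, 2.22),
    "CoSi2": (3.64, 3.52),
}

def silicide_budget(metal_thickness_A, phase, epi_thickness_A):
    """Check whether fully siliciding a deposited metal layer stays
    within the germanium-free epitaxial source/drain thickness."""
    si_used = metal_thickness_A * RATIOS[phase][0]
    formed = metal_thickness_A * RATIOS[phase][1]
    return {"si_consumed_A": si_used,
            "silicide_A": formed,
            "fits_in_epi": si_used <= epi_thickness_A}

# 100 A of nickel (as in the example process) against a 250 A epi layer:
result = silicide_budget(100, "NiSi", 250)  # consumes ~183 A of silicon
```

Under these assumed ratios, the 100 Å nickel example stays comfortably inside a 250 Å epitaxial layer, consistent with the patent's point that the raised regions keep the silicidation front away from the germanium-containing substrate.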
Systems and methods for process variation power control in three- dimensional integrated circuits, 3DICs, are disclosed. In an exemplary aspect, at least one process variation sensor (324, 326) is placed in each tier (302) of a 3DIC (300). The process variation sensors report information related to a speed characteristic for elements within the respective tier to a decision logic (328). The decision logic is programmed to weight output from the process variation sensors according to relative importance of logic path segments (310, 320) in the respective tiers. The weighted outputs are combined to generate a power control signal that is sent to a power management unit (PMU). By weighting the importance of the logic path segments, a compromise voltage may be generated by the PMU which is "good enough" for all the elements in the various tiers to provide acceptable performance.
What is claimed is:1. A method for controlling power in a three-dimensional (3D) integrated circuit (IC) (3DIC), the method comprising:sensing a first speed characteristic with a first sensor in a physically embodied first tier of a 3DIC;sensing a second speed characteristic with a second sensor in a physically embodied second tier of the 3DIC;weighting a first output from the first sensor with a first weight;weighting a second output from the second sensor with a second weight;combining weighted outputs from the first sensor and the second sensor; and determining a control signal for a power management unit (PMU) based at least in part on the combined weighted outputs.2. The method of claim 1, wherein sensing the first speed characteristic with the first sensor comprises sensing voltage and temperature with the first sensor.3. The method of claim 1, wherein sensing the first speed characteristic with the first sensor comprises sensing the first speed characteristic with a ring oscillator.4. The method of claim 1, wherein weighting the first output comprises weighting a first frequency count.5. The method of claim 1, further comprising sensing a third speed characteristic for the physically embodied first tier of the 3DIC using a third sensor.6. The method of claim 1, wherein the first weight is based on a first logic path segment located in the first tier.7. The method of claim 6, wherein the second weight is based on a second logic path segment located in the second tier.8. The method of claim 1, further comprising sensing, using a multi-tier sensor, a multi-tier speed characteristic and using the multi-tier speed characteristic in determining the control signal.9. The method of claim 1, further comprising outputting the control signal to the PMU.10. The method of claim 9, further comprising generating a voltage level with the PMU.11. 
A three-dimensional (3D) integrated circuit (IC) (3DIC), comprising:a first tier comprising:a first sensor configured to sense a first speed characteristic and generate a first output; anda first logic path segment;a second tier comprising:a second sensor configured to sense a second speed characteristic and generate a second output; anda second logic path segment communicatively coupled to the first logic path segment to form a logic path; anddecision logic configured to:receive the first output from the first sensor;receive the second output from the second sensor;weight the first output with a first weight;weight the second output with a second weight;combine weighted outputs from the first sensor and the second sensor; anddetermine a control signal for a power management unit (PMU) based at least in part on the combined weighted outputs.12. The 3DIC of claim 11, wherein the first sensor is further configured to sense voltage and temperature.13. The 3DIC of claim 11, wherein the first sensor comprises a ring oscillator.14. The 3DIC of claim 11, wherein the first sensor is configured to generate a first frequency count as the first output.15. The 3DIC of claim 11, further comprising a third sensor positioned in the first tier, the third sensor configured to:sense a third speed characteristic;generate a third output; andprovide the third output to the decision logic.16. The 3DIC of claim 11, wherein the first weight is based on the first logic path segment.17. The 3DIC of claim 16, wherein the second weight is based on the second logic path segment.18. The 3DIC of claim 11, further comprising a multi-tier sensor positioned at least in part on both the first tier and the second tier, the multi-tier sensor configured to sense a multi-tier speed characteristic and generate a multi-tier output and provide the multi-tier output to the decision logic.19. 
The 3DIC of claim 11 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a smart phone; a tablet; a phablet; a server; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; and an automobile.20. A three-dimensional (3D) integrated circuit (IC) (3DIC), comprising:a first tier comprising:a means to sense a first speed characteristic and generate a first output; anda first logic path segment;a second tier comprising:a means to sense a second speed characteristic and generate a second output; anda second logic path segment communicatively coupled to the first logic path segment to form a logic path; anda means to:receive the first output from the means to sense the first speed characteristic;receive the second output from the means to sense the second speed characteristic;weight the first output with a first weight;weight the second output with a second weight;combine weighted outputs from the first sensor and the second sensor; anddetermine a control signal for a power management unit (PMU) based at least in part on the combined weighted outputs.
PROCESS VARIATION POWER CONTROL IN THREE-DIMENSIONAL (3D) INTEGRATED CIRCUITS (ICs) (3DICs)PRIORITY APPLICATION[0001] The present application claims priority to U.S. Patent Application Serial No. 15/264,983, filed on September 14, 2016 and entitled "PROCESS VARIATION POWER CONTROL IN THREE-DIMENSIONAL (3D) INTEGRATED CIRCUITS (ICs) (3DICs)," the contents of which are incorporated herein by reference in their entirety.BACKGROUNDI. Field of the Disclosure[0002] The technology of the disclosure relates generally to power control and more particularly to power control in a three-dimensional (3D) integrated circuit (IC) (3DIC).II. Background[0003] Computing devices have become common in modern society. The rise in numbers of computing devices is due, in part, to the advent of truly portable or mobile computing devices. While such mobile computing devices began as relatively cumbersome and bulky devices that exhausted batteries relatively quickly, increased miniaturization and power saving techniques have made current devices into powerful multimedia devices with extensive functions and generally adequate battery life.[0004] While there has been a recent trend to increase the size of some of the mobile computing devices, especially in the smart phone and tablet categories, such size increases are accompanied by expectations of increased computing power and better battery life. Accordingly, there continues to be pressure to miniaturize the circuitry within the mobile computing devices. Two-dimensional (2D) integrated circuits (ICs) (2DICs) are approaching what seem to be hard physical limits in terms of material behavior as well as limits in manufacturing processes which preclude further miniaturization. The pressure to miniaturize continues unabated in view of these limits. 
Accordingly, circuit designers have embraced three-dimensional (3D) ICs (3DICs).
[0005] While IC manufacturing is a relatively mature industry, such manufacturing processes do not guarantee that semiconductor materials made according to the same process have precisely the same characteristics. That is, most semiconductor materials may experience process variations during the manufacturing processes. Such process variations may result in a semiconductor material that is typical (T), fast (F), or slow (S). Such variations may be different for different types of elements within a single semiconductor material. For example, an N-type Metal Oxide Semiconductor (MOS) (NMOS) field effect transistor (FET) might be fast while a P-type MOS (PMOS) FET might be slow. In the 2D context, variations between devices on a single IC are relatively uniform, and various compensation schemes (typically changing the supply voltage) for the 2DIC have been proposed. However, in a 3DIC context, different tiers of the 3DIC may have different process variations. Having different compensation requirements for different tiers imposes additional power control burdens on circuit designers, including voltage step-ups, voltage step-downs, or the like. In some instances, the additional power control burdens make certain tiers unusable in certain 3DIC architectures. Such unusable tiers may be discarded, which increases manufacturing costs. Accordingly, designers would appreciate more options for power control in a 3DIC to compensate for process variations.
SUMMARY OF THE DISCLOSURE
[0006] Aspects disclosed in the detailed description include systems and methods for process variation power control in three-dimensional (3D) integrated circuits (ICs) (3DICs). In an exemplary aspect, at least one process variation sensor is placed in each tier of a 3DIC. The process variation sensors report information related to a speed characteristic for elements within the respective tier to a decision logic.
The decision logic is programmed to weight output from the process variation sensors according to the relative importance of logic path segments in the respective tiers. The weighted outputs are combined to generate a power control signal that is sent to a power management unit (PMU). By weighting the importance of the logic path segments, a compromise voltage may be generated by the PMU which is "good enough" for all the elements in the various tiers to provide acceptable performance. In this manner, performance may be optimized relative to a lowest acceptable power level, resulting in an optimal power-to-performance tradeoff.
[0007] In this regard, in one aspect, a method for controlling power in a 3DIC is disclosed. The method includes sensing a first speed characteristic with a first sensor in a physically embodied first tier of a 3DIC. The method also includes sensing a second speed characteristic with a second sensor in a physically embodied second tier of the 3DIC. The method includes weighting a first output from the first sensor with a first weight. The method also includes weighting a second output from the second sensor with a second weight. The method also includes combining weighted outputs from the first sensor and the second sensor. The method also includes determining a control signal for a PMU based at least in part on the combined weighted outputs.
[0008] In another aspect, a 3DIC is disclosed. The 3DIC includes a first tier. The first tier includes a first sensor configured to sense a first speed characteristic and generate a first output. The first tier also includes a first logic path segment. The 3DIC also includes a second tier. The second tier includes a second sensor configured to sense a second speed characteristic and generate a second output. The second tier also includes a second logic path segment communicatively coupled to the first logic path segment to form a logic path. The 3DIC also includes decision logic.
The decision logic is configured to receive the first output from the first sensor. The decision logic is also configured to receive the second output from the second sensor. The decision logic is also configured to weight the first output with a first weight. The decision logic is also configured to weight the second output with a second weight. The decision logic is also configured to combine weighted outputs from the first sensor and the second sensor. The decision logic is also configured to determine a control signal for a PMU based at least in part on the combined weighted outputs.
[0009] In another aspect, a 3DIC is disclosed. The 3DIC includes a first tier. The first tier includes a means to sense a first speed characteristic and generate a first output. The first tier also includes a first logic path segment. The 3DIC also includes a second tier. The second tier includes a means to sense a second speed characteristic and generate a second output. The second tier also includes a second logic path segment communicatively coupled to the first logic path segment to form a logic path.
The 3DIC also includes a means to receive the first output from the means to sense the first speed characteristic, receive the second output from the means to sense the second speed characteristic, weight the first output with a first weight, weight the second output with a second weight, combine weighted outputs from the first sensor and the second sensor, and determine a control signal for a PMU based at least in part on the combined weighted outputs.
BRIEF DESCRIPTION OF THE FIGURES
[0010] Figures 1A-1C illustrate three exemplary simplified system in a package (SIP) three-dimensional (3D) integrated circuit (IC) (3DIC) systems which may have logic paths that extend across multiple tiers;
[0011] Figure 2 is a simplified system on a chip (SoC) monolithic 3DIC that may have logic paths that extend across multiple tiers;
[0012] Figure 3 is a simplified view of two tiers of a 3DIC with a logic path extending across both tiers and separate process sensors in each tier;
[0013] Figure 4 is a simplified view of two tiers of a 3DIC with a process sensor spanning both tiers as well as separate process sensors in each tier;
[0014] Figure 5 is a simplified view of two tiers of a 3DIC with a distributed process sensor as well as separate process sensors in each tier;
[0015] Figure 6 is a flowchart illustrating an exemplary process for using process sensors to determine an optimal power control signal; and
[0016] Figure 7 is a block diagram of an exemplary processor-based system that can include the 3DIC systems of Figures 1A-1C and 2.
DETAILED DESCRIPTION
[0017] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration."
Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
[0018] Aspects disclosed in the detailed description include systems and methods for process variation power control in three-dimensional (3D) integrated circuits (ICs) (3DICs). In an exemplary aspect, at least one process variation sensor is placed in each tier of a 3DIC. The process variation sensors report information related to a speed characteristic for elements within the respective tier to a decision logic. The decision logic is programmed to weight output from the process variation sensors according to relative importance of logic path segments in the respective tiers. The weighted outputs are combined to generate a power control signal that is sent to a power management unit (PMU). By weighting the importance of the logic path segments, a compromise voltage may be generated by the PMU which is "good enough" for all the elements in the various tiers to provide acceptable performance. In this manner, performance may be optimized relative to a lowest acceptable power level, resulting in an optimal power-to-performance tradeoff.
[0019] By providing a uniform compromise voltage, voltage level shifting may be avoided as signals pass between different tiers of the 3DIC. Likewise, a single voltage signal is possible to reduce the requirements for a timing closure strategy while at the same time allowing for some power savings relative to a solution that assumes worst case speed characteristics and supplies a higher voltage than is necessary for many tiers so that the slowest tier has an adequate voltage level.
[0020] 3DICs may come in various forms, including system in a package (SIP) arrangements or monolithic 3DICs. SIP arrangements include multiple discrete ICs stacked into a package. The individual and separate ICs are preserved because of the difficulty experienced in integrating different technologies within a single IC.
For example, fabrication techniques differ greatly between analog and digital components, and accordingly, it is difficult to include both components in a single IC. Likewise, fabrication techniques to support high speed circuitry are different than those techniques used to provide low current leakage, and it is difficult to include both types of components in a single IC. In short, there are many conflicting technology requirements to achieve different functions within an IC. Such different processes may cause one tier of the 3DIC to operate in a "typical" or "T" process corner and another tier to operate in a "fast" (F) or "slow" (S) process corner. More extreme process variations may cause an F tier to be combined with an S tier. Even when the IC is made through a single process, there may be process variations between ICs made at different times or at different places on the silicon wafer. When such mismatches occur, circuit designers must compensate for the mismatch. In the past, one typical approach has been to over-engineer the 3DIC, assuming a worst case scenario and providing a voltage high enough to drive any element in any tier. Such over-engineering may result in excessively high voltage for some tiers and corresponding increases in power consumption. The high voltage for some tiers may mean that those tiers operate faster than other tiers, which may cause performance issues. This problem exists for SIP arrangements and monolithic 3DICs. Alternatively, differing power supplies may be provided for different tiers. Such differing power levels may require voltage level shifting as signals pass from one tier to another. Likewise, this approach requires complex timing closure strategies to cover the different process variations. Exemplary aspects of the present disclosure provide alternate solutions to such process variations. Before addressing those solutions, an overview of different sorts of 3DICs is provided with reference to Figures 1A-1C and 2.
A discussion of specific exemplary aspects of the present disclosure begins below with reference to Figure 3.
[0021] In this regard, Figure 1A illustrates a die stacked system 100A. The die stacked system 100A has a first tier or layer 102A formed from a first IC 104A and a second tier or layer 106A formed from a second IC 108A. This arrangement is sometimes referred to as a wireless bond in that there is no direct wire connection between the first IC 104A and the second IC 108A. The first IC 104A is intercoupled to the second IC 108A with external wiring 110A. To accommodate the external wiring 110A, the second layer 106A may be smaller than the first layer 102A. Likewise, the first IC 104A is coupled to other elements within a device (not shown) by external wiring 112A. The first IC 104A may be made by a first process and the second IC 108A may be made by a second process, which may cause process variations to exist between the different tiers. Even if the ICs 104A and 108A are made by the same process, different locations on the semiconductor wafer being processed at different times may cause process variations.
[0022] With reference to Figure 1B, die stacked system 100B is similar to the die stacked system 100A of Figure 1A, but instead of the external wiring 110A, solder bumps 110B are used to interconnect first IC 104B with second IC 108B. This arrangement is sometimes referred to as a flip-chip arrangement. Face to face bonding is achieved, but only for two layers. If more than two layers are used, then external wiring (such as that used in Figure 1A) is required. However, even with just two layers, external wiring 112B is still present to interconnect the die stacked system 100B to other elements within the device.
The positioning of the external wiring 112B on the upper surface of the first IC 104B forces the second IC 108B to be smaller than the first IC 104B, with the same disadvantages just discussed.
[0023] With reference to Figure 1C, die stacked system 100C is likewise similar to the die stacked systems 100A and 100B of Figures 1A and 1B, but instead of the external wiring 110A, solder bumps 110C intercouple first IC 104C with second IC 108C. Likewise, vias 114C (which may be through silicon vias (TSVs)) extend through the first IC 104C. TSVs are typically fairly large (e.g., on the order of microns) and correspondingly impose a large area penalty, as wiring within the first IC 104C must be routed around the TSVs. This routing and the requirements for space for active components again force the first IC 104C to be larger than the second IC 108C.
[0024] In contrast to the die stacked systems 100A-100C, a 3DIC may be a monolithic 3DIC. Thus, a single IC may be formed having heterogeneous functions across multiple tiers within the IC. Some functions may be collocated within a single tier while some functions may be spread across multiple tiers. Thus, a monolithic 3DIC allows heterogeneous partitioning of system functions in different tiers of different technologies or flavors, heterogeneous partitioning of circuit functions in different tiers of different technologies or flavors, and homogeneous partitioning of different functions in different tiers of different technologies or flavors. Such flexibility in partitioning may cause such partitioned functions to use tiers having different process variations. Having a logic path cross tiers as a function of such partitioning creates design challenges in providing an optimal performance and power consumption profile.
[0025] To assist in understanding such a monolithic structure, Figure 2 illustrates a simplified cross-section of a monolithic 3DIC 200. The monolithic 3DIC 200 has multiple tiers 202.
The tiers 202 may be formed by hydrogen cutting or another monolithic tier formation method.
[0026] As noted above, the use of 3DIC technology allows different tiers of the tiers 202 within the monolithic 3DIC 200 to perform different functions and provide all the functions of a particular device in a single IC. For example, the monolithic 3DIC 200 may be a radio frequency (RF) transceiver and controller for a mobile terminal such as a smart phone or tablet. Thus, a first tier 204 includes sensors and other large feature size elements.
[0027] With continued reference to Figure 2, a second tier 206 may include RF, analog, and/or power management integrated circuit (PMIC) components such as a receiver, a transmitter, and a duplexer/switch. The second tier 206 may be designed to be relatively low noise so that incoming RF analog signals are not distorted.
[0028] With continued reference to Figure 2, an electromagnetic (EM) shield 208 may be positioned between the second tier 206 and a third tier 210. The EM shield 208 may be formed from a conductive material, such as a graphene layer.
[0029] The presence of the EM shield 208 helps prevent noise from the first and second tiers 204 and 206 from affecting the low noise characteristics of the third tier 210. The third tier 210 may have a modem or other controller. To accommodate the functions on the third tier 210, the materials and design of the third tier 210 may be selected to promote a medium speed architecture.
[0030] With continued reference to Figure 2, fourth and fifth tiers 212 and 214 may be a memory bitcell array with random access memory (RAM) including dynamic RAM (DRAM), static RAM (SRAM), or the like. Both of the fourth and fifth tiers 212 and 214 may be designed to provide low leakage circuitry to improve the operation of the RAM.
[0031] With continued reference to Figure 2, sixth and seventh tiers 216 and 218 may be general processing unit tiers.
The sixth tier 216 may include a digital signal processor (DSP), such as a baseband processor using combinational logic, while the seventh tier 218 may include a DSP relying on sequential logic. Both of the sixth and seventh tiers 216 and 218 may be designed to support high speeds over concerns about leakage.
[0032] In an exemplary embodiment, the tiers 202 are electrically intercoupled by monolithic intertier vias (MIVs) 220. For more information about MIVs, the interested reader is referred to "High-Density Integration of Functional Modules Using Monolithic 3D-IC Technology" by Shreepad Panth et al. in the proceedings of the IEEE/ACM Asia South Pacific Design Automation Conference, 2013, pp. 681-686, which is hereby incorporated by reference in its entirety. In contrast to TSVs, MIVs may be on the order of sub-100 nanometers (nm) in diameter (i.e., much smaller than the micron dimensions of the TSVs) and 200 nm or less in depth. Further, in an exemplary embodiment, each of the multiple tiers 202 may be approximately 400 nm thick or thinner. These dimensions are illustrated in the inset of Figure 2.
[0033] By providing different tiers with different functions and/or being able to split circuits across different tiers, a full system IC is possible, including batteries, sensors, memory, energy harvesting functions, PMIC, processors, digital and analog components, and the like. Each tier may be optimized to accommodate the functions positioned thereon. Additionally, the very high density of tier to tier links (i.e., the MIVs) allows a high degree of wafer level integration. The monolithic 3DIC may have a homogeneous cell level 3D partition, such as sequential-combinational logic or multi-tier memory bitcell arrays. Likewise, the monolithic 3DIC may have a fine grain heterogeneous 3D partition, such as a memory to digital core or bitcell array-control logic partitions. This flexibility allows for a wide range of technology features for optimal system functions.
However, as noted, this flexibility may introduce process variations between tiers over and above any process variations that may exist within a single tier.
[0034] Within the 3DICs 100A-100C and 200, there may be logic paths that span multiple tiers. A simplified block diagram of such logic paths is presented in Figure 3. In particular, Figure 3 illustrates a partial 3DIC 300 that has multiple tiers 302(1)-302(N). For simplicity, only two tiers 302(1) and 302(2) will be addressed, but it should be appreciated that the concepts discussed relative to the two tiers 302(1) and 302(2) are applicable to more tiers or different tiers 302(P)-302(N) (where 2<P<N) within the partial 3DIC 300. A first logic path 304 exists between a first flip-flop 306 and a second flip-flop 308 through a first logic 310 and a second logic 312. The entirety of the first logic path 304 lies in the first tier 302(1). A second logic path 314 exists between the first flip-flop 306 and a third flip-flop 316 through the first logic 310, down via 318, and through third logic 320. The second logic path 314 spans the two tiers 302(1) and 302(2). Still other logic paths (not labeled) may exist between the first flip-flop 306 and a fourth flip-flop 322, between the second flip-flop 308 and the third flip-flop 316 or the fourth flip-flop 322, between the third flip-flop 316 and the fourth flip-flop 322, or the like.
[0035] With continued reference to Figure 3, the first tier 302(1) further includes a process sensor 324, and the second tier 302(2) further includes a process sensor 326. Such process sensors are sometimes referred to as a means to sense speed characteristics. In an exemplary aspect, the process sensor 324 and the process sensor 326 are ring oscillators.
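For illustration only (the stage count, stage delays, and counting window below are assumptions, not values from the disclosure), a ring oscillator works as a process sensor because an N-stage inverter ring oscillates with a period of roughly 2·N·t_stage, so a fast process corner (smaller stage delay) yields a higher frequency count over a fixed window:

```python
def ring_oscillator_count(stage_delay_ps, stages=31, window_us=1.0):
    """Count a frequency counter would accumulate over a fixed window.

    An N-stage inverter ring oscillates with period 2 * N * stage_delay,
    so a faster process corner (smaller stage delay) gives a higher count.
    """
    period_ps = 2 * stages * stage_delay_ps
    freq_hz = 1e12 / period_ps
    return int(freq_hz * window_us * 1e-6)

# Illustrative stage delays for fast (F), typical (T), and slow (S) corners.
fast_count = ring_oscillator_count(stage_delay_ps=8.0)
typical_count = ring_oscillator_count(stage_delay_ps=10.0)
slow_count = ring_oscillator_count(stage_delay_ps=13.0)
```

The count itself is the "indication of a speed characteristic" that the decision logic consumes.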
The process sensor 324 and the process sensor 326 are communicatively coupled to a decision logic 328 that may be located on any of the tiers 302(1)-302(N) of the partial 3DIC 300 or outside the partial 3DIC 300 (on another tier not illustrated or completely separate from the 3DIC). The decision logic 328 sends a control signal to a PMU 330, which in turn provides power to the partial 3DIC 300. The process sensors 324 and 326 provide an indication of a speed characteristic associated with the respective tiers 302(1) and 302(2) to the decision logic 328. For example, if the first tier 302(1) is a fast tier, an indication of this fast speed characteristic is provided to the decision logic 328. In the example where the process sensors 324 and 326 are ring oscillators, the indication of the speed characteristic may be a frequency count. The decision logic 328 may use the indication of the speed characteristic to determine a control signal to send to the PMU 330. In particular, the decision logic 328 may weight the frequency count according to an algorithm explained below. In general, tiers within the tiers 302(1)-302(N) having more important logic paths are weighted more heavily than tiers having less important logic paths. Where a logic path has a first segment in a first tier, such as first segment 332 of the second logic path 314 in the first tier 302(1), and a second segment in a second tier, such as second segment 334 of the second logic path 314 in the second tier 302(2), the size of the segment relative to the entire length of the logic path may be used to modify the weight given to a particular tier. Based on the combination of weighted indications of speed characteristics, the decision logic 328 may select a power level that is "good enough" to support all the tiers while still having an acceptable performance profile. In this manner, a single voltage may be supplied, obviating the need for voltage level shifters or complex timing closure strategies.
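The segment-based weight adjustment just described can be sketched as follows (a minimal illustration; the importance value, segment lengths, and frequency counts are made-up numbers, and the disclosure does not fix a particular formula):

```python
def tier_weight(path_importance, segment_length, total_path_length):
    """Scale a logic path's importance by the fraction of the path
    whose segment lies in the given tier."""
    return path_importance * (segment_length / total_path_length)

# Second logic path 314 spans tiers 302(1) and 302(2); the segment
# lengths and importance below are illustrative only.
w1 = tier_weight(1.0, segment_length=3.0, total_path_length=5.0)  # tier 302(1)
w2 = tier_weight(1.0, segment_length=2.0, total_path_length=5.0)  # tier 302(2)

# Combine weighted frequency counts from sensors 324 and 326 into a
# single figure of merit used when selecting the compromise voltage.
combined = w1 * 125 + w2 * 90
```

The tier carrying the longer segment of an important path thus pulls the combined figure of merit, and hence the compromise voltage, toward its own process corner.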
Likewise, the power impact to drive the slower tiers is not so onerous as to make the slower tiers unusable.
[0036] As used herein, the decision logic 328 (or other decision logics described below) is sometimes referred to as a means to receive the first output from a process sensor, a means to receive the second output from a process sensor, a means to weight the first output with a first weight, a means to weight the second output with a second weight, a means to combine the weighted outputs, and a means to determine a control signal.
[0037] In another exemplary aspect, a partial 3DIC 400, illustrated in Figure 4, may have a process sensor 402 that includes a first sensor portion 404 that is present in a first tier 406(1) and a second sensor portion 408 that is present in a second tier 406(2). The first sensor portion 404 is coupled to the second sensor portion 408. The first tier 406(1) may further include a second process sensor 410 that exists solely within the first tier 406(1). Likewise, the second tier 406(2) may further include a third process sensor 412 that exists solely within the second tier 406(2). As with the partial 3DIC 300 of Figure 3, the process sensors 402, 410, and 412 are communicatively coupled to a decision logic 414, which in turn provides a power control signal to a PMU (not illustrated). As with the partial 3DIC 300, the outputs of the process sensors 402, 410, and 412 are weighted and combined to determine the power control signal. Given that the multiple sensors offer finer granularity and that the process sensor 402 conveys information about both of the tiers 406(1) and 406(2), the weighting may likewise assume greater granularity. For example, the weight assigned to output from the second process sensor 410 may be associated with how important the logic paths that lie strictly in the first tier 406(1) are.
Likewise, the weight assigned to output from the third process sensor 412 may be associated with how important the logic paths that lie strictly in the second tier 406(2) are. Still further, output of the process sensor 402 may be weighted according to how important the logic paths that have segments in both the first tier 406(1) and the second tier 406(2) are. In an exemplary aspect, the process sensors 402, 410, and 412 are ring oscillators and provide an indication of a speed characteristic in the form of a frequency count to the decision logic 414. In the example where the process sensor 402 is a ring oscillator, preliminary weighting of the frequency count may be achieved by configuring a number of stages (N) in the first sensor portion 404 relative to a number of stages (M) in the second sensor portion 408. That is, an appropriate number of stages is provided in each tier to create a sensor that tracks critical path behavior for a logic path that spans between the first tier 406(1) and the second tier 406(2). In an exemplary aspect, more stages are provided for more important tiers. In another exemplary aspect, more stages are provided for tiers with longer segments. Still other ways of weighting the relative count between tiers may also be used without departing from the present disclosure. In an exemplary aspect, the ring oscillators are formed from inverters. In other exemplary aspects, the ring oscillators are formed from Negative AND (NAND) or Negative OR (NOR) gates, or some combination of the two logic types.
[0038] In another exemplary aspect, a partial 3DIC 500, illustrated in Figure 5, may have a first process sensor 502(1) in a first tier 504(1) and a second process sensor 502(2) in a second tier 504(2). Collectively, the first process sensor 502(1) and the second process sensor 502(2) form a distributed process sensor.
Still further, the first tier 504(1) may include a process sensor 506 that is not part of the distributed process sensor and exists solely within the first tier 504(1). The second tier 504(2) may include a process sensor 508 that is not part of the distributed process sensor and exists solely within the second tier 504(2). The process sensors 502(1), 502(2), 506, and 508 are communicatively coupled to a decision logic 510 and provide information related to a speed characteristic for the respective tiers 504(1) and 504(2), as well as information about both of the tiers 504(1) and 504(2) in the case of the distributed process sensor formed from the process sensors 502(1) and 502(2). Logic paths may be formed between flip-flops 512(1)-512(4) (referenced in the drawings as "FF") and logic elements 514(1)-514(3). As with the partial 3DIC 400 of Figure 4, the decision logic 510 weights outputs from the process sensors 502(1), 502(2), 506, and 508 and combines the weighted outputs to generate the power control signal. The weight assigned to output from the process sensor 506 may be associated with how important the logic paths that lie strictly in the first tier 504(1) are. Likewise, the weight assigned to output from the process sensor 508 may be associated with how important the logic paths that lie strictly in the second tier 504(2) are. Still further, output of the distributed process sensor formed from the process sensors 502(1) and 502(2) may be weighted according to how important the logic paths that have segments in both the first tier 504(1) and the second tier 504(2) are.
[0039] Against the possible 3DIC structures described above, Figure 6 provides a flowchart for determining the weighting to be assigned to outputs from process sensors. Process 600 of Figure 6 assumes that the process sensors are ring oscillators and that the output is a frequency count. Other algorithms may be created for process sensors that generate different outputs.
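As an aside, the stage-count pre-weighting described for the spanning process sensor 402 of Figure 4 (N stages in one tier versus M stages in the other) can be sketched as follows; the stage counts and stage delays are illustrative assumptions, not values from the disclosure:

```python
def spanning_ring_freq_ghz(n_stages, delay1_ps, m_stages, delay2_ps):
    """Oscillation frequency of a ring oscillator whose N stages sit in
    tier 1 and M stages in tier 2.  The tier contributing more stages
    dominates the period, so the raw count is already biased ("pre-
    weighted") toward that tier's process corner."""
    period_ps = 2 * (n_stages * delay1_ps + m_stages * delay2_ps)
    return 1e3 / period_ps  # GHz

# Tier 1 fast (8 ps/stage), tier 2 slow (13 ps/stage), both illustrative.
tier1_heavy = spanning_ring_freq_ghz(21, 8.0, 10, 13.0)  # count tracks tier 1
tier2_heavy = spanning_ring_freq_ghz(10, 8.0, 21, 13.0)  # count tracks tier 2
```

Giving the more important tier (or the tier with the longer path segment) more stages thus folds part of the weighting into the sensor hardware itself.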
Returning to the process 600, the process starts (block 602). Initially, the designer identifies Tier_Groups for each critical logic path (block 604). That is, while there are numerous logic paths, some may be so inconsequential that impacts thereto from process variations may not affect the overall operation noticeably. Accordingly, only the critical logic paths are identified and the corresponding tiers identified. It should be appreciated that a tier with a critical logic path confined to just that one tier will be the lone member in its group. The weight of this group is one (Wn = 1). In contrast, each set of tiers with critical logic paths distributed between them forms a separate group. The sum of the weights within each Tier_Group will add up to one (∑Wn = 1).
[0040] With continued reference to Figure 6, the process 600 continues. For each sensor type (distributed, multi-tier, single tier), the designer provides a target frequency (FTarget) which maps to the critical logic path target performance for each tier (block 606), assuming nominal voltage, Typical-Typical process variation, and nominal temperature (25°C). For each tier group, the decision logic sets a Scaling_Factor according to the formula SFx = ∑n Wn × (FTarget/Fn), where Fn is the frequency count reported for tier n (block 608). Note that SFx looks like a two-dimensional (2D) solution when the critical logic path is confined to a single tier. The process 600 continues by finding the maximum scaling factor (i.e., Max_SF = Max{SF1, SF2, ..., SFx}) (block 610).
[0041] The decision logic then determines if 1 - Max_SF is positive or negative (block 612). If 1 - Max_SF is positive, the decision logic determines if this value is within an error tolerance limit (block 614). If the answer to block 614 is no, then the decision logic lowers the voltage to a VDD where VDD + 1 PMIC step makes Max_SF > 1 (block 616), and the process 600 returns to block 610. For the sake of example, one PMIC step may be around 10-12.5 mV. The value of the PMIC step may be programmable.
If the answer to block 612 is that the value is negative, the decision logic determines if the value is within an error tolerance limit (block 618). If the answer to block 618 is no, then the decision logic raises the voltage to a VDD where VDD - 1 PMIC step makes Max_SF > 1 (block 620), and the process returns to block 610. Note that the goal of the incrementing and decrementing of the VDD by a PMIC step is to get 1 - Max_SF as close to zero as possible, but still positive.
[0042] With continued reference to Figure 6, if either block 614 or block 618 is answered affirmatively, the PMIC voltage remains unchanged (block 622), and the process continues to monitor the outputs from the sensors.
[0043] An example of the process 600 is provided in Table 1 below, where a single critical logic path is distributed between three tiers (forming one Tier_Group).

Tier  Weight  FTarget  F    Wn×(FTarget/F)  Process corner
1     0.3     100      125  0.24            FF
2     0.6     100      75   0.80            SS
3     0.1     100      100  0.10            TT
                            Max_SF = max(1.14) = 1.14

Table 1

[0044] Another example of the process 600 is provided in Table 2 below, where there are two critical logic paths. One is confined to a single tier and one is distributed across two tiers. TG is the Tier_Group.

Tier  TG  Weight  FTarget  F    Wn×(FTarget/F)  Process corner
1     1   1.0     100      125  0.80            FF
2     2   0.3     100      125  0.24            FF
3     2   0.7     100      100  0.70            TT
                                Max_SF = max(0.80, 0.94) = 0.94

Table 2

[0045] The systems and methods for process variation power control in 3DICs according to aspects disclosed herein may be provided in or integrated into any processor-based device.
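The decision portion of process 600 can be sketched in code as follows. The function names and the 5% tolerance are illustrative assumptions, and the PMIC-step voltage loop is reduced here to a raise/lower/hold decision; the scaling factor is SFx = ∑ Wn × (FTarget/Fn), using the numbers from Tables 1 and 2:

```python
def scaling_factor(group):
    """SF for one Tier_Group (blocks 606-608): the sum over its tiers of
    Wn * (FTarget / F).  Weights within a group sum to one."""
    return sum(w * (f_target / f) for w, f_target, f in group)

def pmu_decision(tier_groups, tolerance=0.05):
    """Blocks 610-622: find Max_SF and steer VDD so that 1 - Max_SF
    ends up as close to zero as possible while remaining positive."""
    max_sf = max(scaling_factor(g) for g in tier_groups)  # block 610
    margin = 1 - max_sf                                   # block 612
    if margin >= 0:
        # Positive: hold if within tolerance (block 622), otherwise
        # step VDD down (block 616).
        return "hold" if margin <= tolerance else "lower"
    # Negative: the slowest group misses its target; hold if tolerable
    # (block 618/622), otherwise step VDD up (block 620).
    return "hold" if -margin <= tolerance else "raise"

# Table 1: one critical path across three tiers (one group);
# each tier entry is (Wn, FTarget, F).
table1 = [[(0.3, 100, 125), (0.6, 100, 75), (0.1, 100, 100)]]
# Table 2: a single-tier path (weight 1) plus a path across two tiers.
table2 = [[(1.0, 100, 125)], [(0.3, 100, 125), (0.7, 100, 100)]]
```

With these inputs, Table 1 gives Max_SF = 1.14 (negative margin, so the voltage must be raised) and Table 2 gives Max_SF = 0.94 (positive margin, so the voltage can be lowered), matching the worked examples above.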
Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet, a phablet, a server, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and an automobile.
[0046] In this regard, Figure 7 illustrates an example of a processor-based system 700 that can employ the 3DIC systems 100A-100C and 200 illustrated in Figures 1A-1C and 2. In this example, the processor-based system 700 includes one or more central processing units (CPUs) 702, each including one or more processors 704. The CPU(s) 702 may have cache memory 706 coupled to the processor(s) 704 for rapid access to temporarily stored data. The CPU(s) 702 is coupled to a system bus 708 and can intercouple master and slave devices included in the processor-based system 700. As is well known, the CPU(s) 702 communicates with these other devices by exchanging address, control, and data information over the system bus 708. For example, the CPU(s) 702 can communicate bus transaction requests to a memory controller 710 as an example of a slave device. Although not illustrated in Figure 7, multiple system buses 708 could be provided, wherein each system bus 708 constitutes a different fabric.
[0047] Other master and slave devices can be connected to the system bus 708. As illustrated in Figure 7, these devices can include a memory system 712, one or more input devices 714, one or more output devices 716, one or more network interface devices 718, and one or more display controllers 720, as examples.
The input device(s) 714 can include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output device(s) 716 can include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface device(s) 718 can be any devices configured to allow exchange of data to and from a network 722. The network 722 can be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 718 can be configured to support any type of communications protocol desired. The memory system 712 can include one or more memory units 724(0-N).[0048] The CPU(s) 702 may also be configured to access the display controller(s) 720 over the system bus 708 to control information sent to one or more displays 726. The display controller(s) 720 sends information to the display(s) 726 to be displayed via one or more video processors 728, which process the information to be displayed into a format suitable for the display(s) 726. The display(s) 726 can include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc.[0049] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The devices described herein may be employed in any circuit, hardware component, IC, or IC chip, as examples. 
Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0050] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a DSP, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0051] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in RAM, flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. 
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0052] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0053] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. 
Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A present method of fabricating a memory device includes the steps of providing a dielectric layer (110), providing an opening (112) in the dielectric layer (110), providing a first conductive body (116A) in the opening (112), providing a switching body (118A) in the opening (112), the first conductive body (116A) and switching body (118A) filling the opening (112), and providing a second conductive body (120A) over the switching body (118A). In an alternate embodiment, a second dielectric layer (150) is provided over the first-mentioned dielectric layer (110), and the switching body (156A) is provided in an opening (152) in the second dielectric layer (150).
1. A method of manufacturing a memory device comprising: providing a dielectric layer (110); providing an opening (112) in the dielectric layer (110); providing a first conductive body (116A) in the opening (112); providing a switching body (118A) in the opening (112), the first conductive body (116A) and the switching body (118A) filling the opening (112); and providing a second conductive body (120A) on the switching body (118A).
2. The method of claim 1, wherein the switching body (118A) is grown from the first conductive body (116A).
3. The method of claim 1, wherein the second conductive body (120A) is provided by disposing a conductive layer (120) on the dielectric layer (110) and the switching body (118A) and patterning the conductive layer (120).
4. The method of claim 1, wherein the second conductive body (128) is provided by providing a second dielectric layer (122) over the first-mentioned dielectric layer (110) and the switching body (118A), providing an opening (123) in the second dielectric layer (122), and providing the second conductive body (128) in the opening (123) in the second dielectric layer (122).
5. A method of manufacturing a memory device comprising: providing a first dielectric layer (110); providing an opening (112) in the first dielectric layer (110); providing a first conductive body (116A) in the opening (112) in the first dielectric layer (110) to fill the opening (112) in the first dielectric layer (110); providing a second dielectric layer (150); providing an opening (152) in the second dielectric layer (150); providing a switching body (156A) in the opening (152) in the second dielectric layer (150) to fill the opening (152) in the second dielectric layer (150); and providing a second conductive body (160A) on the switching body (156A).
6. The method of claim 5, wherein the second conductive body (160A) is provided by disposing a conductive layer (160) on the second dielectric layer (150) and the switching body (156A) and patterning the conductive layer (160).
7. A memory device comprising: a dielectric layer (110) having an opening (200) therein; a first conductive layer (214A) in the opening (200); a switching material layer (220A) in the opening (200) and on the first conductive layer (214A); and a second conductive layer (230A) over the opening (200) and on the switching material layer (220A).
8. The memory device of claim 7, further comprising first and second insulating walls (206, 208) in the opening (200) in the dielectric layer (110), the first conductive layer (214A) and the switching material layer (220A) being between the insulating walls (206, 208).
9. The memory device of claim 7, further comprising first and second conductive walls (242, 244) in the opening (200) in the dielectric layer (110), the first conductive layer (254A) and the switching material layer (260A) being between the conductive walls (242, 244).
10. The memory device of claim 9, further comprising a conductive connection portion (246) connecting the conductive walls (242, 244), the first conductive layer (254A) being on the conductive connection portion (246).
Damascene metal-insulator-metal device with improved dimensional scalability
Technical Field
The present invention relates generally to memory devices and, more specifically, to metal-insulator-metal (MIM) devices and methods of making such devices.
Background
Figures 1 and 2 illustrate a method of manufacturing a metal-insulator-metal (MIM) device using an etching technique. Initially, a conductive layer 22 is disposed on a substrate 20. Next, an insulating layer 24 is provided on the conductive layer 22. Then, another conductive layer 26 is disposed on the insulating layer 24. It will be understood that the conductive layers 22, 26 and the insulating layer 24 may be of various materials. (It will further be understood that the term MIM is used to describe such a device even though, for example, the top and/or bottom layers 22, 26 may be non-metallic.) Next, using standard photolithographic techniques, a photoresist layer 28 is disposed on the conductive layer 26 and patterned as shown. With the patterned photoresist layer 28 used as a mask, the exposed material is etched to remove portions of the conductive layer 22, the insulating layer 24, and the conductive layer 26, leaving an MIM stack 30 on the substrate 20. The photoresist 28 is then removed, yielding a MIM device 30 formed on the substrate 20 and including an electrode 22A, a switching layer 24A, and an electrode 26A.
It will be appreciated that the device stack must be properly formed to ensure proper operation of the device 30. For example, it is highly desirable that the etchant provide proper and uniform etching of the materials of the electrodes 22, 26 and the insulating layer 24 while leaving the exposed material of the substrate 20 substantially intact (the "selectivity" of an etchant refers to its ability to properly remove selected materials while leaving other materials in contact with them substantially intact). While the MIM device 30 of FIG.
2 is shown as being formed in an ideal manner, in practice the following can occur: depending on the materials selected for the electrodes 22, 26 and the insulating layer 24, and on the etchant used, non-uniform etching of the materials of the layers 22, 24, and 26 can result in an improper profile of the MIM stack 30 (for example, one layer may be etched faster than the others, so that it is removed to a greater extent). In addition, unwanted gouging of the substrate 20 and of the layers 22, 24, 26 may occur. These phenomena degrade the performance of the resulting memory device. In addition to limiting scalability, the above approach also makes for a less efficient manufacturing process.
Therefore, what is needed is a method that avoids the above problems and provides proper, consistent MIM devices with improved dimensional scalability.
Summary of the Invention
The present method of manufacturing a memory device comprises providing a dielectric layer; providing an opening in the dielectric layer; providing a first conductive body in the opening; providing a switching body in the opening, the first conductive body and the switching body filling the opening; and providing a second conductive body on the switching body.
The present invention is better understood upon consideration of the detailed description below and the accompanying drawings. As those skilled in the art will readily understand, the invention is described below by way of embodiments illustrating the best mode contemplated for carrying it out. It will also be understood that the invention is capable of other embodiments, and that its several details are capable of modification in various obvious respects, all without departing from the scope of the invention.
Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as its preferred mode of use and further objects and advantages, will best be understood by reference to the following detailed description when read in conjunction with the accompanying drawings, in which:
Figures 1 to 3 show process steps in forming a MIM device according to a prior-art method;
Figures 4 to 6 show process steps in forming a MIM device according to a first embodiment of the present invention;
Figures 7 to 9 show process steps in forming a MIM device according to a second embodiment of the present invention;
Figures 10 to 12 show process steps in forming a MIM device according to a third embodiment of the present invention;
Figures 13 to 15 show process steps in forming a MIM device according to a fourth embodiment of the present invention;
Figures 16 to 18 show process steps in forming a MIM device according to a fifth embodiment of the present invention;
Figures 19 to 24 show process steps in forming a MIM device according to a sixth embodiment of the present invention;
Figures 25 to 29 show process steps in forming a MIM device according to a seventh embodiment of the present invention;
Figures 30 to 34 show process steps in forming a MIM device according to an eighth embodiment of the present invention;
Figures 35 to 39 show process steps in forming a MIM device according to a ninth embodiment of the present invention; and
Figures 40 to 44 show process steps in forming a MIM device according to a tenth embodiment of the present invention.
Detailed Description
Reference will now be made in detail to specific embodiments of the invention, which illustrate the best modes contemplated by the inventors for carrying out the invention. Referring to FIG.
4, a structure formed on a semiconductor wafer includes a p+ semiconductor substrate 70 in which n+ regions 72, 74, 76, and 78 are formed. In contact with the respective n+ regions 72, 74, 76, 78 are conductive W (tungsten) plugs 80, 82, 84, 86 that extend through an SiO2 layer 88, an SiN layer 90, and an SiO2 layer 92. Covering the top of the SiO2 layer 92 and the W plugs 80, 82, 84, 86 is an SiN layer 94. The n+ regions 72, 74, together with a gate and gate oxide 96, form the transistor T0, and the n+ regions 76, 78, together with a gate and gate oxide 98, form the transistor T1. The plug 80 contacts the n+ source region 72 of the transistor T0, and the plug 82 contacts the n+ drain region 74 of the transistor T0. The plug 84 contacts the n+ drain region 76 of the transistor T1, and the plug 86 contacts the n+ source region 78 of the transistor T1 through the W body 100 on the substrate 70. Conductive W plugs 106, 108 contact the respective plugs 82, 84 and extend through the SiN layer 94 and the SiO2 layer 95. A nitride layer 110, such as an SiN or SiON layer or an ARC double layer, is deposited on the resulting structure to a thickness of, for example, 1000 angstroms. Using standard photolithography techniques, openings 112, 114 are provided through the nitride layer 110 over the respective plugs 106, 108. A conductive layer 116 is deposited on the resulting structure, on the nitride layer 110 and in the openings 112, 114 in contact with the plugs 106, 108. The conductive layer 116 may be, for example, tantalum (Ta), tantalum nitride (TaN), titanium (Ti), titanium nitride (TiN), tungsten (W), tungsten nitride (WN), nickel (Ni), cobalt (Co), aluminum (Al), copper (Cu), or any other suitable material. Deposition can be performed using, for example, physical vapor deposition (PVD), atomic layer deposition (ALD), chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), or metal organic chemical vapor deposition (MOCVD). Referring to FIG.
5, a chemical mechanical polishing step is performed in which the portion of the layer 116 overlying the nitride layer 110 (i.e., the overburden) is removed, exposing the nitride layer 110 itself, with the conductive bodies 116A, 116B formed in and filling the respective openings 112, 114. Next, a thermal oxidation step is performed in which the top portion of each conductive body is converted into its oxide to form switching bodies 118A, 118B, so that the remaining conductive body 116A and the switching body 118A thereon and in contact therewith fill the opening 112, and the remaining conductive body 116B and the switching body 118B thereon and in contact therewith fill the opening 114. Other processes, such as ion implantation and plasma oxidation, can also be used to form the switching bodies. Referring to FIG. 6, a conductive layer 120 is deposited on the resulting structure. The conductive layer 120 may be, for example, Ta, TaN, Ti, TiN, W, WN, Ni, Co, Al, Cu, or any other suitable material. Deposition can be performed using, for example, PVD, ALD, CVD, PECVD, or MOCVD. Using standard photolithography techniques, the conductive layer 120 is patterned to form conductive bodies 120A, 120B, the conductive body 120A being on and in contact with the switching body 118A, and the conductive body 120B being on and in contact with the switching body 118B. An encapsulating dielectric layer 122, such as SiN, SiC, or a double layer of SiN/SiON, SiC/SiN, or SiC/SiON, is deposited on the resulting structure. Prior to this deposition, an oxidative pretreatment can be performed to improve adhesion and form an insulating layer across the common surface. Using standard photolithography techniques, openings 123, 124 are provided in this layer 122 to expose the conductive bodies 120A, 120B.
A conductive metal layer 126 is deposited on the resulting structure and is connected to the conductive bodies 120A, 120B through conductive Ti/TiN adhesive layers 128, 130. As an alternative in this and the other embodiments, the cladding layer (layer 122 in this embodiment) can be omitted and the subsequent metal layer (layer 126 in this embodiment) deposited directly. The conductive body 116A (electrode), the switching body 118A, and the conductive body 120A (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 116B (electrode), the switching body 118B, and the conductive body 120B (electrode) form a metal-insulator-metal (MIM) memory device 134. This method is a damascene process, in which a component is set in a trench and subjected to chemical mechanical planarization. As will be seen, using this method, MIM devices are formed without etching of the MIM stack, avoiding the problems described above. Indeed, the method as shown and described is efficient and simple. Furthermore, this method allows improved dimensional scalability in manufacturing the structure. FIGS. 7 to 9 show a second embodiment of the present invention. In this embodiment, referring to FIGS. 7 and 8, the nitride layer 110, the openings 112, 114, the conductive bodies 116A, 116B, and the switching bodies 118A, 118B are formed as shown in FIGS. 4 and 5. As the next step, however, referring to FIG. 9, a cladding dielectric layer 122 is deposited on the resulting structure. Using standard photolithography techniques, openings 123, 124 are provided in the layer 122 to expose the switching bodies 118A, 118B.
A conductive metal layer 126 is deposited on the resulting structure and is connected to the switching bodies 118A, 118B through conductive Ti/TiN adhesive layers 128, 130. The conductive body 116A (electrode), the switching body 118A, and the conductive body (i.e., the adhesive layer 128) (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 116B (electrode), the switching body 118B, and the conductive body (i.e., the adhesive layer 130) (electrode) form a metal-insulator-metal (MIM) memory device 134. This method provides advantages similar to those of the embodiment of FIGS. 4 to 6 described above, while not requiring formation of the conductive bodies 120A, 120B. FIGS. 10 to 12 show a third embodiment of the present invention. In this embodiment, referring to FIGS. 10 and 11, the nitride layer 110, the openings 112, 114, and the conductive bodies 116A, 116B are formed as shown in FIGS. 4 and 5. In this case, however, the switching bodies are not formed as described previously. Referring to FIG. 12, a layer of switching material 140 is deposited on the resulting structure, in contact with the nitride layer 110 and the conductive bodies 116A, 116B. Next, a conductive layer 142 is deposited on the layer of switching material 140. Using standard photolithography techniques, the conductive layer 142 and the layer of switching material 140 are patterned as shown, so that the conductive body 142A is on the switching body 140A, the conductive body 142B is on the switching body 140B, the switching body 140A is on the conductive body 116A, and the switching body 140B is on the conductive body 116B. A cladding dielectric layer 122 is deposited on the resulting structure. Using standard photolithography techniques, openings 123, 124 are provided in the layer 122 to expose the conductive bodies 142A, 142B.
A conductive metal layer 126 is deposited on the resulting structure and is connected to the conductive bodies 142A, 142B through conductive Ti/TiN adhesive layers 128, 130. The conductive body 116A (electrode), the switching body 140A, and the conductive body 142A (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 116B (electrode), the switching body 140B, and the conductive body 142B (electrode) form a metal-insulator-metal (MIM) memory device 134. FIGS. 13 to 15 show a fourth embodiment of the present invention. In this embodiment, referring to FIGS. 13 and 14, the SiO2 layer 95, the openings 112, 114, and the conductive bodies 116A, 116B are formed as shown in FIGS. 10 and 11. Next (FIG. 15), a dielectric such as an SiN layer 150 is deposited on the resulting structure, and openings 152, 154 are provided therein using standard photolithography techniques. A layer of switching material 156 is deposited on the resulting structure, filling the openings 152, 154 in the dielectric layer 150 and contacting the conductive bodies 116A, 116B. A chemical mechanical polishing step is performed to remove the portion of the switching material layer 156 overlying the dielectric layer 150, so that the switching bodies 156A, 156B remain in and fill the openings 152, 154 in the dielectric layer 150. Next, a conductive layer 160 is deposited on the resulting structure. Using standard photolithography techniques, the conductive layer 160 is patterned as shown so that the conductive body 160A is on the switching body 156A and the conductive body 160B is on the switching body 156B. A cladding dielectric layer 122 is deposited on the resulting structure. Using standard photolithography techniques, openings 123, 124 are provided in the layer 122 to expose the conductive bodies 160A, 160B.
A conductive metal layer 126 is deposited on the resulting structure and is connected to the conductive bodies 160A, 160B through conductive Ti/TiN adhesion layers 128, 130. The conductive body 116A (electrode), the switching body 156A, and the conductive body 160A (electrode) form a metal-insulator-metal (MIM) memory device 132. In the same manner, the conductive body 116B (electrode), the switching body 156B, and the conductive body 160B (electrode) form a metal-insulator-metal (MIM) memory device 134. FIGS. 16 to 18 show a fifth embodiment of the present invention. In this embodiment, referring to FIGS. 16 and 17, the nitride layer 110, the openings 112, 114, and the conductive bodies 116A, 116B are formed as shown in FIGS. 10 and 11. Next (FIG. 18), a dielectric such as an SiN layer 150 is deposited on the resulting structure, and openings 152, 154 are provided therein using standard photolithography techniques. A layer of switching material 156 is deposited on the resulting structure, filling the openings 152, 154 in the dielectric layer 150 and contacting the conductive bodies 116A, 116B. A chemical mechanical polishing step is performed to remove the portion of the switching material layer 156 overlying the dielectric layer 150, so that the switching bodies 156A, 156B remain in and fill the openings 152, 154 in the dielectric layer 150. A cladding dielectric layer 122 is deposited on the resulting structure. Using standard photolithography techniques, openings 123, 124 are provided in the layer 122 to expose the switching bodies 156A, 156B. A conductive metal layer 126 is deposited on the resulting structure and is connected to the switching bodies 156A, 156B through conductive Ti/TiN adhesion layers 128, 130. The conductive body 116A (electrode), the switching body 156A, and the conductive body (i.e., the adhesive layer 128) (electrode) form a metal-insulator-metal (MIM) memory device 132.
Likewise, the conductive body 116B (electrode), the switching body 156B, and the conductive body (i.e., the adhesive layer 130) (electrode) form a metal-insulator-metal (MIM) memory device 134. FIGS. 19 to 24 show a sixth embodiment of the present invention. In this embodiment, similar to the previous embodiments, openings 200 and 202 are provided in the nitride layer 110. As a next step, an insulating layer 204, such as SiN, SiC, or a non-conductive metal oxide, is deposited on the resulting structure (FIG. 19). Standard photolithography techniques are used to remove the portions of the insulating layer 204 that are in contact with the plugs 106, 108, resulting in the structure of FIG. 20. A conductive layer 214 is deposited on the resulting structure, on the nitride layer 110 and in the remaining openings 216, 218 in contact with the plugs 106, 108 (FIG. 21). Referring to FIG. 22, a chemical mechanical polishing step is performed in which the portion of the layer 214 overlying the nitride layer 110 and portions of the insulating layer 204 are removed, exposing the nitride layer 110 itself, so that insulating walls 206 and 208 are disposed in the opening 200 and insulating walls 210 and 212 are disposed in the opening 202. The conductive bodies 214A and 214B are formed in and fill the respective remaining openings 216 and 218. Next, the switching bodies 220A, 220B are formed as described above, so that the remaining conductive body 214A and the switching body 220A thereon and in contact therewith fill the remaining opening 216, and the remaining conductive body 214B and the switching body 220B thereon and in contact therewith fill the remaining opening 218.
The conductive body 214A and the switching body 220A are located between the walls 206 and 208, and the conductive body 214B and the switching body 220B are located between the walls 210 and 212. Next, a conductive layer 230 is deposited on the resulting structure. Using standard photolithography techniques, the conductive layer 230 is patterned to form conductive bodies 230A, 230B, the conductive body 230A being on and in contact with the switching body 220A, and the conductive body 230B being on and in contact with the switching body 220B (FIG. 23). Referring to FIGS. 23 and 24, an insulating layer 232, such as SiN or SiC, is deposited on the resulting structure. A cladding dielectric layer 122 is deposited on the resulting structure. Using standard photolithography techniques, openings 123, 124 are provided in the layers 122 and 232 to expose the conductive bodies 230A, 230B. A conductive metal layer 126 is deposited on the resulting structure and is connected to the conductive bodies 230A, 230B through conductive Ti/TiN adhesion layers 128, 130. The conductive body 214A (electrode), the switching body 220A, and the conductive body 230A (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 214B (electrode), the switching body 220B, and the conductive body 230B (electrode) form a metal-insulator-metal (MIM) memory device 134. FIGS. 25 to 29 show a seventh embodiment of the present invention. In this embodiment, similar to the previous embodiments, openings 200 and 202 are provided in the nitride layer 110. As a next step, a conductive layer 240, such as TiN, TaN, or WN, is deposited on the resulting structure (FIG. 25). A conductive layer 254 is deposited on the resulting structure and in the remaining openings 256, 258 (FIG. 26). Referring to FIG.
27, a chemical mechanical polishing step is performed in which the portion of the layer 254 overlying the nitride layer 110 and the portion of the conductive layer 240 overlying the nitride layer 110 are removed, to obtain the structure of FIG. 27, having conductive walls 242, 244 and a conductive connection portion 246 connecting the walls 242, 244, all in the opening 200, and conductive walls 248, 250 and a conductive connection portion 252 connecting the walls 248, 250, all in the opening 202. The nitride layer 110 itself is exposed, and the conductive bodies 254A and 254B are formed in and fill the respective remaining openings 256 and 258. Then, the switching bodies 260A, 260B are formed as described above, so that the remaining conductive body 254A and the switching body 260A thereon and in contact therewith fill the remaining opening 256, and the remaining conductive body 254B and the switching body 260B thereon and in contact therewith fill the remaining opening 258. The conductive body 254A and the switching body 260A are located between the walls 242 and 244, with the conductive body 254A on the portion 246, and the conductive body 254B and the switching body 260B are located between the walls 248 and 250, with the conductive body 254B on the portion 252. Next, a conductive layer 262 is deposited on the resulting structure. Then, a conductive layer 264, such as TiN, TaN, or WN, is deposited on the conductive layer 262. Using standard photolithography techniques, the conductive layer 262 and the conductive layer 264 are patterned to form conductive bodies 266, 268, the conductive body 266 being on and in contact with the switching body 260A, and the conductive body 268 being on and in contact with the switching body 260B (FIG. 28). A cladding dielectric layer 122 is deposited on the resulting structure.
Using standard photolithography techniques, openings 123, 124 are provided in the layer 122 to expose the conductive bodies 266, 268. A conductive metal layer 126 is deposited on the resulting structure and is connected to the conductive bodies 266, 268 through conductive Ti/TiN adhesion layers 128, 130. The conductive body 254A (electrode), the switching body 260A, and the conductive body 266 (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 254B (electrode), the switching body 260B, and the conductive body 268 (electrode) form a metal-insulator-metal (MIM) memory device 134.

FIGS. 30 to 34 show an eighth embodiment of the present invention. The process steps of FIGS. 30 to 31 are similar to those of FIGS. 25 to 26. The structure of FIG. 31 is chemically-mechanically polished to obtain the structure of FIG. 32. Next (FIG. 33), a dielectric layer 270, such as SiN, is deposited on the resulting structure, and openings 272, 274 are provided therein. A switching material layer 276 is deposited on the resulting structure, fills the openings 272, 274 in the dielectric layer 270, and contacts the conductive bodies 254A, 254B. A chemical mechanical polishing step is performed, in which the portion of the layer 276 covering the dielectric layer 270 is removed and the dielectric layer 270 itself is exposed, such that the switching bodies 276A, 276B are formed in and fill the respective openings 272, 274. Then, a conductive layer 262 is deposited on the resulting structure, and a conductive layer 264 is deposited on the conductive layer 262. Using standard photolithography techniques, the conductive layers 262 and 264 are patterned to form conductive bodies 266, 268, the conductive body 266 on and in contact with the switching body 276A, and the conductive body 268 on and in contact with the switching body 276B. A cladding dielectric layer 122 is deposited on the resulting structure.
Using standard photolithography techniques, openings 123, 124 are provided in the layer 122 to expose the conductive bodies 266, 268. A conductive metal layer 126 is deposited on the resulting structure and is connected to the conductive bodies 266, 268 through conductive Ti/TiN adhesion layers 128, 130 (FIG. 34). The conductive body 254A (electrode), the switching body 276A, and the conductive body 266 (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 254B (electrode), the switching body 276B, and the conductive body 268 (electrode) form a metal-insulator-metal (MIM) memory device 134.

FIGS. 35 to 39 show a ninth embodiment of the present invention. The process steps of FIGS. 35 to 36 are similar to those of FIGS. 19 to 20. The structure of FIG. 36 is chemically-mechanically polished to obtain the structure of FIG. 37. Next, referring to FIG. 38, a dielectric layer 280, such as SiN, is deposited on the resulting structure, and openings 282, 284 are provided therein. A switching material layer 286 is deposited on the resulting structure, fills the openings 282, 284 in the dielectric layer 280, and contacts the conductive bodies 214A, 214B. A chemical mechanical polishing step is performed, in which the portion (i.e., the overburden) of the layer 286 covering the dielectric layer 280 is removed and the dielectric layer 280 itself is exposed, such that the switching bodies 286A, 286B are formed in and fill the respective openings 282, 284. Next, a conductive layer is deposited on the resulting structure and patterned as shown to form conductive bodies 292, 294 in contact with the respective switching bodies 286A, 286B, and an insulating layer 296, such as SiN, is deposited on the resulting structure. A cladding dielectric layer 122 is deposited on the resulting structure.
Using standard photolithography techniques, openings 123, 124 are provided in the layers 122 and 296 to expose the conductive bodies 292, 294. A conductive metal layer 126 is provided on the resulting structure and is connected to the conductive bodies 292, 294 through conductive Ti/TiN adhesion layers 128, 130 (FIG. 39). The conductive body 214A (electrode), the switching body 286A, and the conductive body 292 (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 214B (electrode), the switching body 286B, and the conductive body 294 (electrode) form a metal-insulator-metal (MIM) memory device 134.

FIGS. 40 to 44 show a tenth embodiment of the present invention. The process steps of FIGS. 40 to 41 are similar to those of FIGS. 19 to 20. The structure of FIG. 41 is chemically-mechanically polished to obtain the structure of FIG. 42. Next (FIGS. 43 and 44), a dielectric layer 280 is deposited on the resulting structure and is provided with openings 282, 284 therein. A switching material layer 286 is deposited on the resulting structure, fills the openings 282, 284 in the dielectric layer 280, and contacts the conductive bodies 214A, 214B. A chemical mechanical polishing step is performed, in which the portion of the layer 286 covering the dielectric layer 280 is removed and the dielectric layer 280 itself is exposed, such that the switching bodies 286A, 286B are formed in and fill the respective openings 282, 284. An insulating layer 300, such as SiN, is deposited on the resulting structure, and a cladding dielectric layer 122 is deposited on the insulating layer 300. Using standard photolithography techniques, the openings 123, 124 are provided in the insulating layer 300 and the dielectric layer 122, exposing the switching bodies 286A, 286B.
The conductive metal layer 126 is provided on the resulting structure and is connected to the switching bodies 286A, 286B through conductive Ti/TiN adhesion layers 128, 130. The conductive body 214A (electrode), the switching body 286A, and the conductive body (i.e., the adhesion layer 128) (electrode) form a metal-insulator-metal (MIM) memory device 132. Likewise, the conductive body 214B (electrode), the switching body 286B, and the conductive body (i.e., the adhesion layer 130) (electrode) form a metal-insulator-metal (MIM) memory device 134.

The methods described above provide various metal damascene processes for forming metal-insulator-metal (MIM) devices. The various methods are straightforward and effective in reliably forming such devices. In particular, the problems raised above with regard to etching materials to form the devices are avoided. In addition, the methods provide dimensional scalability of the height of the device.

The foregoing description of the embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Other modifications and variations are possible in light of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally, and equitably entitled.
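The height scalability noted above can be illustrated with the standard parallel-plate approximation C = ε0·εr·A/d, in which the electrode area A grows with the damascene wall height. The sketch below is a hedged illustration: the permittivity and all dimensions are assumed example values, not figures from the disclosure.

```python
# Illustrative parallel-plate model: C = eps0 * eps_r * A / d.
# All dimensions and the dielectric constant are assumed example values.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mim_capacitance(width_m, height_m, thickness_m, eps_r):
    """Capacitance of one MIM electrode pair, parallel-plate approximation."""
    area = width_m * height_m
    return EPS0 * eps_r * area / thickness_m

# Doubling the damascene wall height doubles the electrode area, and
# hence the capacitance, without enlarging the lateral footprint.
c1 = mim_capacitance(100e-9, 200e-9, 10e-9, eps_r=25.0)
c2 = mim_capacitance(100e-9, 400e-9, 10e-9, eps_r=25.0)
print(f"height 200 nm: {c1 * 1e15:.3f} fF")
print(f"height 400 nm: {c2 * 1e15:.3f} fF")
```

This is why height scaling is attractive: the storage element grows in the one dimension that does not cost die area.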
The invention relates to apparatuses including stacked horizontal capacitor structures and related methods, memory devices, and electronic systems. An apparatus includes fin structures comprising individual levels of a conductive material having elongated portions extending in a first horizontal direction, first conductive lines extending in a second horizontal direction transverse to the first horizontal direction, and second conductive lines extending in a vertical direction transverse to each of the first horizontal direction and the second horizontal direction. At least portions of the first conductive lines are aligned vertically. The apparatus also includes horizontal capacitor structures comprising the conductive material of the fin structures and access devices proximate intersections of the first conductive lines and the second conductive lines. The access devices comprise the conductive material of the fin structures.
1. An apparatus, comprising: a fin structure comprising individual levels of a conductive material, the conductive material including elongated portions extending in a first horizontal direction; first conductive lines extending in a second horizontal direction transverse to the first horizontal direction, at least portions of the first conductive lines being vertically aligned; second conductive lines extending in a vertical direction transverse to each of the first horizontal direction and the second horizontal direction; horizontal capacitor structures comprising the conductive material of the fin structure; and access devices proximate intersections of the first conductive lines and the second conductive lines, the access devices comprising the conductive material of the fin structure.

2. The apparatus of claim 1, further comprising support structures extending in the second horizontal direction, the support structures comprising an electrically insulating material between vertically adjacent portions of the conductive material of the individual levels of the fin structure.

3. The apparatus of claim 1, wherein each access device comprises a gate structure at least partially surrounding a gate dielectric material, at least some of the gate structures substantially surrounding the conductive material of the fin structure.

4. The apparatus of claim 3, wherein each horizontal capacitor structure and its corresponding access device share a common gate structure.

5. The apparatus of any one of claims 1 to 4, wherein the first conductive lines are configured as data lines, and the horizontal capacitor structures on the conductive material of an individual level of the fin structure share a common data line.

6. The apparatus of any one of claims 1 to 4, wherein the second conductive lines comprise access lines, and the horizontal capacitor structures aligned in a single vertical column share a common access line.

7. The apparatus of any one of claims 1 to 4, further comprising: a stepped structure adjacent at least one of a longitudinal end or a lateral side of the elongated portions of the conductive material of the fin structure; and conductive contacts on individual steps of the stepped structure, the first conductive lines on an individual level sharing a common conductive contact.

8. The apparatus of any one of claims 1 to 4, wherein adjacent portions of the conductive material of adjacent fin structures are electrically connected to one another in a contact region proximate longitudinal ends of the fin structures.

9. The apparatus of any one of claims 1 to 4, further comprising a base material underlying the horizontal capacitor structures, wherein the elongated portions of the conductive material of the fin structure extend substantially parallel to a major plane of the base material, and elongated portions of the second conductive lines extend substantially transverse to the major plane of the base material.

10. A method of forming an apparatus, comprising: forming at least one opening extending vertically through a stack of alternating conductive and dielectric materials overlying a base material, remaining portions of the alternating conductive and dielectric materials of the stack defining fin structures extending in a first horizontal direction; forming at least one gate structure adjacent the conductive material of the fin structures; forming horizontal capacitor structures adjacent the conductive material of individual levels of the fin structures; forming at least one stepped structure comprising the alternating conductive and dielectric materials of the stack; forming an electrically insulating material over at least a portion of the stack; and forming conductive contacts through openings in the electrically insulating material.

11. The method of claim 10, further comprising: forming first conductive lines comprising the conductive material of the stack; and forming second conductive lines comprising a material of the at least one gate structure, the second conductive lines extending in a vertical direction transverse to each of the first horizontal direction of the fin structures and a major plane of the base material.

12. The method of claim 10, further comprising: forming the conductive material of the fin structures as junctionless nanowires comprising a conductively doped semiconductor material, the conductively doped semiconductor material including one of a p-type dopant or an n-type dopant and not including the other of the p-type dopant or the n-type dopant; and forming access devices comprising portions of the conductively doped semiconductor material of the junctionless nanowires.

13. The method of any one of claims 10 to 12, wherein forming the at least one stepped structure comprises forming a single stepped structure proximate lateral sides of the horizontal capacitor structures and extending substantially parallel to the elongated portions of the fin structures.

14. The method of any one of claims 10 to 12, wherein: forming the at least one opening comprises forming a single opening in a central portion of the stack of alternating conductive and dielectric materials to form two opposing fin structures and a contact region at a longitudinal end of the stack; and forming the at least one stepped structure comprises forming a single stepped structure proximate the contact region on a side of the at least one gate structure opposite the horizontal capacitor structures.

15. The method of any one of claims 10 to 12, further comprising forming support structures extending in a second horizontal direction transverse to the first horizontal direction, wherein forming the support structures comprises: forming openings extending vertically through the stack to the base material using an anisotropic material removal process; removing portions of the dielectric material between vertically adjacent portions of the conductive material using an isotropic material removal process; and forming an additional dielectric material between the vertically adjacent portions of the conductive material.

16. A memory device, comprising: at least one memory array of memory cells, comprising: data lines extending in a horizontal direction; access lines extending in a vertical direction substantially transverse to the horizontal direction; capacitor structures horizontally aligned in the horizontal direction and vertically stacked in the vertical direction; and access devices electrically coupled to the access lines, the access devices comprising a conductive material common with the capacitor structures.

17. The memory device of claim 16, wherein the capacitor structures comprise between 10 and 100 individual capacitor containers directly vertically aligned with one another, the individual capacitor containers of a single vertical column sharing a common access line.

18. The memory device of claim 16 or claim 17, further comprising junctionless nanowires comprising elongated portions of the conductive material extending in the horizontal direction, the junctionless nanowires configured as electrodes of the individual capacitor structures.

19. The memory device of claim 16 or claim 17, further comprising: fin structures comprising the conductive material of individual levels of the capacitor structures; and gate structures vertically aligned with one another, a single gate structure on each level of an individual fin structure, wherein the single gate structure is coupled to the capacitor structures of the individual fin structure.

20. The memory device of claim 19, further comprising conductive contacts and a CMOS under array (CUA) region underlying the at least one memory array, wherein the conductive contacts electrically connect the gate structures to the CUA region.

21. An electronic system, comprising: at least one input device; at least one output device; at least one processor device operably coupled to the at least one input device and the at least one output device; and a memory device operably coupled to the at least one processor device, the memory device comprising: capacitor structures, each capacitor structure comprising a first electrode and a second electrode separated from one another by a dielectric material, wherein the first electrode comprises elongated portions of a conductive material extending in a horizontal direction, and opposing portions of the first electrode are connected to one another by a contact portion extending therebetween; gate structures proximate the contact portions, wherein a single gate structure is coupled to each of the opposing portions of the first electrode; and conductive lines extending in a vertical direction transverse to the horizontal direction, the conductive lines connected to respective gate structures of corresponding capacitor structures stacked in the vertical direction.
Apparatuses Including Stacked Horizontal Capacitor Structures and Related Methods, Memory Devices, and Electronic Systems

Priority Claim

This application claims the benefit of the filing date of U.S. Patent Application Serial No. 16/886,497, filed May 28, 2020, for "Apparatuses Including Stacked Horizontal Capacitor Structures and Related Methods, Memory Devices, and Electronic Systems."

Technical Field

The embodiments disclosed herein relate to the field of microelectronic device design and fabrication. More particularly, the embodiments of the present disclosure relate to apparatuses including stacked horizontal capacitor structures, to related memory devices and electronic systems, and to methods of forming the apparatuses.

Background

A continuing goal of integrated circuit fabrication is to increase integration density. Dynamic random access memory (DRAM) utilizes DRAM capacitors to store an amount of charge that represents the logical value of a stored bit. Some DRAM capacitors include container-shaped capacitors in which one electrode is shaped as a container, with the cell dielectric material and the other electrode only inside the container (e.g., single-sided hole capacitors), only outside the container (e.g., single-sided cylindrical capacitors), or on both the inside and the outside of the container (e.g., double-sided containers). To increase integration density, the lateral footprint of DRAM capacitors has been reduced by increasing the aspect ratio (i.e., the ratio of height to width or diameter) and by placing adjacent DRAM capacitors closer to one another. The high aspect ratio and small dimensions result in structurally fragile containers that are prone to toppling or breaking.
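The trade-off described above, in which shrinking the container diameter while preserving capacitance forces taller and more fragile containers, can be made concrete with the textbook coaxial-capacitor approximation C = 2πε·h / ln(r_outer / r_inner). This is an illustrative sketch only; the 10 fF target, the dielectric constant, and all radii are assumed values, not taken from the disclosure.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def height_for_target(c_target, r_outer_m, r_inner_m, eps_r):
    """Container height needed to reach a target capacitance at a given
    diameter, using the coaxial (cylindrical) capacitor approximation."""
    return c_target * math.log(r_outer_m / r_inner_m) / (2.0 * math.pi * EPS0 * eps_r)

# Assumed target and geometry: holding the radius ratio fixed keeps the
# log term constant, so the required height does not change as the cell
# shrinks -- but the aspect ratio (height / diameter) doubles when the
# radius halves, which is the fragility problem described above.
target = 10e-15  # assumed ~10 fF storage target
for r_out in (50e-9, 25e-9):
    h = height_for_target(target, r_out, 0.8 * r_out, eps_r=25.0)
    print(f"r_outer {r_out * 1e9:.0f} nm -> height {h * 1e6:.2f} um, "
          f"aspect ratio {h / (2 * r_out):.0f}")
```

The numbers are illustrative, but the scaling behavior (aspect ratio inversely proportional to diameter at fixed capacitance) is exactly the structural-fragility pressure the background describes.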
A container-shaped capacitor may be oriented vertically as a hollow cylinder anchored at its top and bottom, but it can move laterally, causing deformation (e.g., damage) of the DRAM capacitor. Therefore, the structural stability and mechanical strength of the container are important to the operability of the DRAM capacitors in a DRAM device. Retaining structures (e.g., lattice structures) have been used to reinforce vertically oriented containers by supporting the outer sidewalls of the container defined by an electrode. However, use of the retaining structures increases the complexity of the DRAM capacitor fabrication process.

In addition, conventional DRAM device structures include multiple levels of conductive structures (e.g., access lines, data lines, etc.) separated by dielectric materials. Some conventional DRAM device structures position capacitor pillars vertically above the access lines and corresponding access devices (e.g., transistors). However, in such device structures, forming the capacitor pillars vertically above the access lines and the access devices requires additional real estate in the DRAM device.

As the dimensions of a DRAM capacitor decrease, the cross-sectional area of the container may decrease, resulting in a decrease in the capacitance of the container. The reduction in the size of each DRAM capacitor, together with the increased proximity of adjacent capacitors, increases the susceptibility to bridging (e.g., electrical connection) between two or more adjacent capacitors during fabrication and the susceptibility to leakage during operation of the DRAM device.
Therefore, as the dimensions of DRAM devices are scaled down to increase integration density, conventional DRAM capacitors may not sufficiently reduce leakage caused by coupling capacitance between horizontally adjacent capacitors.

Summary of the Invention

The embodiments described herein include apparatuses including stacked horizontal capacitor structures, and relate to related memory devices and electronic systems and to methods of forming the apparatuses. According to embodiments described herein, an apparatus includes a fin structure comprising individual levels of a conductive material, the conductive material including elongated portions extending in a first horizontal direction; first conductive lines extending in a second horizontal direction transverse to the first horizontal direction, at least portions of the first conductive lines being vertically aligned; second conductive lines extending in a vertical direction transverse to each of the first horizontal direction and the second horizontal direction; horizontal capacitor structures comprising the conductive material of the fin structure; and access devices proximate intersections of the first conductive lines and the second conductive lines, the access devices comprising the conductive material of the fin structure.

According to other embodiments described herein, a method of forming an apparatus includes forming at least one opening extending vertically through a stack of alternating conductive and dielectric materials overlying a base material, remaining portions of the alternating conductive and dielectric materials of the stack defining fin structures extending in a first horizontal direction; forming at least one gate structure adjacent the conductive material of the fin structures; forming horizontal capacitor structures adjacent the conductive material of individual levels of the fin structures; forming at least one stepped structure
comprising the alternating conductive and dielectric materials of the stack; forming an electrically insulating material over at least a portion of the stack; and forming conductive contacts through openings in the electrically insulating material.

In addition, according to other embodiments described herein, a memory device includes at least one memory array of memory cells, comprising: data lines extending in a horizontal direction; access lines extending in a vertical direction substantially transverse to the horizontal direction; capacitor structures horizontally aligned in the horizontal direction and vertically stacked in the vertical direction; and access devices electrically coupled to the access lines, the access devices comprising a conductive material common with the capacitor structures.

According to further embodiments described herein, an electronic system includes at least one input device; at least one output device; at least one processor device operably coupled to the at least one input device and the at least one output device; and a memory device operably coupled to the at least one processor device. The memory device comprises capacitor structures, each comprising a first electrode and a second electrode separated from one another by a dielectric material, wherein the first electrode includes elongated portions of a conductive material extending in a horizontal direction, and opposing portions of the first electrode are connected to one another by a contact portion extending therebetween; gate structures proximate the contact portions, wherein a single gate structure is coupled to each of the opposing portions of the first electrode; and conductive lines extending in a vertical direction transverse to the horizontal direction, the conductive lines connected to respective gate structures of corresponding capacitor structures stacked in the vertical direction.

Description of the
Drawings

FIGS. 1A to 7B are simplified partial top-down views (FIGS. 1A, 2A, 3A, 4A, 5A, 6A, and 7A) and simplified partial cross-sectional views (FIGS. 1B, 2B, 3B, 4B, 5B, 6B, and 7B) illustrating a method of forming an apparatus including a device structure, in accordance with embodiments of the present disclosure, wherein the cross-sectional views of FIGS. 1B, 2B, 3B, 4B, 5B, 6B, and 7B are taken along line B-B of FIGS. 1A, 2A, 3A, 4A, 5A, 6A, and 7A, respectively;

FIG. 8 is a simplified perspective view of the apparatus of FIGS. 1A to 7B, in accordance with embodiments of the present disclosure;

FIGS. 9A to 15C are simplified partial top-down views (FIGS. 9A, 10A, 11A, 12A, 13A, 14A, and 15A) and simplified partial cross-sectional views (FIGS. 9B, 9C, 10B, 10C, 11B, 11C, 12B, 12C, 13B, 13C, 14B, 14C, 15B, and 15C) illustrating a method of forming another apparatus, in accordance with embodiments of the present disclosure, wherein the cross-sectional views of FIGS. 9B, 10B, 11B, 12B, 13B, 14B, and 15B and the cross-sectional views of FIGS. 9C, 10C, 11C, 12C, 13C, 14C, and 15C are taken along lines B-B and C-C, respectively, of FIGS. 9A, 10A, 11A, 12A, 13A, 14A, and 15A;

FIG. 16 is a simplified perspective view of the apparatus of FIGS. 9A to 15C, in accordance with embodiments of the present disclosure;

FIG. 17 is a simplified partial top-down view of the apparatus of FIGS. 9A to 15C, in accordance with embodiments of the present disclosure;

FIG. 18 is a schematic block diagram illustrating a microelectronic device, in accordance with embodiments of the present disclosure; and

FIG.
19 is a schematic block diagram illustrating an electronic system, in accordance with embodiments of the present disclosure.

Detailed Description

Disclosed is an apparatus (e.g., a microelectronic device, a semiconductor device, a memory device) that includes a fin structure comprising individual levels of a conductive material, the conductive material having elongated portions extending in a first horizontal direction. The apparatus includes first conductive lines (e.g., data lines) extending in a second horizontal direction transverse to the first horizontal direction, and second conductive lines (e.g., access lines) extending in a vertical direction transverse to each of the first horizontal direction and the second horizontal direction. At least portions of the first conductive lines are vertically aligned. Some portions of the fin structure are present in access devices of the apparatus, and other portions of the fin structure are present in capacitor structures of the apparatus. The capacitor structures include horizontal capacitor structures comprising the conductive material of the fin structure. Each horizontal capacitor structure includes the conductive material of an individual level of the fin structure, extending in the first horizontal direction and connected by a contact region extending in the second horizontal direction. Accordingly, the horizontal capacitor structures are horizontally aligned in the first horizontal direction and vertically stacked in the vertical direction. As used herein, the term "horizontally aligned," with respect to horizontal capacitor structures, refers to and includes an orientation of the capacitor structures such that the elongated portions of the electrodes of each capacitor structure extend in a horizontal direction. In other words, a major plane of each horizontal capacitor structure is substantially parallel to a major plane of an underlying base material (e.g., substrate).
Multiple horizontal capacitor structures may be stacked vertically such that the individual capacitor structures (e.g., individual capacitor containers) are directly vertically aligned with one another, with their outer peripheries vertically aligned. The access devices are proximate intersections of the first conductive lines and the second conductive lines. The access devices comprise the conductive material of the fin structure. Each access device includes a gate structure at least partially surrounding a gate dielectric material, the gate structure substantially surrounding the conductive material of the fin structure. In some embodiments, each horizontal capacitor structure and its corresponding access device share a common gate structure, the horizontal capacitor structures on the conductive material of an individual level of the fin structure share a common data line, and the horizontal capacitor structures aligned in a single vertical column share a common access line. By providing capacitor structures that are horizontally aligned and vertically stacked within the apparatus, such a configuration may achieve improved density as the dimensions of memory devices are scaled down to increase memory cell density, which may result in reduced power consumption during use and operation. Such a configuration may also reduce the occurrence of bridging (e.g., electrical connection) between two or more adjacent capacitor structures and reduce leakage during use and operation of the apparatus.

The following description provides specific details, such as material types, material thicknesses, and process conditions, in order to provide a thorough description of the embodiments described herein. However, one of ordinary skill in the art will understand that the embodiments disclosed herein may be practiced without employing these specific details.
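The sharing relationships described above (one common data line per fin level, one common access line per vertical column) imply a simple addressing scheme in which a cell sits at the intersection of its level's data line and its column's access line. The toy model below sketches only that mapping; the line names and array sizes are illustrative assumptions, not from the disclosure.

```python
# Toy model of line sharing in a stacked horizontal-capacitor array:
# cells on one fin level share a data line; cells in one vertical column
# share an access line. Sizes and names are illustrative assumptions.

from collections import defaultdict

LEVELS = 4   # vertical fin levels, one shared data line each
COLUMNS = 8  # vertical columns, one shared access line each

def select_cell(level, column):
    """A cell is addressed by the intersection of its two shared lines."""
    return (f"DL{level}", f"AL{column}")

cells_on_line = defaultdict(list)
for lvl in range(LEVELS):
    for col in range(COLUMNS):
        dl, al = select_cell(lvl, col)
        cells_on_line[dl].append((lvl, col))
        cells_on_line[al].append((lvl, col))

# Every data line serves one whole level; every access line serves one column.
assert all(len(cells_on_line[f"DL{l}"]) == COLUMNS for l in range(LEVELS))
assert all(len(cells_on_line[f"AL{c}"]) == LEVELS for c in range(COLUMNS))
print(f"{LEVELS * COLUMNS} cells addressed by "
      f"{LEVELS} data lines + {COLUMNS} access lines")
```

The point of the sketch is the economy of wiring: L·C cells need only L + C shared lines, which is what the vertical stacking buys.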
Indeed, the embodiments may be practiced in conjunction with conventional fabrication techniques employed in the semiconductor industry. In addition, the description provided herein does not form a complete description of a microelectronic device or a complete process flow for manufacturing microelectronic devices, and the structures described below do not form a complete microelectronic device. Only those process acts and structures necessary to understand the embodiments described herein are described in detail below. Additional acts to form a complete microelectronic device may be performed by conventional techniques.

The materials described herein may be formed by conventional techniques including, but not limited to, spin coating, blanket coating, chemical vapor deposition (CVD), atomic layer deposition (ALD), plasma-enhanced ALD, or physical vapor deposition (PVD). Alternatively, the materials may be grown in situ. Depending on the specific material to be formed, the technique for depositing or growing the material may be selected by one of ordinary skill in the art. Unless the context indicates otherwise, removal of materials may be accomplished by any suitable technique including, but not limited to, etching, abrasive planarization (e.g., chemical-mechanical planarization), or other known methods.

The drawings presented herein are for illustrative purposes only and are not intended to be actual views of any particular material, component, structure, device, or system. Variations from the shapes depicted in the drawings as a result of, for example, manufacturing techniques and/or tolerances are to be expected. Therefore, the embodiments described herein are not to be construed as being limited to the particular shapes or regions as illustrated, but include deviations in shape that result, for example, from manufacturing.
For example, a region illustrated or described as box-shaped may have rough and/or non-linear features, and a region illustrated or described as round may include some rough and/or linear features. Moreover, sharp angles that are illustrated may be rounded, and vice versa. Thus, the regions illustrated in the figures are schematic in nature, and their shapes are not intended to illustrate the precise shape of a region and do not limit the scope of the present claims. The drawings are not necessarily drawn to scale. Additionally, elements common between figures may retain the same numerical designation.

As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.

As used herein, "about" or "approximately" in reference to a value for a particular parameter is inclusive of the degree of variance from the stated value that is within acceptable tolerances for the particular parameter, as would be understood by one of ordinary skill in the art. For example, "about" or "approximately" in reference to a numerical value may include additional numerical values within a range of from 90.0% to 110.0% of the numerical value (e.g., within a range of from 95.0% to 105.0% of the numerical value, within a range of from 97.5% to 102.5% of the numerical value, within a range of from 99.0% to 101.0% of the numerical value, within a range of from 99.5% to 100.5% of the numerical value, or within a range of from 99.9% to 100.1% of the numerical value).

As used herein, spatially relative terms (e.g., "beneath", "below", "lower", "bottom", "above", "upper", "top", "front", "rear", "left", "right", and the like) may be used for ease of description to describe one element's or feature's relationship to another element or feature as illustrated in the figures.
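The percentage bands that define "about" and "approximately" above can be expressed as a simple tolerance check. The following is only an illustrative sketch; the function name `within_about` and the default 10% band are assumptions for illustration and are not part of the disclosure.

```python
def within_about(value, stated, tolerance=0.10):
    """Return True if `value` lies within the +/- `tolerance` band
    around `stated` (default 10%, i.e., 90.0% to 110.0% of `stated`)."""
    return stated * (1.0 - tolerance) <= value <= stated * (1.0 + tolerance)

# A 9.5 nm measurement is "about" a stated 10 nm under the 10% band...
print(within_about(9.5, 10.0))         # True
# ...but not under the tighter 2.5% band (97.5% to 102.5%).
print(within_about(9.5, 10.0, 0.025))  # False
```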
Unless otherwise specified, the spatially relative terms are intended to encompass different orientations of the materials in addition to the orientation depicted in the figures. For example, if materials in the figures are inverted, elements described as "below" or "beneath" or "under" or "on bottom of" other elements or features would then be oriented "above" or "on top of" the other elements or features. Thus, the term "below" can encompass both an orientation of above and below, depending on the context in which the term is used, which will be evident to one of ordinary skill in the art. The materials may be otherwise oriented (e.g., rotated 90 degrees, inverted, flipped), and the spatially relative descriptors used herein are to be interpreted accordingly.

As used herein, the terms "vertical", "longitudinal", "horizontal", and "lateral" are in reference to a major plane of a structure and are not necessarily defined by the earth's gravitational field. A "horizontal" or "lateral" direction is a direction that is substantially parallel to the major plane of the structure, while a "vertical" or "longitudinal" direction is a direction that is substantially perpendicular to the major plane of the structure.
The major plane of the structure is defined by a surface of the structure having a relatively large area compared to other surfaces of the structure.

As used herein, the term "configuration" refers to a size, shape, material composition, and arrangement of one or more of at least one structure and at least one device that facilitates operation of the one or more of the structure and the device in a predetermined way.

As used herein, the term "spacing" refers to the distance between identical points in two adjacent (i.e., neighboring) features.

As used herein, reference to an element as being "on" or "over" another element means and includes the element being directly on top of, directly adjacent to (e.g., directly laterally adjacent to, directly vertically adjacent to), directly underneath, or in direct contact with the other element. It also includes the element being indirectly on top of, indirectly adjacent to (e.g., indirectly laterally adjacent to, indirectly vertically adjacent to), indirectly underneath, or near the other element, with other elements present therebetween. In contrast, when an element is referred to as being "directly on" or "directly adjacent to" another element, there are no intervening elements present.

As used herein, the phrase "coupled to" refers to structures operatively connected with each other, such as electrically connected through a direct ohmic connection or through an indirect connection (e.g., by way of another structure).

As used herein, the term "selectively etchable" means and includes a material that exhibits a greater etch rate responsive to exposure to a given etch chemistry relative to another material exposed to the same etch chemistry. For example, the material may exhibit an etch rate that is at least about five times greater than the etch rate of another material, such as about ten times, about twenty times, or about forty times greater than the etch rate of the other material.
A person of ordinary skill in the art is able to select an etch chemistry and etch conditions for selectively etching a desired material.

As used herein, the term "junction-free nanowire" means and includes a structure comprising one or more conductive materials doped with dopants of a single polarity, such that no implants of differing polarity conventionally used to form p-n junctions (e.g., pnp junctions, npn junctions) are present.

As used herein, the term "substantially" in reference to a given parameter, property, or condition means and includes to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a degree of variance, such as within acceptable tolerances. By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90.0% met, at least 95.0% met, at least 99.0% met, at least 99.9% met, or even 100.0% met.

As used herein, the term "substrate" means and includes a material (e.g., a base material) or construction upon which additional materials are formed. The substrate may be a semiconductor substrate, a base semiconductor material on a supporting structure, a metal electrode, or a semiconductor substrate having one or more materials, layers, structures, or regions formed thereon. The materials on the semiconductor substrate may include, but are not limited to, semiconductive materials, insulating materials, conductive materials, and the like. The substrate may be a conventional silicon substrate or other bulk substrate comprising a layer of semiconductive material.
As used herein, the term "bulk substrate" means and includes not only silicon wafers, but also silicon-on-insulator ("SOI") substrates (e.g., silicon-on-sapphire ("SOS") substrates and silicon-on-glass ("SOG") substrates), epitaxial layers of silicon on a base semiconductor foundation, and other semiconductor or optoelectronic materials (e.g., silicon-germanium, germanium, gallium arsenide, gallium nitride, and indium phosphide). The substrate may be doped or undoped.

FIGS. 1A through 7B illustrate a method of forming a device (e.g., stages of the method) according to embodiments of the present disclosure, the device including a device structure (e.g., a microelectronic device structure) comprising stacked horizontal capacitor structures (e.g., DRAM capacitor structures). For simplicity, the formation of a single device structure is illustrated, but those of ordinary skill in the art will understand that the method may include simultaneously forming multiple device structures (e.g., more than one device structure, or an array thereof). For convenience in describing FIGS. 1A through 7B, a first direction may be defined as the direction shown in FIGS. 1A through 7B, i.e., the X-direction. A second direction transverse (e.g., perpendicular) to the first direction is shown in FIGS. 1A, 2A, 3A, 4A, 5A, 6A, and 7A, i.e., the Y-direction. A third direction transverse (e.g., perpendicular) to each of the first direction and the second direction may be defined as the direction shown in FIGS. 1B, 2B, 3B, 4B, 5B, 6B, and 7B (e.g., the vertical direction), i.e., the Z-direction. Similar directions are defined in FIGS. 8 through 17, as discussed in greater detail below.

A device structure 100 comprising a stack 103 of alternating conductive and electrically insulating materials is shown in FIGS. 1A and 1B. FIG. 1A is a simplified partial top view of the device structure 100, and FIG.
1B shows a cross-sectional view of the device structure 100 taken through section line B-B of FIG. 1A. Similar views are shown in FIGS. 2A through 7B, respectively, as discussed in greater detail below. The device structure 100 includes a stack 103 of alternating tiers of electrically insulating material 104 and conductive material 106 formed adjacent to (e.g., on or over) a base material 102 (e.g., a substrate). As discussed below, portions of the conductive material 106 may be configured as junctionless nanowire transistors of the device, and other portions of the conductive material 106 may be configured as capacitor structures of the device. Merely by way of example, the base material 102 may be a semiconductor substrate, a base semiconductor layer on a supporting structure, a metal electrode, or a semiconductor substrate having one or more layers, structures, or regions formed thereon.

The electrically insulating material 104 may include and be formed of at least one dielectric material, for example, one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx), at least one dielectric nitride material (e.g., SiNy), at least one dielectric oxynitride material (e.g., SiOxNy), at least one dielectric oxycarbide material (e.g., SiOxCz), at least one dielectric carboxynitride material (e.g., SiOxCzNy), and amorphous carbon.
In some embodiments, the electrically insulating material 104 comprises a silicon dioxide material.

The conductive material 106 may include a metal, such as tungsten, titanium, nickel, platinum, rhodium, ruthenium, iridium, aluminum, copper, molybdenum, silver, gold, a metal alloy, a metal-containing material (e.g., metal nitrides, metal silicides, metal carbides, metal oxides); a material including at least one of titanium nitride (TiN), tantalum nitride (TaN), tungsten nitride (WN), titanium aluminum nitride (TiAlN), iridium oxide (IrOx), ruthenium oxide (RuOx), and alloys thereof; a conductively doped semiconductor material (e.g., conductively doped silicon, conductively doped germanium, conductively doped silicon-germanium, etc.); polycrystalline silicon (polysilicon); other materials exhibiting electrical conductivity; or combinations thereof. The conductive material 106 may be wholly or partially crystalline (e.g., monocrystalline, polycrystalline) or amorphous. The conductive material 106 may be undoped, or may include one or more (e.g., a single) dopants, such as a p-type dopant or an n-type dopant, as discussed in greater detail with reference to FIG. 8. In some embodiments, the dopant may include an n-type dopant, for example including phosphorus (P) or arsenic (As) but not aluminum (Al) or silicon (Si). In other embodiments, the dopant may be a p-type dopant, for example including aluminum (Al) or silicon (Si) but not phosphorus (P) or arsenic (As). In some embodiments, the conductive material 106 is polysilicon. In some embodiments, the conductive material 106 may be a homogeneous material containing a uniform concentration of the dopant.
In other embodiments, the conductive material 106 may be a heterogeneous material containing a gradient of at least one dopant, and/or having regions of relatively higher dopant concentration and relatively lower dopant concentration along a vertical portion thereof (e.g., in the Z-direction) and/or along at least one horizontal portion thereof (e.g., in the X-direction, the Y-direction). An interface between a region of higher dopant concentration and another region of lower dopant concentration may not necessarily follow a straight line.

The alternating electrically insulating material 104 and conductive material 106 may each individually be formed using conventional material processes, which are not described in detail herein. As a non-limiting example, the electrically insulating material 104 and the conductive material 106 may each individually be formed by one or more conventional deposition processes (e.g., a PVD process, a CVD process, an ALD process, a spin-coating process).

Referring to FIGS. 2A and 2B, a central opening 108 may be formed in the tiers of alternating electrically insulating material 104 and conductive material 106. As shown in FIG. 2A, the central opening 108 may extend vertically between fin structures 109 (e.g., opposing side surfaces of remaining portions of the electrically insulating material 104 and the conductive material 106 of the stack 103). In particular, material may be removed from each of the electrically insulating material 104 and the conductive material 106 in a central portion of the stack 103, with sidewalls of the remaining electrically insulating material 104 and conductive material 106 defining the central opening 108. Other portions of the electrically insulating material 104 and the conductive material 106 may also remain at a contact region 112 at a longitudinal end of the stack 103. In some embodiments, the contact region 112 may be at only one (e.g., a single) longitudinal end of the stack 103 and not at the opposing longitudinal end of the stack 103.
In some embodiments, elongated portions of each of the electrically insulating material 104 and the conductive material 106 extend in the first direction (e.g., the X-direction), and the contact region 112 connecting the fin structures 109 extends in the second direction (e.g., the Y-direction). Accordingly, as shown in FIG. 2A, the electrically insulating material 104 and the conductive material 106 of the stack 103 may exhibit a substantially U-shaped configuration. The central opening 108 may be formed using one or more conventional patterning and material removal processes, such as conventional photolithographic exposure processes, conventional development processes, conventional etching processes, and conventional processing equipment, which are not described in detail herein.

An electrically insulating material 110 may be disposed within the central opening 108. The electrically insulating material 110 may be formed by one or more conventional conformal or non-conformal deposition processes (e.g., a PVD process, a CVD process, an ALD process, a spin-coating process). The electrically insulating material 110 may substantially completely fill the central opening 108 extending between the fin structures 109.
In some embodiments, the electrically insulating material 110 is positioned adjacent to (e.g., over) an upper surface of the base material 102 and may extend vertically to an upper surface of the stack 103.

The electrically insulating material 110 may include and be formed of at least one dielectric material, for example, one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx), at least one dielectric nitride material (e.g., SiNy), at least one dielectric oxynitride material (e.g., SiOxNy), at least one dielectric oxycarbide material (e.g., SiOxCz), at least one dielectric carboxynitride material (e.g., SiOxCzNy), and amorphous carbon. In some embodiments, the electrically insulating material 110 comprises a silicon dioxide material. In some embodiments, the electrically insulating material 110 comprises substantially the same material composition as the electrically insulating material 104. Accordingly, the electrically insulating material 110 and the electrically insulating material 104 may comprise a single insulating material, which may correspond to the electrically insulating material 104. Although FIGS.
2A through 7A show the electrically insulating material 104 and the electrically insulating material 110 as separate components, it will be understood that the electrically insulating material 104 and the electrically insulating material 110 may comprise a single structure exhibiting a substantially uniform composition (e.g., silicon dioxide).

Upper surfaces of the electrically insulating material 110 and the uppermost electrically insulating material 104 may be planarized, such as by one or more CMP acts after forming the electrically insulating material 110, to facilitate or enhance the planarity of an upper boundary (e.g., upper surface) of the electrically insulating material 110 and the electrically insulating material 104 for further processing thereon. Accordingly, the upper surfaces of the electrically insulating material 110 and the uppermost electrically insulating material 104 may be substantially coplanar with one another.

Referring to FIGS. 3A and 3B, openings 114 may be formed extending vertically through the electrically insulating material 110 and the electrically insulating material 104 without extending through the conductive material 106. In particular, material may be selectively removed from each of the electrically insulating material 104 and the electrically insulating material 110 without removing material from the conductive material 106. For example, portions of the electrically insulating material 104 and the electrically insulating material 110 between the conductive material 106 may be exposed to a suitable etch chemistry (e.g., a wet or dry etch chemistry formulated and configured to remove portions of the electrically insulating material
104 and portions of the electrically insulating material 110 without substantially removing portions of the conductive material 106) to selectively remove the electrically insulating material 104 and the electrically insulating material 110 between adjacent (e.g., vertically adjacent) portions of the conductive material 106. By way of non-limiting example, an anisotropic etch may be performed using one or more so-called "hole masks" (not shown) to form initial openings extending vertically through the stack 103 to the base material 102, followed by an isotropic etch to remove portions of the electrically insulating material 104 between the vertically adjacent portions of the conductive material 106.

A material of support structures 116 may be disposed within the openings 114. The support structures 116 may be formed using one or more conformal or non-conformal deposition processes (e.g., a PVD process, a CVD process, an ALD process, a spin-coating process). As shown in FIG. 3A, the material of the support structures 116 may substantially completely fill the openings 114, extending linearly in the second direction (e.g., the Y-direction). In some embodiments, the support structures 116 extend (e.g., extend substantially completely) between an upper surface of each portion of the conductive material 106 and a lower surface of a vertically adjacent portion of the conductive material 106, as shown in FIG. 3B. The support structures 116 may define a first region 141 and a second region 142 of the stack 103. In particular, one of the support structures 116 adjacent to (e.g., proximate) the contact region 112 may separate the first region 141 from the second region 142 along a longitudinal length of the device structure 100 in the first direction (e.g., the X-direction), as shown in FIGS. 3A and 3B.
In some embodiments, the first region 141 may be characterized as a so-called "access device region" and the second region 142 may be characterized as a so-called "capacitor region", as will be discussed in greater detail below. For clarity and ease of understanding the drawings and related description, only three support structures 116 are shown within the second region 142 in FIGS. 3A and 3B. However, the disclosure is not so limited, and additional support structures 116 may be included depending on the length and mechanical integrity of the fin structures 109. It will be understood that, in at least some embodiments, the device structure 100 includes a single contact region 112 within the first region 141 and any number of support structures 116 within the second region 142.

A width W1 of the stack 103 (e.g., the combined width of the fin structures 109 and the electrically insulating material 110 in the Y-direction) may be between about 10 nm and about 200 nm, such as between about 10 nm and about 20 nm, between about 20 nm and about 30 nm, between about 30 nm and about 50 nm, or between about 50 nm and about 200 nm. A length L1 of the stack 103 (e.g., the length of the fin structures 109 in the X-direction) may be between about 300 nm and about 3000 nm, such as between about 300 nm and about 1000 nm, between about 1000 nm and about 1500 nm, or between about 1500 nm and about 3000 nm. A height H1 of the stack 103 (e.g., the combined height of the base material 102, the electrically insulating material 104, and the conductive material 106 in the Z-direction) may be between about 100 nm and about 3000 nm, such as between about 100 nm and about 1000 nm, between about 1000 nm and about 1500 nm, or between about 1500 nm and about 3000 nm.
In some embodiments, an aspect ratio (i.e., a ratio of height to width) of the device structure 100 may be between about 10:1 and about 100:1, such as about 33:1.

In addition, a thickness T1 of the conductive material 106 (e.g., in the Z-direction) may be between about 5 nm and about 40 nm, such as between about 5 nm and about 10 nm, between about 10 nm and about 20 nm, between about 20 nm and about 30 nm, or between about 30 nm and about 40 nm. By way of non-limiting example, a spacing between vertically adjacent portions of the conductive material 106 may be between about 20 nm and about 100 nm, such as between about 20 nm and about 60 nm or between about 60 nm and about 100 nm. In some embodiments, a minimum spacing between vertically adjacent portions of the conductive material 106 may be about 10 nm to about 20 nm greater than the thickness T1 thereof. In some embodiments, a thickness T2 of the support structures 116 (e.g., in the X-direction) may be between about 10 nm and about 60 nm, such as between about 10 nm and about 20 nm, between about 20 nm and about 30 nm, between about 30 nm and about 40 nm, between about 40 nm and about 50 nm, or between about 50 nm and about 60 nm.
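The aspect ratio referred to above is simply the height H1 of the stack divided by its width W1. As a minimal arithmetic sketch (the specific input values below are chosen from within the stated ranges for illustration only and are not values given by the disclosure):

```python
def aspect_ratio(height_nm, width_nm):
    """Aspect ratio of the stack: height H1 divided by width W1."""
    return height_nm / width_nm

# A stack about 3000 nm tall (upper end of the H1 range) and about
# 90 nm wide (within the W1 range) yields roughly the ~33:1 ratio
# mentioned above.
print(round(aspect_ratio(3000, 90)))  # 33
```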
In some embodiments, a thickness T3 of the contact region 112 (e.g., in the X-direction) may be relatively greater than the thickness T2 of the support structures 116 (e.g., in the X-direction) and relatively greater than a thickness T4 of the fin structures 109 (e.g., in the Y-direction).

The support structures 116 may include and be formed of at least one dielectric material, for example, one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx), at least one dielectric nitride material (e.g., SiNy), at least one dielectric oxynitride material (e.g., SiOxNy), at least one dielectric oxycarbide material (e.g., SiOxCz), at least one dielectric carboxynitride material (e.g., SiOxCzNy), and amorphous carbon. In some embodiments, the support structures 116 comprise a silicon nitride material. In other embodiments, the support structures 116 comprise an oxynitride material. The dielectric material of the support structures 116 may be selectively etchable relative to the electrically insulating material 104 and the electrically insulating material 110.

Referring to FIGS. 4A and 4B, one or more gate electrodes 118 may be formed adjacent to (e.g., on, over, around) the fin structures 109. For example, a single gate electrode 118 may be formed around each conductive material 106 of each fin structure 109 (e.g., of two opposing fin structures 109). Openings (not shown) may be formed in each of the electrically insulating material 104 and the electrically insulating material 110 within the first region 141 by one or more material removal processes in order to form the gate electrodes 118 adjacent to the conductive material 106 of the fin structures 109.
By way of non-limiting example, an anisotropic etch may be performed using a mask (not shown) to form initial openings extending vertically through the stack 103 to the base material 102, followed by an isotropic etch to remove portions of the electrically insulating material 104 between vertically adjacent portions of the conductive material 106. In some embodiments, each gate electrode 118 is formed around (e.g., substantially surrounding) its corresponding tier of the conductive material 106 proximate a longitudinal end of the fin structure 109. As shown in FIG. 4A, the gate electrodes 118 may be formed within the first region 141 between the contact region 112 and the support structure 116 proximate the contact region 112.

The gate electrodes 118 may be surrounded on at least some sides thereof by a gate dielectric material 120. The gate dielectric material 120 may be formed adjacent to (e.g., over, under) the individual tiers of the conductive material 106, and may be formed prior to formation of the gate electrodes 118. Accordingly, at least a portion of the gate dielectric material 120 may be located between the conductive material 106 and the corresponding gate electrode 118. The gate electrodes 118 may be configured as portions of word lines extending in the third direction (e.g., the Z-direction), as discussed in greater detail below. In some embodiments, each U-shaped structure of the device structure 100 includes one or more (e.g., two) gate electrodes 118. The gate electrodes 118 may include a conductive material such as, for example, tungsten, titanium, nickel, platinum, rhodium, ruthenium, iridium, aluminum, copper, molybdenum, silver, gold, a metal alloy, a metal-containing material (e.g., metal nitrides, metal silicides, metal carbides, metal oxides); a material including at least one of titanium nitride (TiN), tantalum nitride (TaN), tungsten nitride (WN), titanium aluminum nitride (TiAlN), iridium oxide (IrOx), ruthenium oxide (RuOx), and alloys thereof; a conductively doped semiconductor material (e.g., conductively doped silicon, conductively doped germanium, conductively doped silicon-germanium, etc.); polysilicon; other materials exhibiting electrical conductivity; or combinations thereof.

The gate dielectric material 120 may be disposed around at least some sides of each of the gate electrodes 118. In some embodiments, the gate dielectric material 120 substantially surrounds all sides (e.g., top, bottom, left, right, front, rear) of each gate electrode 118. The gate dielectric material 120 may include one or more electrically insulating materials, such as, for example, phosphosilicate glass, borosilicate glass, borophosphosilicate glass (BPSG), fluorosilicate glass, silicon dioxide, magnesium oxide, niobium oxide, molybdenum oxide, strontium oxide, barium oxide, yttrium oxide, a nitride material (e.g., silicon nitride (Si3N4)), an oxynitride (e.g., silicon oxynitride), another gate dielectric material, a dielectric carbonitride material (e.g., silicon carbonitride (SiCN)), a dielectric carboxynitride material (e.g., silicon oxycarbonitride (SiOCN)), a high-k dielectric material (e.g., aluminum oxide (Al2O3), tantalum oxide (Ta2O5), zirconium oxide (ZrO2), hafnium oxide (HfO2), lanthanum oxide (La2O3), titanium oxide (TiO2)), another material, or combinations thereof. In some embodiments, the gate dielectric material 120 comprises silicon dioxide. In some embodiments, the gate dielectric material 120 comprises substantially the same material composition as the electrically insulating material 104. Accordingly, the gate dielectric material 120 and the electrically insulating material 104 may comprise a single insulating material, which may correspond to the electrically insulating material 104.

Access devices 119 (e.g., transistors) may be formed of the portions of the conductive material 106 adjacent to the gate electrodes 118. The access devices 119 may be formed proximate intersections of two conductive materials (e.g., a data line and an access line), as discussed in greater detail with reference to FIG. 8. In embodiments including the gate electrode 118 formed to substantially surround each conductive material 106 within the first region 141, the access devices 119 may be characterized as so-called "gate-all-around transistors". In some embodiments, the gate electrodes 118, and therefore the access devices 119, are isolated (e.g., physically separated) from the support structures 116 and/or the contact region 112 by isolation regions 122 (e.g., gaps between the adjacent structures). In other words, the gate electrodes 118 may not be directly adjacent to at least one of the support structures 116 or the contact region 112 so as to form neighboring structures. In other embodiments, the gate electrodes 118 may be directly adjacent to the support structures 116 and/or the contact region 112.

Referring to FIGS. 5A and 5B, openings 124 may be formed between vertically adjacent portions of each conductive material 106 within the second region 142. In some embodiments, portions of each of the electrically insulating material 104 and the electrically insulating material 110 remain within the first region 141, as shown in FIGS. 5A and 5B. The contact region 112 and the access devices 119 within the first region 141 may be protected (e.g., covered, unexposed) during formation of the openings 124. In some such embodiments, substantially the entire portion of each of the electrically insulating material 104 and the electrically insulating material 110 may be removed within the second region 142.
In particular, material may be selectively removed from each of the electrically insulating material 104 and the electrically insulating material 110 without removing material from the conductive material 106 and the support structures 116. To remove the electrically insulating material 104 and the electrically insulating material 110 within the second region 142, one or more material removal processes may be performed to selectively remove portions of the materials and form the openings 124 (e.g., undercuts) between vertically adjacent portions of the conductive material 106, without substantially removing the conductive material 106 and the support structures 116. By way of non-limiting example, an anisotropic etch may be performed using a mask (not shown) to form initial openings extending vertically through the stack 103 to the base material 102, followed by an isotropic etch to remove portions of the electrically insulating material 104 between vertically adjacent portions of the conductive material 106. In some embodiments, the gate electrodes 118 and the gate dielectric material 120 are formed within the first region 141 prior to forming the openings 124 within the second region 142, as described above. Alternatively, the openings 124 may be formed prior to forming the gate electrodes 118 and the gate dielectric material 120.

A capacitor dielectric material 126 may be disposed within the openings 124 adjacent to (e.g., over, under) the conductive material 106 of the fin structures 109 (shown in dashed lines in FIG. 5A for clarity). A dielectric material may be formed and patterned by conventional techniques to form the capacitor dielectric material 126.
In some embodiments, the capacitor dielectric material 126 is formed adjacent to the conductive material 106 and at least some of the support structures 116 (for example, conformally formed by an ALD process), and does not completely fill the openings 124 between adjacent portions of the conductive material 106. For example, the capacitor dielectric material 126 may be conformally formed on the exposed lower surfaces and the exposed upper surfaces of the conductive material 106, and may at least partially (e.g., substantially) cover the exposed surfaces of the conductive material 106. The capacitor dielectric material 126 may include one or more electrically insulating materials, such as, for example, phosphosilicate glass, borosilicate glass, borophosphosilicate glass (BPSG), fluorosilicate glass, silicon dioxide, magnesium oxide, niobium oxide, molybdenum oxide, strontium oxide, barium oxide, yttrium oxide, a nitride material (for example, silicon nitride (Si3N4)), an oxynitride (for example, silicon oxynitride), another gate dielectric material, a dielectric carbonitride material (for example, silicon carbonitride (SiCN)), a dielectric oxycarbonitride material (for example, silicon oxycarbonitride (SiOCN)), a high-k dielectric material (for example, aluminum oxide (Al2O3), tantalum oxide (Ta2O5), zirconium oxide (ZrO2), hafnium oxide (HfO2), lanthanum oxide (La2O3), titanium oxide (TiO2)), another material, or a combination thereof. In some embodiments, the capacitor dielectric material 126 includes silicon dioxide. In some embodiments, the capacitor dielectric material 126 includes and is formed of the same material as the electrically insulating material 104 and/or the gate dielectric material 120.

The conductive material 128 may be formed adjacent to and in contact with (e.g., in direct physical contact with) the capacitor dielectric material 126 in the openings 124. As shown in FIG. 5B, the conductive material 128 may at least partially (e.g., substantially) cover the upper surface of the capacitor dielectric material 126. In other words, the conductive material 128 may substantially fill the remaining portions of the openings 124. The conductive material 128 may be configured as one electrode (for example, a top electrode) of the stacked horizontal capacitor structures 144, and the conductive material 106 of the fin structures 109 may include another electrode (for example, a bottom electrode). In some embodiments, the conductive material 128 includes a single continuous material that connects at least some adjacent capacitor structures 144. In other words, the conductive material 128 of one horizontal capacitor structure may be coextensive with the conductive material 128 of an adjacent horizontal capacitor structure. In some embodiments, the conductive material 128 includes and is formed of the same material as the conductive material 106 described above with reference to FIGS. 1A and 1B. The conductive material 128 may include a single conductive material or may include more than one conductive material. For example, the conductive material 128 may include a semiconductor material, such as one or more of silicon germanium, germanium, and polysilicon. The semiconductor material may be undoped or may contain one or more dopants, such as p-type dopants or n-type dopants.

As shown in FIGS. 5A and 5B, portions of the capacitor dielectric material 126 and/or the conductive material 128 may be formed in the second region 142 and extend into the end region 146 (e.g., beyond the support structure 116 located furthest from the contact area 112). In other words, at least one of the capacitor dielectric material 126 or the conductive material 128 may extend beyond the support structure 116 located distal to the contact area 112, where the contact area 112 and the end region 146 are located at opposite longitudinal ends of the device structure 100.
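The requirement that the conformal capacitor dielectric material 126 not pinch off the openings 124 before the top-electrode material is deposited can be checked with simple geometry (a hedged sketch; the dimensions below are illustrative assumptions, not values from the disclosure): a conformal film of thickness t deposited on the two facing surfaces of an opening of height h leaves a gap of h − 2t for the conductive fill.

```python
def remaining_gap(opening_height_nm: float, film_thickness_nm: float) -> float:
    """Gap left for the top-electrode fill after a conformal dielectric
    of the given thickness coats both facing surfaces of the opening."""
    return opening_height_nm - 2.0 * film_thickness_nm

# Illustrative: a 30 nm opening coated with 5 nm of conformal
# dielectric leaves 20 nm for the conductive fill material.
gap = remaining_gap(30.0, 5.0)
assert gap > 0, "dielectric would pinch off the opening"
print(gap)  # -> 20.0
```

The same relation sets an upper bound on the dielectric thickness (and thus a lower bound on cell capacitance per unit area) for a given vertical pitch of the stack.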
In some embodiments, at least a portion of the conductive material 128 is formed adjacent to (e.g., above) the base material 102. In other embodiments, the support structure 116 located distal to the contact area 112 may overlap with (e.g., define) the end region 146 of the device structure 100, without any material extending beyond that furthest support structure 116. The formation of the capacitor dielectric material 126 and the conductive material 128 results in the formation of so-called "stacked horizontal capacitor structures" according to embodiments of the present disclosure. For clarity and ease of understanding of the drawings and related description, only three conductive materials 106 and four conductive materials 128 (for example, of the capacitor structures 144) are shown in the second region 142 in FIG. 5B. However, the present disclosure is not so limited, and, by way of non-limiting example, the device structure 100 according to embodiments of the present disclosure may have any number of capacitor structures 144, for example, at least 10, 25, 50, 75, or 100 capacitor structures 144. In some embodiments, the device structure 100 may include from about 10 capacitor structures 144 to about 100 capacitor structures 144 (e.g., about 50 capacitor structures 144).

Referring to FIGS. 6A and 6B, a stepped structure 130 may be formed at one or both longitudinal ends of the device structure 100, for example, in or near the contact area 112 of the first region 141. In some such embodiments, the stepped structure 130 is formed on the side of the gate electrode 118 opposite the second region 142. The stepped structure 130 may be isolated (for example, physically separated) from the gate electrode 118 by the isolation region 122, so that the gate electrode 118 is not directly adjacent to the stepped structure 130.
In some embodiments, as described above, the gate electrode 118 and the gate dielectric material 120 are formed before the stepped structure 130 is formed. Alternatively, the stepped structure 130 may be formed before forming the gate electrode 118 and the gate dielectric material 120 and the materials of the second region 142 (for example, the capacitor dielectric material 126 and the conductive material 128).

The stepped structure 130 may be formed by conventional techniques. The stair-stepped structure used to electrically connect the conductive lines (e.g., conductive material 106) of the capacitor structures 144 may be formed by using a so-called "step mask" and, optionally, one or more so-called "chop masks." A step mask (not shown) may be formed over the contact area 112 of the device structure 100 while leaving a step width (for example, the width of a contact region extending in the longitudinal X direction) exposed. The conductive material (e.g., conductive material 106) of one or more layers exposed through the step mask may be removed, for example, by a first anisotropic material removal (e.g., etch) act. An edge of the step mask may then be trimmed, so that the recessed edge of the step mask exposes another step width in addition to the originally exposed step width. Another material removal act may be performed to remove the additional conductive material or materials exposed through the recessed step mask. The process may be repeated to form a desired number of contact regions (also referred to as "stairs" or a "stepped structure"). Each step (e.g., common contact pad) may include a portion of the conductive material 106 at its upper portion. In other words, the conductive material 106 may be exposed at the upper surface of each step, as shown in FIG. 6B. The material removal (e.g., etch) acts may expose multiple rows of the conductive material 106.
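The repeated trim-and-etch sequence just described can be sketched procedurally (a hedged illustration of the generic stair-step patterning technique, not the specific process of this disclosure; the layer and step counts are arbitrary): each cycle etches one layer everywhere the mask does not cover, then recesses the mask edge by one step width, so after N cycles layer i is exposed at step i.

```python
def staircase(num_layers: int, num_steps: int):
    """Simulate stair-step patterning on a 1-D column model.

    depth[s] counts how many layers have been etched away at step
    position s (step 0 is the position exposed first). Each cycle:
    etch one layer through all exposed steps, then trim the mask
    to expose one more step position.
    """
    depth = [0] * num_steps
    exposed = 1  # number of step positions not covered by the mask
    for _ in range(num_steps):
        for s in range(exposed):
            if depth[s] < num_layers:
                depth[s] += 1
        exposed = min(exposed + 1, num_steps)
    return depth

# Four cycles over a four-layer stack: the first-exposed step is
# etched down to layer 4, the last-exposed step only to layer 1,
# leaving one conductive layer exposed at the top of each step.
print(staircase(4, 4))  # -> [4, 3, 2, 1]
```

The monotonically decreasing depths are what expose a distinct conductive layer at the upper surface of each step, so that one conductive contact per layer can land on its common contact pad.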
During the formation of the stepped structure 130, the portion of the electrically insulating material 110 in the first region 141 may or may not be removed.

Referring to FIGS. 7A and 7B, an electrically insulating material 140 may be disposed over the device structure 100, as shown in FIG. 7B. For clarity and ease of understanding of the drawings and related description, the electrically insulating material 140 is absent from FIG. 7A. The electrically insulating material 140 may be formed by one or more conventional deposition processes (for example, a PVD process, a CVD process, an ALD process, a spin-coating process). As shown in FIG. 7B, the electrically insulating material 140 may be formed over each of the first region 141 and the second region 142. However, the present disclosure is not so limited, and the electrically insulating material 140 may be formed over only one of the first region 141 or the second region 142, or, alternatively, formed over designated locations responsive to the subsequent positioning of other structures. In some embodiments, the electrically insulating material 140 is positioned adjacent to (e.g., above) the exposed upper surfaces of the dielectric materials (e.g., the electrically insulating material 104, the support structure 116) and adjacent to (e.g., above) the exposed upper surfaces of the conductive materials (e.g., the conductive material 106, the gate electrode 118, the conductive material 128).
The upper surface of the electrically insulating material 140 may be planarized, for example, by one or more CMP acts, to facilitate or enhance the planarity of the upper boundary (for example, the upper surface) of the electrically insulating material 140 for further processing thereon.

The electrically insulating material 140 may include and be formed of at least one dielectric material, such as one or more of at least one dielectric oxide material (e.g., one or more of SiOx, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, fluorosilicate glass, AlOx, HfOx, NbOx, and TiOx), at least one dielectric nitride material (for example, SiNy), at least one dielectric oxynitride material (for example, SiOxNy), at least one dielectric oxycarbide material (e.g., SiOxCz), at least one dielectric carboxynitride material (e.g., SiOxCzNy), and amorphous carbon. In some embodiments, the electrically insulating material 140 includes a silicon dioxide material. In some embodiments, the electrically insulating material 140 includes substantially the same material composition as the electrically insulating material 104. Accordingly, the electrically insulating material 140 and the electrically insulating material 104 may together constitute a single insulating material, which may correspond to the electrically insulating material 104.

After forming the electrically insulating material 140, one or more contacts (for example, conductive contacts 132, conductive contacts 134, upper conductive contacts 136) may be formed in openings in the electrically insulating material 140 to physically and electrically contact the corresponding conductive materials. The openings and contacts may be formed by conventional techniques. For example, a conductive contact 132 (for example, a data line contact) may be formed between the conductive material 106 of the stepped structure 130 and other conductive elements (not shown).
In some embodiments, a conductive contact 132 is centrally positioned on each step of the stepped structure 130. However, the present disclosure is not so limited, and the conductive contacts 132 may be arranged in a pattern different from the pattern shown in FIG. 7A. In addition, there may be multiple conductive contacts 132 on each step of the stepped structure 130. A conductive contact 134 (for example, a word line contact) may be formed, for example, between the gate electrode 118 and other conductive elements, and an upper conductive contact 136 may be formed between the uppermost conductive material 128 (for example, the top electrode of the capacitor structures 144) and other conductive elements. In some embodiments, the uppermost conductive material 128 is connected by one or more (e.g., a single) upper conductive contacts 136. In other embodiments, the upper conductive contacts 136 include additional portions thereof and/or may be arranged in a pattern different from that shown in FIGS. 7A and 7B. Although the conductive contacts 132 and the upper conductive contact 136 do not fall along line B-B of FIG. 7A, representative portions of the conductive contacts 132 and the upper conductive contact 136 are shown in FIG. 7B for clarity. A lower conductive contact 138 may optionally be formed within the base material 102 and extend between a lower portion of the gate electrode 118 and other conductive elements (not shown) underlying the base material 102. For example, the device structure 100 may be located above a complementary metal oxide semiconductor (CMOS) region (e.g., a CMOS-under-array (CUA) region), as described in more detail with reference to FIG. 18.

One or more (e.g., each) of the conductive contacts 132, the conductive contacts 134, the upper conductive contacts 136, and the lower conductive contact 138 may be formed of a material exhibiting sufficient conductivity to access, respectively, the conductive material 106, the upper portion of the gate electrode 118, the conductive material 128, and the lower portion of the gate electrode 118, and to provide electrical communication between those conductive materials and other conductive elements. By way of non-limiting example, the contacts include and are formed of aluminum, copper, nickel, chromium, cobalt, ruthenium, rhodium, palladium, silver, platinum, gold, iridium, tantalum, tungsten, conductive metal nitrides (e.g., TiN, TaN, WN, etc.), conductive metal silicides (for example, tantalum silicide, tungsten silicide, nickel silicide, titanium silicide, etc.), polysilicon, and combinations thereof. In some embodiments, the contacts include and are formed of tungsten.

FIG. 8 is a simplified perspective view of an apparatus including the device structure 100 of FIGS. 1A to 7B. For clarity and ease of understanding of the drawings and related description, the surrounding materials (including the electrically insulating material 110, the electrically insulating material 140, the capacitor dielectric material 126, and the conductive material 128) are absent from FIG. 8. The conductive material 106 in the second region 142 forms respective capacitor structures 144 (for example, capacitor containers) that are horizontally aligned and vertically stacked within the device structure 100. Each capacitor structure 144 includes a corresponding access device 119 on the same single layer of the conductive material 106. The capacitor structures 144 may be connected to each other through the contact area 112 to exhibit a U-shaped configuration. In some embodiments, as shown in FIG. 8, each capacitor structure 144 includes two opposing fin structures 109 having elongated portions extending in a first direction (e.g., the X direction), and a single contact area 112 that connects the fin structures 109 and extends therebetween in a second direction (for example, the Y direction). As shown in FIG. 8, the device structure 100 includes three capacitor structures 144. However, additional capacitor structures 144 may be present in the Z direction. Additional laterally adjacent capacitor structures 144 may extend continuously on any (e.g., each) lateral side thereof. As shown in FIG. 8, multiple layers may be stacked in a third direction (e.g., the Z direction) (e.g., three layers are shown for clarity).

The portions of the conductive material 106 of the fin structures 109 may be configured as nanowires having elongated portions extending in the first direction (for example, junctionless nanowires 149). The junctionless nanowires 149 may be characterized as so-called "horizontal nanowires" having at least one dimension less than about 50 nm. In some embodiments, the junctionless nanowire 149 includes one or more conductive materials doped with dopants of a single polarity, without forming, for example, p-n-p junctions or n-p-n junctions. The associated access device 119 may be characterized as a so-called "junctionless nanowire transistor" (e.g., formed without the use of source and drain implant acts). In other words, the junctionless nanowire 149 includes a material (for example, a doped polysilicon material) that includes one of p-type dopants or n-type dopants but does not include the other of p-type dopants or n-type dopants. By way of non-limiting example, the junctionless nanowire 149 may be doped to a p+ level (or p++ level) or, alternatively, to an n+ level (or n++ level), and therefore may be relatively heavily doped relative to a p-level dopant concentration or an n-level dopant concentration, respectively.
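As a rough illustration of why heavy doping matters for a junctionless nanowire that serves both as a transistor channel and as a capacitor electrode, the wire's series resistance can be estimated from the classical relation R = ρL/A (a hedged back-of-the-envelope sketch; the resistivity and dimensions below are illustrative assumptions, not values from the disclosure):

```python
def nanowire_resistance_ohm(resistivity_ohm_cm: float,
                            length_nm: float,
                            width_nm: float,
                            height_nm: float) -> float:
    """Series resistance R = rho * L / A of a rectangular nanowire."""
    length_cm = length_nm * 1e-7          # 1 nm = 1e-7 cm
    area_cm2 = (width_nm * 1e-7) * (height_nm * 1e-7)
    return resistivity_ohm_cm * length_cm / area_cm2

# Illustrative: heavily doped polysilicon (~1e-3 ohm*cm) in a
# 1 um x 20 nm x 20 nm wire gives tens of kilo-ohms of series
# resistance; lightly doped material (~1 ohm*cm) would be a
# thousand times higher.
r_heavy = nanowire_resistance_ohm(1e-3, 1000.0, 20.0, 20.0)
r_light = nanowire_resistance_ohm(1.0, 1000.0, 20.0, 20.0)
print(f"{r_heavy:.0f} ohm vs {r_light:.2e} ohm")
```

Under these assumed numbers the heavily doped wire is the only practical option, which is consistent with the p+/p++ (or n+/n++) doping levels described above.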
In some embodiments, the junctionless nanowire 149 includes at least one of the electrodes (e.g., the bottom electrode) of each capacitor structure 144. The conductive material 106 of the contact area 112 may be configured as a first conductive line 148 (for example, a data line, a bit line) extending in the second direction (for example, the Y direction). The gate electrodes 118 surrounding each conductive material 106 may be connected through a second conductive line 150 (e.g., an access line, a word line). The gate dielectric material 120 may surround the conductive material 106, and the gate electrode 118 surrounds the gate dielectric material 120. As shown in FIG. 8, the second conductive line 150 extends in a direction substantially transverse to the major plane of the base material 102. Since the major plane of the conductive material 128 (FIG. 7B) is oriented parallel to the major plane of the base material 102, the second conductive line 150 also extends in a direction substantially transverse to the major plane of the conductive material 128 of the capacitor structures 144. Because the second conductive line 150 is oriented in a direction transverse to the major plane of the base material 102, the configuration of the device structure 100 differs from that of conventional device structures, which include, for example, data lines extending in a first horizontal direction and access lines extending in a second horizontal direction transverse to the first horizontal direction of the data lines. Accordingly, in embodiments of the present disclosure, the position of each of the first conductive line 148 and the second conductive line 150 relative to existing structures (for example, the base material 102) within the device structure 100 differs from that of conventional device structures.
By providing capacitor structures 144 that are horizontally aligned (e.g., in the X direction) and vertically stacked (e.g., in the Z direction), this configuration can achieve improved density as device (e.g., memory device) dimensions are scaled down to increase memory cell density. This improved density can result in reduced power consumption during use and operation of the device. Compared with a conventional device including vertically aligned capacitor structures, the footprint of the capacitor structures 144 according to embodiments of the present disclosure can be reduced as device dimensions scale down without unduly reducing their capacitance (for example, without reducing the total cross-sectional area). Accordingly, the RC (product of resistance and capacitance) of the capacitor structures 144 can be optimized (for example, by changing the dopant concentration), which can correlate with improved performance. In addition, the configuration of the stacked horizontal capacitor structures 144 may allow improved electrical isolation between adjacent capacitor structures 144, which may reduce the occurrence of bridging (e.g., electrical connection) between two or more adjacent capacitor structures 144 and reduce leakage during use and operation. In some instances, bridging between adjacent structures of conventional devices may result from so-called "underetching" during fabrication. Compared with conventional capacitor structures using only a vertical orientation, the horizontal configuration of the capacitor structures 144 can reduce (e.g., minimize) bridging.

Since the access device 119 may include the material of the conductive material 106 of the fin structures 109, the access device 119 may include the material that forms the capacitor structure 144 of each fin structure 109 (for example, the same junctionless nanowire 149).
In other words, the conductive material 106 of the fin structures 109 of each capacitor structure 144 may also be used as the conductive material of the access device 119 located proximate the intersection of the first conductive line 148 and the second conductive line 150. Since the first conductive line 148 also includes the material of the conductive material 106 (for example, in the contact area 112), the access device 119 may also include the same material as the first conductive line 148. Accordingly, capacitor structures 144 aligned in a single vertical column share a common one of the second conductive lines 150 (e.g., a common access line). Each of the one or more (for example, two) second conductive lines 150 and the corresponding gate electrodes 118 of each conductive material 106 of the fin structures 109 may be connected through one or more (for example, a single) conductive contacts 134. In addition, each capacitor structure 144 of each conductive material 106 and the corresponding access device 119 share a common gate electrode 118.

Many advantages can be achieved by forming the device structure 100 using the process described above. The gate electrode 118 surrounding the conductive material 106 of the fin structures 109 in a gate-all-around configuration can provide improved gate performance. By forming the stacked horizontal capacitor structures 144, the overall density of a device of a given size can be increased compared with conventional devices of comparable size. In addition, efficiency can be improved by providing a simplified process flow. For example, forming the first conductive line 148 including the conductive material 106 in the contact area 112 of the stack 103 may allow a simplified process for forming the stacked horizontal capacitor structures 144 that is not available in conventional devices with vertically oriented capacitor structures.
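The RC figure of merit mentioned above (the product of the electrode series resistance and the cell capacitance) can be illustrated with a parallel-plate estimate, C = k·ε0·A/d, charged through an electrode resistance R (a hedged back-of-the-envelope sketch; the dielectric constant, dimensions, and resistance below are illustrative assumptions, not values from the disclosure):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance_f(k: float, area_m2: float, gap_m: float) -> float:
    """C = k * eps0 * A / d for one plate pair of a horizontal capacitor."""
    return k * EPS0 * area_m2 / gap_m

def rc_seconds(resistance_ohm: float, capacitance_f: float) -> float:
    """RC time constant of the cell: series resistance times capacitance."""
    return resistance_ohm * capacitance_f

# Illustrative: a 1 um x 50 nm plate with a 5 nm high-k (k ~ 20)
# dielectric, charged through ~25 kOhm of doped-semiconductor
# electrode; C lands in the low-femtofarad range and RC in the
# tens of picoseconds.
c = parallel_plate_capacitance_f(20.0, 1e-6 * 50e-9, 5e-9)
tau = rc_seconds(25e3, c)
print(f"C = {c:.3e} F, RC = {tau:.3e} s")
```

Under this simple model, raising the dopant concentration lowers R while leaving C fixed, which is one way the RC product can be tuned as the text suggests.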
Since the capacitor structures 144 are horizontally oriented, less buckling may be observed relative to a conventional device with vertically oriented capacitor structures. By using the conductive material 106 of the fin structures 109 in both the capacitor structures 144 and the access devices 119, fabrication costs can be reduced.

With continued reference to FIG. 8, the support structures 116 may provide structural support for the fin structures 109 in the second region 142 (FIG. 7A) of the device structure 100. Any number of support structures 116 may be present to provide structural stability within the device structure 100 along its longitudinal extent. By way of non-limiting example, the device structure 100 may include one (1) to five (5) support structures 116, such as three (3) support structures 116. In other embodiments, such as structures with fin structures 109 having relatively short lengths, the device structure 100 may lack (e.g., include no) support structures 116. The stepped structure 130 in the contact area 112 may also provide structural stability in the device structure 100.

The various layers of the stepped structure 130 (for example, the stair-stepped structure) provide stepwise electrical access through the conductive contacts 132 corresponding to each capacitor structure 144. In other words, each exposed conductive material 106 of the stepped structure 130 provides access for one or more (e.g., a single) conductive contacts 132, thereby providing electrical connection to the corresponding first conductive line 148 of each capacitor structure 144. The capacitor structures 144 of the conductive material 106 of each layer may share a common contact pad within the stepped structure 130 (for example, the stair-stepped structure). The configuration of the stepped structure 130 allows contact to be made with the conductive material 106 of each layer of the capacitor structures 144 extending in the first horizontal direction.
Accordingly, the capacitor structures 144 on a respective conductive material 106 of the fin structures 109 share a common one of the first conductive lines 148 (for example, a common data line) and one or more common conductive contacts 132. In some embodiments, each first conductive line 148 of one of the capacitor structures 144 is connected by a single (e.g., one) conductive contact 132. Reducing the number of conductive contacts (for example, conductive contacts 132, conductive contacts 134) and reducing the proximity between conductive contacts and other conductive elements can therefore improve reliability and reduce power consumption during use and operation.

Thus, in accordance with embodiments of the present disclosure, a device includes a fin structure including various layers of conductive material, the conductive material having an elongated portion extending in a first horizontal direction; a first conductive line extending in a second horizontal direction transverse to the first horizontal direction; and a second conductive line extending in a vertical direction transverse to each of the first horizontal direction and the second horizontal direction. At least a portion of the first conductive line is vertically aligned. The device also includes a horizontal capacitor structure that includes the conductive material of the fin structure, and an access device proximate the intersection of the first conductive line and the second conductive line. The access device includes the conductive material of the fin structure.

Furthermore, in accordance with embodiments of the present disclosure, a method comprises forming at least one opening extending vertically through a stack of alternating conductive materials and dielectric materials over a base material. Remaining portions of the alternating conductive materials and dielectric materials of the stack define a fin structure extending in a first horizontal direction.
The method includes forming at least one gate structure adjacent to the conductive material of the fin structure, and forming a horizontal capacitor structure adjacent to the conductive material of each layer of the fin structure. The method further includes forming at least one stepped structure including the alternating conductive materials and dielectric materials of the stack, forming an electrically insulating material over at least a portion of the stack, and forming conductive contacts through openings in the electrically insulating material.

In other embodiments of the present disclosure, the features and feature configurations described above with respect to FIGS. 1A to 7B may be adapted to the design requirements of different microelectronic devices (for example, different memory devices). By way of non-limiting example, and in accordance with another embodiment of the present disclosure, FIGS. 9A to 15C show simplified partial top-down and cross-sectional views of a method of forming a device including a device structure (e.g., a microelectronic device structure) having a configuration different from that of the device structure 100. Throughout the remainder of the description and the drawings, functionally similar features (e.g., structures, devices) are referred to with similar reference numerals. To avoid repetition, not all features shown in the remaining drawings (including FIGS. 9A to 15C) are described in detail herein. Rather, unless described otherwise below, a feature designated by a reference numeral of a previously described feature (whether the previously described feature is first described before or after the present paragraph) will be understood to be substantially similar to the previously described feature.

FIG. 9A is a simplified partial top-down view of a device structure 100'. At the processing stage depicted in FIG. 9A, the device structure 100' may be substantially similar to the device structure 100 at the processing stage depicted in FIG. 1A. FIG. 9B shows a cross-sectional view of the device structure 100' taken along section line B-B of FIG. 9A, and FIG. 9C shows another cross-sectional view of the device structure 100' taken along section line C-C of FIG. 9A. Similar views are shown in FIGS. 10A to 15C, respectively, as discussed in more detail below. The device structure 100' includes a stack 103 of alternating layers of electrically insulating material 104 and conductive material 106 formed adjacent to (e.g., on or over) a base material 102. Each of the base material 102, the electrically insulating material 104, and the conductive material 106 may include substantially the same materials as those described above with reference to FIGS. 1A and 1B.

Referring to FIGS. 10A, 10B, and 10C, the electrically insulating material 110 may be disposed in central openings 108 formed in the alternating layers of electrically insulating material 104 and conductive material 106, with vertically extending fin structures 109 (for example, the remaining portions of the electrically insulating material 104 and the conductive material 106 of the stack 103), substantially similar to the device structure 100 at the processing stage depicted in FIGS. 2A and 2B. The central openings 108 may be formed by conventional techniques. However, the device structure 100' of FIGS. 10A, 10B, and 10C may include multiple (for example, three or more) fin structures 109, which are separated by multiple (e.g., two or more) portions of the electrically insulating material 110 disposed in the respective central openings 108, as shown in FIG. 10A. The electrically insulating material 110 may include substantially the same material as the electrically insulating material 110 described above with reference to FIGS. 2A and 2B.

The device structure 100' includes a contact area 112 at its longitudinal end. The contact area 112 may have a thickness (e.g., in the X direction) substantially similar to the thickness T3 of the contact area 112 of the device structure 100 (FIG. 3A); alternatively, the contact area 112 of the device structure 100' may have a relatively smaller thickness than the thickness T3 of the contact area 112 of the previous embodiment. The thickness of the contact area 112 may be substantially the same as, or different from, the thickness of the support structures 116 and/or the fin structures 109. In some embodiments, one of the fin structures 109 may have a relatively larger thickness than the other fin structures 109. For example, as shown in FIG. 10A, the fin structure 109 along section line B-B may exhibit a thickness relatively larger than that of each other fin structure 109. However, the present disclosure is not so limited, and another one (or more) of the fin structures 109 may exhibit an increased thickness relative to the other fin structures 109. In some embodiments, the device structure 100' exhibits an asymmetric configuration along its lateral dimension (e.g., extending in the Y direction).

Referring to FIGS. 11A, 11B, and 11C, a stepped structure 130 may be formed at one or both (e.g., a single one) of the lateral sides of the device structure 100'. The stepped structure 130 may be formed by conventional techniques, as described above with reference to FIGS. 6A and 6B. However, the stepped structure 130 of the device structure 100' may be formed along its longitudinal extent rather than in or near the contact area 112. Accordingly, one or more individual fin structures 109 having a relatively larger thickness may be configured to form the stepped structure 130.
As will be described herein, such a designated fin structure 109 is not configured to be used as one of the fin structures 109 of the subsequently formed device (e.g., capacitor structure). In some embodiments, as shown in FIG. 11B, each step of the stepped structure 130 extends along the longitudinal length of the device structure 100' and is substantially parallel to the elongated portions of the fin structures 109, while, as shown in FIG. 11C, each fin structure 109 along the section line C-C does not include the stepped structure 130. The uppermost step of the stepped structure 130 may or may not include the uppermost portion of the electrically insulating material 104. In some embodiments, the uppermost step of the stepped structure 130 is proximate the contact area 112, and subsequent steps descend as the distance from the contact area 112 increases. However, the disclosure is not so limited, and other configurations of the stepped structure 130 may be used. Accordingly, by forming the stepped structure 130 laterally adjacent (e.g., substantially parallel to) the elongated portions of the fin structures 109, this configuration may allow for an improvement in density within the device structure 100' (e.g., an improvement in packaging efficiency).

After the stepped structure 130 is formed, an insulating material (e.g., a sacrificial material, the electrically insulating material 140) (not shown) may be formed over at least some portions of the steps of the stepped structure 130. For example, the insulating material may be positioned adjacent to (e.g., above) the exposed upper surfaces of at least some of the steps of the stepped structure 130 to protect the upper surfaces thereof and to provide a substantially uniform upper boundary (e.g., upper surface) of the device structure 100'.
In some embodiments, the upper surface of the insulating material may be planarized, for example, by one or more CMP acts to facilitate or enhance the planarity of the upper surface for further processing thereon. For clarity and ease of understanding of the drawings and related descriptions, the insulating material (e.g., the electrically insulating material 140) is omitted from FIGS. 11A through 14B.

Referring to FIGS. 12A, 12B, and 12C, the support structures 116 of the device structure 100' may be disposed in openings 114 extending linearly in the second direction (e.g., the Y direction), as shown in FIG. 12A. In some embodiments, the support structures 116 extend (e.g., extend substantially completely) between the upper surface of each portion of the conductive material 106 and the lower surface of the vertically adjacent portion of the conductive material 106, as shown in FIG. 12C. In some embodiments, the support structures 116 extend proximate to (e.g., in direct physical contact with) the stepped structure 130. In other embodiments, the support structures 116 do not extend completely to the stepped structure 130, such that end surfaces of the support structures 116 are embedded in the electrically insulating material 110, with a space (e.g., gap) provided between the end surfaces of the support structures 116 and side surfaces of the stepped structure 130, as shown in FIG. 12A. The support structures 116 may include substantially the same materials as those described above, and may be formed as described above. The support structures 116 may define the first area 141 and the second area 142 of the stack 103, as discussed in more detail with reference to FIGS. 3A and 3B.

Referring to FIGS. 13A, 13B, and 13C, a gate electrode 118 may be formed on the fin structures 109. In some embodiments, a single gate electrode 118 is formed on each fin structure 109 lacking the stepped structure 130.
The gate electrode 118 may be surrounded by the gate dielectric material 120 on at least some sides thereof. The gate dielectric material 120 may be formed adjacent to (e.g., above and below) various portions of the conductive material 106, and may be formed before the gate electrode 118 is formed. Each of the gate electrode 118 and the gate dielectric material 120 may include substantially the same materials as those described above with reference to FIGS. 4A and 4B, and may be formed as described above. The gate electrode 118 may be configured as a part of a word line extending in the third direction (e.g., the Z direction), and the access device 119 (FIG. 16) may include the conductive material 106 adjacent to the gate electrode 118. The gate electrode 118, and therefore the access device 119, may or may not be isolated (e.g., physically isolated) from the support structures 116 and/or the contact area 112 by the isolation region 122.

Referring to FIGS. 14A, 14B, and 14C, the capacitor dielectric material 126 of the device structure 100' may be disposed in the openings 124 adjacent to (e.g., above, below) the conductive material 106 of the fin structures 109. The conductive material 128 may be formed adjacent to and in contact with (e.g., in direct physical contact with) the capacitor dielectric material 126 in the openings 124 in the second region 142. In some embodiments, a portion of the capacitor dielectric material 126 and/or the conductive material 128 is formed in the end region 146. Each of the capacitor dielectric material 126 and the conductive material 128 may be formed using a substantially similar process, and may include substantially the same materials as those described above with reference to FIGS. 5A and 5B. The formation of the capacitor dielectric material 126 and the conductive material 128 results in the formation of stacked horizontal capacitor structures 144.

Referring to FIGS.
15A, 15B, and 15C, an electrically insulating material 140 may be disposed above the device structure 100', as shown in FIGS. 15B and 15C. For clarity and ease of understanding of the drawings and related descriptions, the electrically insulating material 140 is omitted from FIG. 15A. The conductive contacts 132, the conductive contacts 134, and the upper conductive contacts 136 may be formed in openings of the electrically insulating material 140 to physically and electrically contact the corresponding conductive materials, as discussed in more detail with reference to FIGS. 7A and 7B. The lower conductive contacts 138 may optionally be formed within the base material 102 and extend between lower portions of the gate electrodes 118 and other conductive elements (not shown) under the base material 102. For example, the device structure 100' may be located above a complementary metal oxide semiconductor (CMOS) region (e.g., a CMOS under array (CUA) region), as described in more detail with reference to FIG. 18. Each capacitor structure 144 includes the respective conductive material 106 of a plurality of (e.g., three or more) fin structures 109. In some embodiments, the conductive contacts 132 are centrally positioned on the respective steps of the stepped structure 130. However, the disclosure is not so limited, and the conductive contacts 132 may be arranged in a configuration different from that shown in FIG. 15A. Each of the electrically insulating material 140 and the contacts (e.g., the conductive contacts 132, the conductive contacts 134, the upper conductive contacts 136, and the lower conductive contacts 138) may include substantially the same materials as those described above with reference to FIGS. 7A and 7B.

FIG. 16 is a simplified perspective view of an apparatus including the device structure 100' of FIGS. 9A through 15C.
For clarity and ease of understanding of the drawings and related descriptions, some materials (including the base material 102, the electrically insulating material 104, the electrically insulating material 110, the electrically insulating material 140, the support structures 116, the capacitor dielectric material 126, and the conductive material 128) are omitted from FIG. 16. The conductive material 106 in the second region 142 forms the respective capacitor structures 144 that are horizontally aligned and vertically stacked within the device structure 100', as discussed in more detail above with reference to FIG. 8. However, in the embodiment of FIG. 16, the capacitor structures 144 include a plurality of (e.g., three or more) fin structures 109 having elongated portions extending in a first direction (e.g., the X direction), and the contact area 112, which connects the fin structures 109 and extends therebetween in a second direction (e.g., the Y direction). The capacitor structures 144 may be connected to each other through the contact area 112 so as to exhibit multiple U-shaped configurations thereof. In some embodiments, the contact area 112 may be configured as a first conductive line 148 extending in the second direction. Additional laterally adjacent capacitor structures 144 may extend continuously on any (e.g., each) lateral side thereof. As shown in FIG. 16, multiple layers may be stacked in a third direction (e.g., the Z direction) (e.g., three layers are shown for clarity).

The conductive material 106 of the fin structures 109 may be configured as junctionless nanowires 149 extending in the first direction, and the access devices 119 may further include the conductive material 106 of the fin structures 109. Therefore, the access devices 119 may be formed of the same material (e.g., the same junctionless nanowires 149) as the capacitor structures 144 of the fin structures 109 of each layer, as discussed in more detail above with reference to FIG. 8.
The access devices 119 may be formed proximate the intersections of the first conductive lines 148 and the second conductive lines 150. The gate electrodes 118 surrounding each conductive material 106 may be connected by second conductive lines 150 (e.g., access lines, word lines) extending in the third direction (e.g., the Z direction) substantially transverse to (e.g., substantially perpendicular to) the second direction (e.g., the Y direction) of the first conductive lines 148. As described in more detail with reference to FIG. 8, the second conductive lines 150 extend in a direction substantially transverse to the major plane of the base material 102 (FIG. 15C). Therefore, the capacitor structures 144 aligned in a single vertical column share a common line of the second conductive lines 150 (e.g., a common access line). Each of the one or more (e.g., three) second conductive lines 150 and the corresponding gate electrodes 118 of each conductive material 106 of the fin structures 109 may be connected through one or more (e.g., a single) conductive contacts 134. Each capacitor structure 144 of each conductive material 106 and the corresponding access device 119 share a common gate electrode 118.

With continued reference to FIG. 16, the support structures 116 (see FIG. 15C) can provide structural stability along the longitudinal extent of the device structure 100'. Depending on the length of the fin structures 109, the support structures 116 may optionally be present in the device structure 100'. The stepped structure 130 formed at one or both (e.g., a single one) of the lateral sides of the device structure 100' may also provide structural stability while promoting a smaller footprint therein. In addition, the lateral configuration of the stepped structure 130 may allow an improvement (e.g., a reduction in distance) in the configuration of the connections between conductive materials inside and outside the device structure 100'.
For example, each step of the stepped structure 130 provides stepwise electrical access to the conductive contact 132 corresponding to each capacitor structure 144. The configuration of the stepped structure 130 of the device structure 100' allows contacts to be formed with each layer of the conductive material 106 forming the respective first conductive lines 148 extending in the second direction. Therefore, the capacitor structures 144 on the respective conductive materials 106 of the fin structures 109 share a common line of the first conductive lines 148 (e.g., a common data line). Each of the first conductive lines 148 may be connected by one or more (e.g., a single) common conductive contacts 132. The advantages of the configuration of the device structure 100' of FIG. 16 are similar to those of the device structure 100 of FIG. 8, as discussed in more detail above. Such advantages may include, for example, improved density due to scaling down of device dimensions, reduced power consumption during use and operation, and improved electrical isolation between adjacent capacitor structures 144 (e.g., a reduced occurrence of bridging between adjacent capacitor structures 144 during manufacturing and reduced leakage during use and operation). Additional advantages may include improved efficiency by providing a simplified process flow, reduced costs during manufacturing, and improved device usage and reliability during operation.

FIG. 17 is a simplified partial top view of the apparatus of FIGS. 9A through 15C. A method of forming an apparatus including the device structure 100' may include simultaneously forming a plurality of device structures 100' (e.g., more than one device structure or an array thereof).
For example, multiple (e.g., two) device structures 100' may be formed adjacent to (e.g., next to) each other to form a first array 152, and multiple (e.g., two) additional device structures 100' may be formed adjacent to each other to form a second array 154. Additional laterally adjacent arrays may be formed in each of the first direction (e.g., the X direction) and the second direction (e.g., the Y direction), and additional vertically adjacent arrays may be formed in the third direction (e.g., the Z direction).

The two device structures 100' of a single array 152, 154 may be oriented such that the contact area 112 of the first device structure 100' is distal to the contact area 112 of the second device structure 100'. Therefore, as shown in FIG. 17, the end regions 146 of each of the two device structures 100' may be near one another (e.g., in close proximity) in a so-called "tip-to-tip" configuration. In some embodiments, the support structures 116 of the adjacent end regions 146 of the two device structures 100' are adjacent to (e.g., in direct physical contact with) each other without intervening materials. In some such embodiments, the two most central support structures 116 are substantially continuous with each other. In other embodiments, the support structures 116 of the adjacent end regions 146 of the two device structures 100' are close to each other without direct physical contact with each other (e.g., with one or more materials interposed therebetween). The proximal end portions of the contact areas 112 of the first array 152 and the second array 154 may or may not be immediately adjacent to each other. As shown in FIG.
17, forming the stepped structures 130 of the device structures 100' of the first array 152 and the second array 154 at one or both (e.g., a single one) of their lateral sides may allow an improvement in configuration (e.g., a reduction in distance) between the conductive contacts 132 and additional connections (not shown) outside of the device structures 100'.

Apparatuses including one or more of the device structures 100, 100' (e.g., those shown in FIGS. 1A through 8 and 9A through 17) may be used in embodiments of microelectronic devices of the disclosure. FIG. 18 is a block diagram of an illustrative microelectronic device 300 (e.g., a 3D DRAM device) according to an embodiment of the disclosure. The microelectronic device 300 may include at least one memory cell array 302, such as, for example, multiple memory arrays. The microelectronic device 300 may further include at least one peripheral circuit 304, through which data may be input from outside the microelectronic device 300, providing access to the at least one memory cell array 302. The microelectronic device 300 may further include a charge pump circuit 306 for generating an input voltage. The peripheral circuit 304 and the charge pump circuit 306 may include one or more capacitors, such as embodiments of the capacitor structures 144 of the device structures 100, 100' shown in FIGS. 1A through 8 and 9A through 17. The peripheral circuit 304 and the charge pump circuit 306 may be in electrical communication with the at least one memory cell array 302 through the capacitor structures 144. For example, the microelectronic device 300 may include a memory cell array 302 that includes a complementary metal oxide semiconductor (CMOS) region (e.g., a CMOS under array (CUA) region 308 located below the memory cell array 302). The memory cell array 302 may include memory cells connected to access lines (e.g., word lines) and data lines (e.g., bit lines).
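The shared-line organization described above (a data line common to the capacitor structures of one layer, and an access line common to the capacitor structures of one vertical column) can be sketched as a toy address model. The decode scheme below is an illustrative assumption for exposition only; it is not circuitry disclosed by this specification.

```python
def decode_cell(layer: int, column: int, num_layers: int, num_columns: int):
    """Map a (layer, column) cell position to the pair of shared lines
    that selects it: a data line (e.g., a first conductive line 148)
    common to all cells of that layer, and an access line (e.g., a
    second conductive line 150) common to all cells of that vertical
    column. Illustrative only."""
    if not (0 <= layer < num_layers):
        raise ValueError("layer out of range")
    if not (0 <= column < num_columns):
        raise ValueError("column out of range")
    data_line = layer     # shared per horizontal layer
    access_line = column  # shared per vertical column
    return data_line, access_line
```

Under this toy model, all cells of one layer return the same data line, and all cells of one column return the same access line, mirroring the common-line sharing described for the capacitor structures 144.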
In addition, the CUA region 308 may be located under the memory cell array 302 and contain its supporting circuitry. The supporting circuitry may support one or more additional arrays of memory cells present in a stacked configuration. For example, the microelectronic device 300 including the memory cell array 302 having memory cells may be two-dimensional (2D) so as to exhibit a single deck (e.g., tier, level) of memory cells, or may be three-dimensional (3D) so as to exhibit multiple decks of memory cells. In a stacked configuration, the CUA region 308 may facilitate access to one or more memory cells of each array. For example, the CUA region 308 may facilitate data transfer between memory cells coupled to a channel of the memory cell array 302, memory cells coupled to a channel of another memory cell array 302 coupled to the memory cell array 302, and a controller.

Therefore, a memory device including at least one memory cell array is disclosed. The at least one memory cell array includes data lines extending in a horizontal direction and access lines extending in a vertical direction substantially transverse to the horizontal direction. The at least one memory cell array includes capacitor structures aligned horizontally in a first horizontal direction and stacked vertically in the vertical direction, and access devices electrically coupled to the access lines. The access devices include conductive materials common to the capacitor structures.

Device structures (e.g., the device structures 100, 100') according to embodiments of the disclosure may be used in embodiments of electronic systems of the disclosure. For example, FIG. 19 is a block diagram of an illustrative electronic system 400 according to an embodiment of the disclosure.
The electronic system 400 may include, for example, a computer or computer hardware component, a server or other networked hardware component, a cellular telephone, a digital camera, a personal digital assistant (PDA), a portable media (e.g., music) player, a tablet computer with Wi-Fi or cellular capability, an e-book reader, a navigation device, etc. The electronic system 400 includes at least one memory device 420. The memory device 420 may include, for example, an embodiment of a microelectronic device (e.g., the device structure 100, 100') previously described herein. The electronic system 400 may further include at least one electronic signal processor device 410 (often referred to as a "microprocessor"). The electronic signal processor device 410 may optionally include an embodiment of a microelectronic device (e.g., the device structure 100, 100') previously described herein. The electronic system 400 may further include one or more input devices 430 for inputting information into the electronic system 400 by a user, such as, for example, a mouse or other pointing device, a keyboard, a touchpad, a button, or a control panel. The electronic system 400 may further include one or more output devices 440 for outputting information (e.g., visual or audio output) to a user, such as, for example, a monitor, a display, a printer, an audio output jack, a speaker, etc. In some embodiments, the input device 430 and the output device 440 may comprise a single touchscreen device that can be used both to input information to the electronic system 400 and to output visual information to a user.
The input device 430 and the output device 440 may be in electrical communication with one or more of the memory device 420 and the electronic signal processor device 410.

Therefore, according to embodiments of the disclosure, an electronic system comprises at least one input device; at least one output device; at least one processor device operably coupled to the at least one input device and the at least one output device; and a memory device operably coupled to the at least one processor device. The memory device comprises capacitor structures, each capacitor structure comprising a first electrode and a second electrode separated from each other by a dielectric material. The first electrode comprises elongated portions of conductive material extending in a horizontal direction. Opposing portions of the first electrode are connected to each other by a contact portion extending therebetween. The memory device further comprises gate structures positioned proximate the contact portion, wherein a single gate structure is coupled to each of the opposing portions of the first electrode; and a conductive line extending in a vertical direction transverse to the horizontal direction.
The conductive line connects the respective gate structures of corresponding capacitor structures stacked in the vertical direction.

Embodiments of the disclosure may be further characterized, without limitation, as set forth below.

Embodiment 1: An apparatus, comprising: fin structures comprising conductive material of respective layers, the conductive material comprising elongated portions extending in a first horizontal direction; first conductive lines extending in a second horizontal direction transverse to the first horizontal direction, at least some of the first conductive lines being vertically aligned; second conductive lines extending in a vertical direction transverse to each of the first horizontal direction and the second horizontal direction; horizontal capacitor structures comprising the conductive material of the fin structures; and access devices proximate intersections of the first conductive lines and the second conductive lines, the access devices comprising the conductive material of the fin structures.

Embodiment 2: The apparatus of Embodiment 1, further comprising support structures extending in the second horizontal direction, the support structures comprising electrically insulating material located between vertically adjacent portions of the conductive material of the respective layers of the fin structures.

Embodiment 3: The apparatus of Embodiment 1 or Embodiment 2, wherein each access device comprises a gate structure at least partially surrounding a gate dielectric material, and at least some of the gate structures substantially surround all of the
conductive material of the fin structures.

Embodiment 4: The apparatus of Embodiment 3, wherein each horizontal capacitor structure and the corresponding access device share a common gate structure.

Embodiment 5: The apparatus of any one of Embodiments 1 through 4, wherein the first conductive lines are configured as data lines, and the horizontal capacitor structures on the conductive material of each layer of the fin structures share a common data line.

Embodiment 6: The apparatus of any one of Embodiments 1 through 5, wherein the second conductive lines comprise access lines, and the horizontal capacitor structures aligned in a single vertical column share a common access line.

Embodiment 7: The apparatus of any one of Embodiments 1 through 6, further comprising: a stepped structure adjacent to at least one of a longitudinal end or a lateral side of the elongated portions of the conductive material of the fin structures; and conductive contacts on corresponding steps of the stepped structure, each of the first conductive lines of a corresponding layer sharing a common conductive contact.

Embodiment 8: The apparatus of any one of Embodiments 1 through 7, wherein adjacent portions of the conductive material of adjacent fin structures are electrically connected to each other in a contact area proximate longitudinal ends of the fin structures.

Embodiment 9: The apparatus of any one of Embodiments 1 through 8, further comprising a base material underlying the horizontal capacitor structures, wherein the elongated portions of the conductive material of the fin structures extend substantially parallel to a major plane of the base material, and elongated portions of the second conductive lines extend substantially transverse to the major plane of the base material.

Embodiment 10: A method of forming an apparatus, comprising: forming at least one opening extending vertically through a stack of alternating conductive materials
and dielectric materials above a base material, remaining portions of the alternating conductive materials and dielectric materials of the stack defining fin structures extending in a first horizontal direction; forming at least one gate structure adjacent to the conductive materials of the fin structures; forming horizontal capacitor structures adjacent to the fin structures; forming at least one stepped structure comprising the alternating conductive materials and dielectric materials of the stack; forming an electrically insulating material over at least a portion of the stack; and forming conductive contacts through openings in the electrically insulating material.

Embodiment 11: The method of Embodiment 10, further comprising: forming first conductive lines comprising the conductive materials of the stack; and forming second conductive lines comprising the at least one gate structure, the second conductive lines extending in a vertical direction transverse to each of the first horizontal direction of the fin structures and a major plane of the base material.

Embodiment 12: The method of Embodiment 10 or Embodiment 11, further comprising: forming the conductive materials of the fin structures as junctionless nanowires comprising a conductively doped semiconductor material, the conductively doped semiconductor material comprising one of a p-type dopant or an n-type dopant and not comprising the other of the p-type dopant or the n-type dopant; and forming access devices comprising a portion of the conductively doped semiconductor material of the junctionless nanowires.

Embodiment 13: The method of any one of Embodiments 10 through 12, wherein forming the at least one stepped structure comprises forming a single stepped structure proximate lateral side surfaces of the horizontal capacitor structures and extending substantially parallel to elongated portions of the fin structures.

Embodiment 14: The method
of any one of Embodiments 10 through 12, wherein: forming the at least one opening comprises forming a single opening in a central portion of the stack of alternating conductive and dielectric materials to form two opposing fin structures and a contact area at a longitudinal end of the stack; and forming the at least one stepped structure comprises forming a single stepped structure proximate the contact area on a side of the at least one gate structure opposite the horizontal capacitor structures.

Embodiment 15: The method of any one of Embodiments 10 through 14, further comprising forming support structures extending in a second horizontal direction transverse to the first horizontal direction, wherein forming the support structures comprises: forming openings extending vertically through the stack to the base material using an anisotropic material removal process; removing a portion of the dielectric materials between vertically adjacent portions of the conductive materials using an isotropic material removal process; and forming another dielectric material between the vertically adjacent portions of the conductive materials.

Embodiment 16: A memory device, comprising at least one memory array of memory cells, the at least one memory array comprising: data lines extending in a horizontal direction; access lines extending in a vertical direction substantially transverse to the horizontal direction; capacitor structures aligned horizontally in the horizontal direction and stacked vertically in the vertical direction; and access devices electrically coupled to the access lines, the access devices comprising conductive materials common to the capacitor structures.

Embodiment 17: The memory device of Embodiment 16, wherein the capacitor structures comprise between 10 and 100 respective capacitor containers directly vertically aligned with one another, the respective capacitor containers of a single vertical column sharing a common access
line.

Embodiment 18: The memory device of Embodiment 16 or Embodiment 17, further comprising junctionless nanowires comprising elongated portions of the conductive materials extending in the horizontal direction, the junctionless nanowires configured as electrodes of the respective capacitor structures.

Embodiment 19: The memory device of any one of Embodiments 16 through 18, further comprising: fin structures comprising the conductive materials of each layer of the capacitor structures; and gate structures vertically aligned with one another, a single gate structure being located on each layer of a corresponding fin structure, wherein the single gate structure is connected to the capacitor structures of the corresponding fin structure.

Embodiment 20: The memory device of Embodiment 19, further comprising conductive contacts and a CMOS under array (CUA) region underlying the at least one memory array, wherein the conductive contacts connect the gate structures to circuitry of the CUA region.

Embodiment 21: An electronic system, comprising: at least one input device; at least one output device; at least one processor device operably coupled to the at least one input device and the at least one output device; and a memory device operably coupled to the at least one processor device, the memory device comprising: capacitor structures, each capacitor structure comprising a first electrode and a second electrode separated from each other by a dielectric material, wherein the first electrode comprises elongated portions of conductive material extending in a horizontal direction, and opposing portions of the first electrode are connected to each other by a contact portion extending therebetween; gate structures positioned proximate the contact portion, wherein a single gate structure is coupled to each of the opposing portions of the first electrode; and a conductive line extending in a vertical
direction transverse to the horizontal direction, the conductive line connecting respective gate structures of corresponding capacitor structures stacked in the vertical direction.

Although certain illustrative embodiments have been described with reference to the accompanying drawings, those of ordinary skill in the art will recognize and appreciate that the embodiments encompassed by the disclosure are not limited to those explicitly shown and described herein. Rather, many additions, deletions, and modifications may be made to the embodiments described herein without departing from the scope of the embodiments encompassed by the disclosure (e.g., those claimed below, including their legal equivalents). In addition, features from one disclosed embodiment may be combined with features of another disclosed embodiment while still being encompassed within the scope of the disclosure.
The invention relates to a technology to ensure sufficient registers to fully cache complex memory configurations. Systems, apparatuses and methods may provide for technology that detects a misalignment condition, wherein the misalignment condition includes a memory map being misaligned with a granularity of a register, automatically appends a protected range to the memory map, wherein the protected range eliminates the misalignment condition, and defines an operational characteristic of the memory map via the register. In one example, the protected range is a non-existent memory (NXM) range appended via a source address decoder (SAD) rule, the register is a memory type range register (MTRR), and the operational characteristic is a cache characteristic.
1. A performance-enhanced computing system, the computing system comprising: a network controller; a processor coupled to the network controller, wherein the processor includes a register; and a memory coupled to the processor, the memory including a set of executable program instructions which, when executed by the processor, cause the computing system to: detect a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of the register; append a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and define an operating characteristic of the memory map via the register.
2. The computing system according to claim 1, wherein the granularity of the register is a power of two, and the protection range is to eliminate the misalignment condition by moving an upper limit of an address range in the memory map to a power-of-two address.
3. The computing system according to claim 1, wherein the instructions, when executed, cause the computing system to confirm that sufficient resources are available to append the protection range to the memory map.
4. The computing system according to claim 1, wherein the protection range is appended to the memory map if sufficient resources are available, and wherein the instructions, when executed, cause the computing system to: determine that insufficient resources are available to append the protection range to the memory map; and iteratively reduce the upper limit of the address range in the memory map until the misalignment condition is eliminated.
5. The computing system according to claim 1, wherein the protection range is appended via a source address decoder rule to protect against pseudo direct memory access and malware drivers.
6. The computing system according to any one of claims 1 to 5, wherein the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.
7. A semiconductor apparatus comprising: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to: detect a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of a register; append a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and define an operating characteristic of the memory map via the register.
8. The semiconductor apparatus according to claim 7, wherein the granularity of the register is a power of two, and the protection range is to eliminate the misalignment condition by moving an upper limit of an address range in the memory map to a power-of-two address.
9. The semiconductor apparatus according to claim 7, wherein the logic is to confirm that sufficient resources are available to append the protection range to the memory map.
10. The semiconductor apparatus according to claim 7, wherein the protection range is appended to the memory map if sufficient resources are available, and wherein the logic is to: determine that insufficient resources are available to append the protection range to the memory map; and iteratively reduce the upper limit of the address range in the memory map until the misalignment condition is eliminated.
11. The semiconductor apparatus according to any one of claims 7 to 10, wherein the protection range is appended via a source address decoder rule, the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.
12. The semiconductor apparatus according to any one of claims 7 to 10, wherein the logic coupled to the one or more substrates includes a transistor channel region located within the one or more substrates.
13. A method of operating a performance-enhanced computing system, the method comprising: detecting a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of a register; automatically appending a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and defining an operating characteristic of the memory map via the register.
14. The method according to claim 13, wherein the granularity of the register is a power of two, and the protection range eliminates the misalignment condition by moving an upper limit of an address range in the memory map to a power-of-two address.
15. The method according to claim 13, further comprising confirming that sufficient resources are available to append the protection range to the memory map.
16. The method according to claim 13, wherein the protection range is appended to the memory map if sufficient resources are available, the method further comprising: determining that insufficient resources are available to append the protection range to the memory map; and iteratively reducing the upper limit of the address range in the memory map until the misalignment condition is eliminated.
17. The method according to claim 13, wherein the protection range is appended via a source address decoder rule to protect against pseudo direct memory access and malware drivers.
18. The method according to any one of claims 13 to 17, wherein the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.
19. A semiconductor apparatus comprising: means for detecting a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of a register; means for automatically appending a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and means for defining an operating characteristic of the memory map via the register.
20. The apparatus according to claim 19, wherein the granularity of the register is a power of two, and the protection range is to eliminate the misalignment condition by moving an upper limit of an address range in the memory map to a power-of-two address.
21. The apparatus according to claim 19, further comprising means for confirming that sufficient resources are available to append the protection range to the memory map.
22. The apparatus according to claim 19, wherein the protection range is appended to the memory map if sufficient resources are available, the apparatus further comprising: means for determining that insufficient resources are available to append the protection range to the memory map; and means for iteratively reducing the upper limit of the address range in the memory map until the misalignment condition is eliminated.
23. The apparatus according to claim 19, wherein the protection range is appended via a source address decoder rule to protect against pseudo direct memory access and malware drivers.
24. The apparatus according to any one of claims 19 to 23, wherein the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.
Technology to ensure sufficient registers to fully cache complex memory configurations

Technical Field
Embodiments generally relate to memory registers. More particularly, embodiments relate to technology that ensures sufficient memory type range registers (MTRRs) to fully cache complex memory configurations.

Background
The boot sequence in a typical computing system may include the generation of a mapping between physical memory space and virtual memory space (e.g., a "memory map"), followed by a cache initialization process. The cache initialization process may involve the use of MTRRs to control how address ranges in the memory map are cached (e.g., uncached, write-back, etc.). The number of MTRRs is typically fixed (e.g., ten register pairs), where each MTRR describes an address range whose size is a power of two (e.g., 2^n). Newly developed complex memory architectures may reserve a small percentage of available memory for internal use, so that the remaining amount of available memory is not a power of two. In such a case, the memory map is "misaligned" with the MTRRs, which in turn can result in a relatively high number of MTRRs being used to fully specify the cacheability of the memory architecture.
In fact, if the number of available MTRRs is exceeded, the boot sequence may be halted with a fatal error.

Summary of the Invention
According to a first embodiment of the present disclosure, a performance-enhanced computing system is provided. The computing system includes: a network controller; a processor coupled to the network controller, wherein the processor includes a register; and a memory coupled to the processor, the memory including a set of executable program instructions which, when executed by the processor, cause the computing system to: detect a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of the register; append a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and define an operating characteristic of the memory map via the register.
According to a second embodiment of the present disclosure, a semiconductor apparatus is provided. The apparatus includes: one or more substrates; and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to: detect a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of a register; append a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and define an operating characteristic of the memory map via the register.
According to a third embodiment of the present disclosure, a method of operating a performance-enhanced computing system is provided. The method includes: detecting a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of a register; automatically appending a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and defining an operating characteristic of the memory map via the register.
According to a fourth embodiment of the present disclosure, a semiconductor apparatus is provided. The apparatus includes: means for detecting a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with a granularity of a register; means for automatically appending a protection range to the memory map, wherein the protection range eliminates the misalignment condition; and means for defining an operating characteristic of the memory map via the register.

Description of the Drawings
The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
FIG. 1 is a diagram of an example of an MTRR pair according to an embodiment;
FIG. 2 is a comparative diagram of an example of a conventional memory map and an appended memory map according to an embodiment;
FIG. 3 is a diagram of an example of a register encoding configuration for a conventional memory map;
FIG. 4 is a diagram of an example of a register encoding configuration for an appended memory map according to an embodiment;
FIG. 5 is a flowchart of an example of a method of operating a performance-enhanced computing system according to an embodiment;
FIG. 6 is a flowchart of an example of a method of eliminating a misalignment condition according to an embodiment;
FIG. 7 is a block diagram of an example of a performance-enhanced computing system according to an embodiment;
FIG. 8 is a diagram of an example of a semiconductor apparatus according to an embodiment;
FIG. 9 is a block diagram of an example of a processor according to an embodiment; and
FIG. 10 is a block diagram of an example of a multi-processor based computing system according to an embodiment.

Detailed Description
In a given processor (e.g., host processor, graphics processor), registers such as model specific registers (MSRs) may be used to set the operating characteristics of memory regions accessed by the processor. For example, the memory type range register (MTRR) is a type of MSR that resides in the processor core and specifies the cache characteristics of a memory range. Thus, an MTRR might specify that a particular memory range operates in write-back (WB) mode, so that when information associated with an address in the range is written to cache, the cache line is marked as "dirty" and the information is subsequently written back to memory. Other possible caching modes include, for example, uncacheable (UC), write-through, write-protect, and so forth.
Turning now to FIG. 1, a register pair 20 (20a, 20b) is shown in which a base address register 20a sets the base address ("PhysBase") of an address range and a mask register 20b sets a range mask ("PhysMask") for the address range. In an embodiment, the range mask is selected so that when an AND operation is conducted between a target address within the address range and the range mask of the mask register 20b, the result returns the same value as an AND operation conducted between the base address of the base address register 20a and the range mask. Accordingly, when such a condition occurs, the target address may be treated as having the memory type ("Type", e.g., write-back, uncacheable) of the range, as specified in the base address register 20a.
In an embodiment, a given processor core includes a limited number (e.g., ten) of the register pairs 20. Of particular note is that the size (e.g., granularity) of the address range defined by the illustrated register pair 20 is a power of two. If the size of a memory range is not also a power of two (e.g., 1.75 GiB rather than 2 GiB), a misalignment condition may exist and several of the register pairs 20 may be needed to specify the cache characteristics of the memory range. Indeed, such a situation may exist in more complex memory architectures such as persistent memory modules (PMMs) and/or solid state drives (SSDs). In an embodiment, the misalignment condition is automatically detected, and a protection range (e.g., a range that is inaccessible to the system) is automatically appended to the memory range to eliminate the misalignment condition. As will be discussed in greater detail, such an approach reduces the number of registers needed to fully cache the memory configuration. Accordingly, performance may be enhanced in terms of fewer fatal errors and/or boot sequence failures. Performance may also be enhanced by mitigating the loss of mapped memory (e.g., if the amount of usable memory would otherwise be reduced).
FIG. 2 shows a conventional memory map 30 and an appended memory map 32. In the conventional example shown, an address range 34 (e.g., high dynamic random access memory/DRAMH) has an upper limit 36 (e.g., just below 0x5E270000000, i.e., 0x5E26FFFFFFF) and a lower limit 38 (0x100000000). The physical address 0x5E270000000 includes nine address bits that are set to a value of one, resulting in a physical address that is not well aligned to a power of two.
In the illustrated example, the upper limit 36 represents a misalignment condition because the upper limit 36 is not at a power-of-two address.
With continuing reference to FIGS. 2 to 4, a conventional register encoding configuration 40 requires a total of eight variable MTRRs (MTRR[00]-MTRR[07]) to define the cache characteristics of the address range 34 and the regions above the address range 34 (e.g., "unmapped" and high memory-mapped input/output (MMIOH) regions). This example shows the BIOS (basic input/output system) setting the MTRR default type to UC and then directly mapping the WB regions using powers of two (specifying adjacent power-of-two address ranges). Together, these ranges cover the entire extent of DRAM from address zero to the top of high memory (DRAMH). As already noted, an MTRR base address register specifies the starting address of a range and the mask specifies the limit of the range. Accordingly, when address & mask == base & mask, the target address is within the range.
By contrast, a protection range 44 is appended to the memory map 32 to eliminate the misalignment condition. More particularly, the protection range 44 causes the upper limit 36 of the address range 34 to effectively move to a power-of-two address 46 (e.g., just below 0x80000000000, i.e., 0x7FFFFFFFFFF). Accordingly, an enhanced register encoding configuration 42 involves only a single MTRR (MTRR[00]) to define the cache characteristics of the address range 34. Although the illustrated solution aligns to a power-of-two boundary and uses a minimal number of MTRRs, other solutions that do not align to a power-of-two address may also be used.
For example, when there is not enough address space to align to a power-of-two address, and/or the sum of powers of two does not yield a power-of-two value (e.g., 4 GB + 4 GB + 4 GB = 12 GB, where 4 GB is a power of two but 12 GB is not), more than one MTRR pair may be used to permit mapping to addresses that are not powers of two. The illustrated memory map 32 therefore reduces the number of registers involved in fully caching the memory configuration and enhances performance at least in terms of fewer fatal errors, boot sequence failures, and/or losses of mapped memory.
FIG. 5 shows a method 50 of operating a performance-enhanced computing system. The method 50 may generally be implemented after the generation of a memory map (e.g., mapping physical memory space to virtual memory space) and during a cache initialization process in the computing system. More particularly, the method 50 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, or flash memory; in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), or complex programmable logic devices (CPLDs); in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology; or in any combination thereof.
For example, computer program code to carry out operations shown in the method 50 may be written in any combination of one or more programming languages, including an object-oriented programming language such as JAVA, SMALLTALK, C++ or the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages.
Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, and state information that personalizes electronic circuitry and/or other structural components native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.).
Illustrated processing block 52 provides for detecting a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with the granularity of a register (e.g., an MTRR). In an embodiment, block 52 includes automatically determining whether the upper limit of an address range in the memory map is at a power-of-two address (e.g., where the granularity of the register is also a power of two). Such a determination may be made by reading boot memory (e.g., Unified Extensible Firmware Interface/UEFI memory) and/or querying the memory map. Block 54 automatically appends a protection range to the memory map, wherein the protection range eliminates the misalignment condition. In one example, the granularity of the register is a power of two and the protection range eliminates the misalignment condition by moving the upper limit of the address range in the memory map to a power-of-two address. As will be discussed in greater detail, block 54 may also involve confirming that sufficient resources are available to append the protection range to the memory map.
In an embodiment, block 54 appends the protection range via a source address decoder (SAD) rule. In general, the SAD is a caching and home agent (CHA) component that may define the layout of the physical address space for each group of processors sharing a last level cache (LLC). In an embodiment, the SAD is responsible for directing memory requests to the LLC to which the addressed memory unit is locally attached.
Unlike MTRRs, SAD rules are not limited to power-of-two granularity. Accordingly, a SAD rule may be used to set the size of the protection range so as to achieve sufficient cacheable memory alignment for MTRR programming.
In one example, the protection range is a non-existent memory (NXM) range. The NXM attribute may generally be used to indicate "holes" in the memory map. Illustrated block 56 provides for defining an operating characteristic of the memory map (e.g., a cacheability characteristic) via the register. Thus, block 56 might designate the address range as write-back, uncacheable, write-through, write-protect, and so forth. The illustrated method 50 therefore reduces the number of registers involved in fully caching the memory configuration and enhances performance at least in terms of fewer fatal errors, boot sequence failures, and/or losses of mapped memory.
FIG. 6 shows a method 60 of eliminating a misalignment condition. The method 60 may readily be incorporated into block 54 (FIG. 5), already discussed. More particularly, the method 60 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium (e.g., RAM, ROM, PROM, firmware, flash memory, etc.), in configurable logic (e.g., PLAs, FPGAs, CPLDs), in fixed-functionality logic hardware using circuit technology (e.g., ASIC, CMOS or TTL technology), or in any combination thereof.
Illustrated processing block 62 checks resource sufficiency in response to the misalignment condition. In an embodiment, block 63 determines whether enough silicon resources (e.g., SAD rules, protection ranges, address space, etc.) are available to append the protection range. If so, block 64 may append the protection range to the memory map, wherein the protection range eliminates the misalignment condition.
Otherwise, illustrated block 66 iteratively reduces the upper limit of the address range in the memory map until the misalignment condition is eliminated, and the appending of the protection range to the memory map at block 64 is bypassed. In one example, block 66 includes reducing the upper limit of cacheable memory to the smallest available power of two until all memory and the required cache regions can be fully represented in the MTRR programming. Block 66 may also update the UEFI memory map to mark the uncached memory regions as reserved, to prevent UEFI drivers and operating systems (OSs) from using the degraded memory. The illustrated method 60 therefore further enhances performance by ensuring that sufficient resources are available before the protection range is appended to the memory map.
Turning now to FIG. 7, a performance-enhanced computing system 151 is shown. The system 151 may generally be part of an electronic device/platform having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer, convertible tablet, server), communications functionality (e.g., smartphone), imaging functionality (e.g., camera, camcorder), media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), robotic functionality (e.g., autonomous robot), etc., or any combination thereof. In the illustrated example, the system 151 includes a host processor 153 (e.g., central processing unit/CPU) having a plurality of registers 154 and an integrated memory controller (IMC) 155 that is coupled to a system memory 157 (e.g., PMM or other complex memory configuration).
In an embodiment, the plurality of registers 154 includes a limited number of MTRRs. The illustrated system 151 also includes an input/output (IO) module 159 implemented together with the host processor 153 and a graphics processor 161 on a semiconductor die 163 as a system on chip (SoC). The illustrated IO module 159 communicates with, for example, a display 165 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a network controller 167 (e.g., wired and/or wireless), and mass storage 169 (e.g., hard disk drive/HDD, optical disc, solid state drive/SSD, flash memory).
In an embodiment, the host processor 153, the graphics processor 161, and/or the IO module 159 execute program instructions 171 retrieved from the system memory 157 and/or the mass storage 169 to perform one or more aspects of the method 50 (FIG. 5) and/or the method 60 (FIG. 6), already discussed. Thus, execution of the illustrated instructions 171 may cause the computing system 151 to detect a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with the granularity of a register in the plurality of registers 154. Execution of the instructions 171 may also cause the computing system 151 to automatically append a protection range to the memory map (e.g., in response to the misalignment condition), wherein the protection range eliminates the misalignment condition, and to define an operating characteristic of the memory map via the register. In one example, the protection range is an NXM range appended via a SAD rule, the register is an MTRR, and the operating characteristic is a cache characteristic.
More particularly, there may be two components to the protection range: a SAD rule and a GENPROT (general protection) register range.
In one example, the GENPROT register protects against the following problems/attacks: direct memory access (DMA) is blocked by programming the GENPROT range to cover the NXM range (e.g., providing protection against pseudo DMA); and when a software entity attempts to access the NXM range, additional silicon-level protection returns erroneous data (e.g., a "CRAB abort" is issued that silently drops writes and returns all 1s on reads, providing protection against malware drivers). The SAD rule thus covers the mapping/routing, while the GENPROT register range covers the protection. The illustrated computing system 151 is therefore considered performance-enhanced at least to the extent that it reduces the number of registers involved in fully caching the memory configuration, eliminates fatal errors, reduces boot sequence failures, and/or reduces losses of mapped memory.
FIG. 8 shows a semiconductor package apparatus 173. The illustrated apparatus 173 includes one or more substrates 175 (e.g., silicon, sapphire, gallium arsenide) and logic 177 (e.g., transistor array and other integrated circuit/IC components) coupled to the substrate(s) 175. The logic 177 may be implemented at least partly in configurable logic or fixed-functionality logic hardware. In one example, the logic 177 implements one or more aspects of the method 50 (FIG. 5) and/or the method 60 (FIG. 6), already discussed. Thus, the logic 177 may detect a misalignment condition, wherein the misalignment condition includes a memory map that is misaligned with the granularity of a register, automatically append a protection range to the memory map, wherein the protection range eliminates the misalignment condition, and define an operating characteristic of the memory map via the register. In one example, the protection range is an NXM range appended via a SAD rule, the register is an MTRR, and the operating characteristic is a cache characteristic.
As already noted, the protection range may also have a GENPROT component to protect against pseudo direct memory access and malware drivers. The illustrated apparatus 173 is therefore considered performance-enhanced at least to the extent that it reduces the number of registers involved in fully caching the memory configuration, eliminates fatal errors, and/or reduces boot sequence failures.
In one example, the logic 177 includes transistor channel regions that are positioned (e.g., embedded) within the substrate(s) 175. Thus, the interface between the logic 177 and the substrate(s) 175 may not be an abrupt junction. The logic 177 may also be considered to include an epitaxial layer that is grown on an initial wafer of the substrate(s) 175.
FIG. 9 illustrates a processor core 200 according to one embodiment. The processor core 200 may be the core for any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 200 is illustrated in FIG. 9, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 9. The processor core 200 may be a single-threaded core or, for at least one embodiment, may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core.
FIG. 9 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of a memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 270 may include one or more instructions of code 213 to be executed by the processor core 200, wherein the code 213 may implement one or more aspects of the method 50 (FIG. 5) and/or the method 60 (FIG. 6), already discussed. The processor core 200 follows a program sequence of instructions indicated by the code 213.
Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals that reflect the original code instruction. The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operations corresponding to the converted instructions for execution.
The processor core 200 is shown as including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit, or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by code instructions.
After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.
Although not illustrated in FIG. 9, a processing element may include other elements on chip with the processor core 200. For example, the processing element may include memory control logic along with the processor core 200.
The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.

Referring now to FIG. 10, shown is a block diagram of a computing system 1000 in accordance with an embodiment. FIG. 10 shows a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.

The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050, as shown in FIG. 10. It should be understood that any or all of the interconnects illustrated in FIG. 10 may be implemented as a multi-drop bus rather than point-to-point interconnect.

As shown in FIG. 10, each of the processing elements 1070 and 1080 may be multicore processors, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b). Such cores 1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 9.

Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b. The shared caches 1896a, 1896b may respectively store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b. For example, the shared caches 1896a, 1896b may locally cache data stored in the memories 1032, 1034 for faster access by components of the processor. 
In one or more embodiments, the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.

While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of the processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, the additional processing element(s) may include additional processor(s) that are the same as the first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element.

There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.

The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088. As shown in FIG. 10, the MCs 1072 and 1082 couple the processors to respective memories (i.e., a memory 1032 and a memory 1034), which may be portions of main memory locally attached to the respective processors. 
While the MCs 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.

The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076, 1086, respectively. As shown in FIG. 10, the I/O subsystem 1090 includes P-P interfaces 1094 and 1098. Furthermore, the I/O subsystem 1090 includes an interface 1092 to couple the I/O subsystem 1090 with a high performance graphics engine 1038. In one embodiment, a bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternately, a point-to-point interconnect may couple these components.

In turn, the I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.

As shown in FIG. 10, various I/O devices 1014 (e.g., biometric scanners, speakers, cameras, sensors) may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, communication device(s) 1026, and a data storage unit 1019 such as a disk drive or other mass storage device. The illustrated code 1030 may implement one or more aspects of the method 50 (FIG. 5) and/or the method 60 (FIG. 6), already discussed. 
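As a rough illustration of the misalignment handling that code such as the code 1030 might implement, consider that each power-of-two range consumed by the memory map corresponds to a set bit in its upper limit. The sketch below is not taken from the methods 50 and 60 themselves; the function names and the register-counting model are illustrative assumptions. It shows both strategies described herein: appending a protection range to round the upper limit up to a power-of-two address, and, as a fallback, iteratively reducing the upper limit.

```python
def ranges_needed(top):
    """Number of power-of-two aligned ranges required to describe
    [0, top): one range per set bit in the upper limit."""
    return bin(top).count("1")

def append_protection_range(top, available):
    """If the map needs more ranges than the available registers,
    round the upper limit up to the next power of two and return
    (new_top, protection_range_size). The protection range covers
    non-existent memory above the real upper limit."""
    if ranges_needed(top) <= available:
        return top, 0  # no misalignment condition
    new_top = 1 << top.bit_length()
    return new_top, new_top - top

def reduce_upper_limit(top, available):
    """Fallback when resources for a protection range are lacking:
    iteratively drop the smallest range until the map fits."""
    while ranges_needed(top) > available:
        top &= top - 1  # clear the lowest set bit
    return top
```

For instance, an upper limit of 11 units (binary 1011) requires three ranges (8 + 2 + 1); appending a 5-unit protection range moves the limit to 16, which requires only one.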
In addition, an audio I/O 1024 may be coupled to the second bus 1020, and a battery 1010 may supply power to the computing system 1000.

Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of FIG. 10, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 10 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 10.

Additional notes and examples:

Example 1 includes a performance-enhanced computing system comprising a network controller, a processor coupled to the network controller, wherein the processor includes a register, and a memory coupled to the processor, the memory including a set of executable program instructions, which when executed by the processor, cause the computing system to detect a misalignment condition, wherein the misalignment condition includes a memory map being misaligned with a granularity of the register, append a protection range to the memory map, wherein the protection range eliminates the misalignment condition, and define an operating characteristic of the memory map via the register.

Example 2 includes the computing system of Example 1, wherein the granularity of the register is a power of two, and wherein the protection range eliminates the misalignment condition by moving an upper limit of an address range in the memory map to a power of two address.

Example 3 includes the computing system of Example 1, wherein the instructions, when executed, cause the computing system to confirm that sufficient resources are available to append the protection range to the memory map.

Example 4 includes the computing system of Example 1, wherein the protection range is appended to the memory map if sufficient resources are available, and wherein the instructions, when executed, cause the computing system to determine that insufficient resources are available to append the protection range to the memory map, and iteratively reduce an upper limit of an address range in the memory map until the misalignment condition is eliminated.

Example 5 includes the computing system of Example 1, wherein the protection range is appended via a source address decoder rule to prevent pseudo direct memory access and protect against malicious software drivers.

Example 6 includes the computing system of any one of Examples 1 to 5, wherein the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.

Example 7 includes a semiconductor device comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to detect a misalignment condition, wherein the misalignment condition includes a memory map being misaligned with a granularity of a register, append a protection range to the memory map, wherein the protection range eliminates the misalignment condition, and define an operating characteristic of the memory map via the register.

Example 8 includes the semiconductor device of Example 7, wherein the granularity of the register is a power of two, and wherein the protection range eliminates the misalignment condition by moving an upper limit of an address range in the memory map to a power of two address.

Example 9 includes the semiconductor device of Example 7, wherein the logic is to confirm that sufficient resources are available to append the protection range to the memory map.

Example 10 includes the semiconductor device of Example 7, wherein the protection range is appended to the memory 
map if sufficient resources are available, and wherein the logic is to determine that insufficient resources are available to append the protection range to the memory map, and iteratively reduce an upper limit of an address range in the memory map until the misalignment condition is eliminated.

Example 11 includes the semiconductor device of any one of Examples 7 to 10, wherein the protection range is appended via a source address decoder rule, wherein the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.

Example 12 includes the semiconductor device of any one of Examples 7 to 11, wherein the logic coupled to the one or more substrates includes a transistor channel region that is positioned within the one or more substrates.

Example 13 includes at least one computer readable storage medium comprising a set of executable program instructions, which when executed by a computing system, cause the computing system to detect a misalignment condition, wherein the misalignment condition includes a memory map being misaligned with a granularity of a register, append a protection range to the memory map, wherein the protection range eliminates the misalignment condition, and define an operating characteristic of the memory map via the register.

Example 14 includes the at least one computer readable storage medium of Example 13, wherein the granularity of the register is a power of two, and wherein the protection range eliminates the misalignment condition by moving an upper limit of an address range in the memory map to a power of two address.

Example 15 includes the at least one computer readable storage medium of Example 13, wherein the instructions, when executed, cause the computing system to confirm that sufficient resources are available to append the protection range to the memory map.

Example 16 includes the at least one computer readable storage medium of Example 13, wherein the protection range is appended to the memory map if sufficient resources are available, and wherein the instructions, when executed, cause the computing system to determine that insufficient resources are available to append the protection range to the memory map, and iteratively reduce an upper limit of an address range in the memory map until the misalignment condition is eliminated.

Example 17 includes the at least one computer readable storage medium of Example 13, wherein the protection range is appended via a source address decoder rule to prevent pseudo direct memory access and protect against malicious software drivers.

Example 18 includes the at least one computer readable storage medium of any one of Examples 13 to 17, wherein the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.

Example 19 includes a method of operating a performance-enhanced computing system, the method comprising detecting a misalignment condition, wherein the misalignment condition includes a memory map being misaligned with a granularity of a register, automatically appending a protection range to the memory map, wherein the protection range eliminates the misalignment condition, and defining an operating characteristic of the memory map via the register.

Example 20 includes the method of Example 19, wherein the granularity of the register is a power of two, and wherein the protection range eliminates the misalignment 
condition by moving an upper limit of an address range in the memory map to a power of two address.

Example 21 includes the method of Example 19, further comprising confirming that sufficient resources are available to append the protection range to the memory map.

Example 22 includes the method of Example 19, wherein the protection range is appended to the memory map if sufficient resources are available, the method further comprising determining that insufficient resources are available to append the protection range to the memory map, and iteratively reducing an upper limit of an address range in the memory map until the misalignment condition is eliminated.

Example 23 includes the method of Example 19, wherein the protection range is appended via a source address decoder rule to prevent pseudo direct memory access and protect against malicious software drivers.

Example 24 includes the method of any one of Examples 19 to 23, wherein the protection range is a non-existent memory range, the register is a memory type range register, and the operating characteristic is a cache characteristic.

Example 25 includes an apparatus comprising means for performing the method of any one of Examples 19 to 24.

Thus, the technology described herein may provide a scalable solution that addresses potential MTRR shortages in a manner that maximizes coverage and reduces escalation risk.

Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include, but are not limited to, processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controllers, ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. 
Some may be different, to indicate more constituent signal paths, have a number label to indicate a number of constituent signal paths, and/or have arrows at one or more ends to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented (i.e., such specifics should be well within the purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. 
The description is thus to be regarded as illustrative instead of limiting.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrase "one or more of A, B, or C" may mean A; B; C; A and B; A and C; B and C; or A, B and C.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
Methods, systems, and devices for bit retiring to mitigate bit errors are described. A memory device may retrieve a set of bits from a first row of an address space and may determine that the set of bits includes one or more errors. The memory device may remap at least a portion of the first row from a first row index to a second row index, where the second row index, before the remapping, corresponds to a second row within the address space addressable by a host device. Additionally or alternatively, the memory device may receive a first command to access a first logical address of a memory array that is associated with a first row index. The memory device may determine that the first row includes one or more errors and may transmit a signal indicating that the first row includes the one or more errors.
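The signaling mechanism summarized above can be sketched as a simple access path. The names below are invented for illustration and assume the device keeps a set of row indices known to contain errors; a real memory device would implement this in hardware, not Python.

```python
def handle_access(row_index, bad_rows, rows):
    """Sketch: refrain from accessing a row known to contain errors and
    instead signal the error back to the host (names hypothetical)."""
    if row_index in bad_rows:
        # Transmit a signal indicating the row includes one or more errors;
        # the host may then request a different row index.
        return {"status": "row_error", "row_index": row_index}
    return {"status": "ok", "data": rows[row_index]}

def host_read(row_index, device):
    """Host side: retry with another logical address on a row-error signal.
    The +1 fallback is an arbitrary illustrative choice."""
    resp = device(row_index)
    if resp["status"] == "row_error":
        resp = device(row_index + 1)
    return resp
```

The key property is that the defective row is never sensed between receiving the command and transmitting the signal; the host learns of the error and chooses a different address.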
CLAIMS

What is claimed is:

1. A method, comprising: retrieving a set of bits from a first row of an address space of a memory array, the address space addressable by a host device; determining that the set of bits includes one or more errors; and remapping at least a portion of the first row from a first row index to a second row index based at least in part on determining that the set of bits includes the one or more errors, wherein the second row index, before the remapping, corresponds to a second row within the address space addressable by the host device.

2. The method of claim 1, further comprising: remapping at least a portion of the second row from the second row index to the first row index based at least in part on determining that the set of bits includes the one or more errors.

3. The method of claim 2, further comprising: receiving, from the host device, a command to access a first logical address associated with the first row index based at least in part on remapping the first row from the first row index to the second row index; and accessing the second row based at least in part on receiving the request to access the first logical address and remapping the second row to the first row index.

4. The method of claim 1, further comprising: retrieving a second set of bits from a third row of the address space of the memory array; determining that the second set of bits includes one or more second errors; and remapping at least a portion of the third row associated with the one or more second errors from a third row index to a fourth row index based at least in part on determining that the second set of bits includes the one or more second errors, wherein the fourth row index, before the remapping, corresponds to a fourth row within the address space, and wherein the fourth row index is subsequent to the second row index.

5. 
The method of claim 1, further comprising: remapping a third row index from a third row of the address space to a fourth row of a second address space, wherein the second address space comprises one or more redundant rows for replacing one or more rows of the address space, wherein remapping the first row is based at least in part on remapping the third row index.

6. The method of claim 1, wherein: the first row comprises one or more first subarrays; and remapping the portion of the first row further comprises remapping each of the one or more first subarrays from the first row index to the second row index.

7. The method of claim 1, wherein: the first row comprises two or more first subarrays; the at least the portion of the first row is associated with the one or more errors and comprises a first of the two or more first subarrays, the method further comprising: remapping a first subarray of the two or more first subarrays from the first row index to the second row index; and maintaining a mapping of a second subarray of the two or more first subarrays to the first row index.

8. The method of claim 1, wherein the one or more errors are associated with a retention time of one or more memory cells of the first row.

9. The method of claim 1, wherein: each row of the address space is associated with a corresponding row index of a set of row indices; and the second row index has a highest value or lowest value of the set of row indices.

10. The method of claim 1, further comprising: initiating a test operation on the memory array, wherein retrieving the set of bits and determining that the set of bits includes the one or more errors occurs as part of the test operation.

11. The method of claim 1, wherein the one or more errors includes single bit errors within the set of bits or multi-bit errors within the set of bits.

12. 
The method of claim 1, wherein a memory device comprising the memory array or the host device determines that the set of bits includes the one or more errors.

13. A method, comprising: receiving, from a host device, a first command to access a first logical address of a memory array that is associated with a first row index; determining that a first row associated with the first row index includes one or more errors based at least in part on receiving the first command; and transmitting, to the host device, a signal indicating that the first row includes the one or more errors based at least in part on determining that the first row includes the one or more errors.

14. The method of claim 13, wherein the signal further indicates for the host device to request access to a second row index associated with a second row of the memory array.

15. The method of claim 14, further comprising: receiving, from the host device, a second command to access a second logical address of the memory array that is associated with the second row index based at least in part on transmitting the signal.

16. The method of claim 13, further comprising: refraining from accessing the first logical address of the memory array based at least in part on determining that the first row includes the one or more errors.

17. The method of claim 16, wherein the first logical address is refrained from being accessed between a first time associated with receiving the first command and a second time associated with transmitting the signal.

18. The method of claim 13, further comprising: identifying the first row index from a set of row indices, wherein each row index of the set of row indices is associated with a corresponding row that includes one or more respective errors, wherein determining that the first row includes the one or more errors is based at least in part on identifying the first row index from the set of row indices.

19. 
The method of claim 18, further comprising: initiating a test operation on the memory array; and identifying the set of row indices that each include the one or more respective errors as part of the test operation, wherein identifying the first row index is based at least in part on initiating the test operation and identifying the set of row indices.

20. The method of claim 13, wherein the one or more errors are associated with a retention time of one or more memory cells of the first row.

21. An apparatus, comprising: a memory array, wherein the memory array comprises an array of memory cells that each comprise capacitive storage elements; and a circuit coupled with the memory array and configured to cause the apparatus to: retrieve a set of bits from a first row of an address space of the memory array, wherein the address space is addressable by a host device; determine that the set of bits includes one or more errors; and remap at least a portion of the first row from a first row index to a second row index based at least in part on determining that the set of bits includes the one or more errors, wherein the second row index, before the remapping, corresponds to a second row within the address space addressable by the host device.

22. The apparatus of claim 21, wherein the circuit is further configured to cause the apparatus to: remap at least a portion of the second row from the second row index to the first row index based at least in part on determining that the set of bits includes the one or more errors.

23. 
The apparatus of claim 22, wherein the circuit is further configured to cause the apparatus to: receive, from the host device, a command to access a first logical address associated with the first row index based at least in part on remapping the first row from the first row index to the second row index; and access the second row based at least in part on receiving the request to access the first logical address and remapping the second row to the first row index.

24. The apparatus of claim 21, wherein the circuit is further configured to cause the apparatus to: retrieve a second set of bits from a third row of the address space of the memory array; determine that the second set of bits includes one or more second errors; and remap at least a portion of the third row associated with the one or more second errors from a third row index to a fourth row index based at least in part on determining that the second set of bits includes the one or more second errors, wherein the fourth row index, before the remapping, corresponds to a fourth row within the address space, and wherein the fourth row index is subsequent to the second row index.

25. The apparatus of claim 21, wherein the circuit is further configured to cause the apparatus to: remap a third row index from a third row of the address space to a fourth row of a second address space, wherein the second address space comprises one or more redundant rows for replacing one or more rows of the address space, wherein remapping the first row is based at least in part on remapping the third row index.

26. The apparatus of claim 21, wherein: the first row comprises one or more first subarrays; remapping the portion of the first row further comprises the circuit configured to cause the apparatus to remap each of the one or more first subarrays from the first row index to the second row index.

27. 
The apparatus of claim 21, wherein: the first row comprises two or more first subarrays; the at least the portion of the first row is associated with the one or more errors and comprises a first of the two or more first subarrays, and wherein the circuit is further configured to cause the apparatus to: remap a first subarray of the two or more first subarrays from the first row index to the second row index; and maintain a mapping of a second subarray of the two or more first subarrays to the first row index.

28. The apparatus of claim 21, wherein the one or more errors are associated with a retention time of one or more memory cells of the first row.

29. An apparatus, comprising: a memory array, wherein the memory array comprises an array of memory cells that each comprise capacitive storage elements; a circuit coupled with the memory array and configured to cause the apparatus to: receive, from a host device, a first command to access a first logical address of the memory array that is associated with a first row index; determine that a first row associated with the first row index includes one or more errors based at least in part on receiving the first command; and transmit, to the host device, a signal indicating that the first row includes the one or more errors based at least in part on determining that the first row includes the one or more errors.

30. The apparatus of claim 29, wherein the signal further indicates for the host device to request access to a second row index associated with a second row of the memory array.

31. The apparatus of claim 30, wherein the circuit is further configured to cause the apparatus to: receive, from the host device, a second command to access a second logical address of the memory array that is associated with the second row index based at least in part on transmitting the signal.

32. 
The apparatus of claim 29, wherein the circuit is further configured to cause the apparatus to: refrain from accessing the first logical address of the memory array based at least in part on determining that the first row includes the one or more errors.

33. The apparatus of claim 29, wherein the circuit is further configured to cause the apparatus to: identify the first row index from a set of row indices, wherein each row index of the set of row indices is associated with a corresponding row that includes one or more respective errors, wherein determining that the first row includes the one or more errors is based at least in part on identifying the first row index from the set of row indices.

34. A non-transitory computer-readable medium storing code comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to: retrieve a set of bits from a first row of an address space of a memory array, wherein the address space is addressable by a host device; determine that the set of bits includes one or more errors; and remap at least a portion of the first row from a first row index to a second row index based at least in part on determining that the set of bits includes the one or more errors, wherein the second row index, before the remapping, corresponds to a second row within the address space addressable by the host device.

35. The non-transitory computer-readable medium of claim 34, wherein the instructions, when executed by the processor of the electronic device, further cause the electronic device to remap at least a portion of the second row from the second row index to the first row index based at least in part on determining that the set of bits includes the one or more errors.
BIT RETIRING TO MITIGATE BIT ERRORS

CROSS REFERENCE

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 17/580,329 by Schaefer, entitled "BIT RETIRING TO MITIGATE BIT ERRORS", filed January 20, 2022, and U.S. Provisional Patent Application No. 63/142,781 by Schaefer, entitled "BIT RETIRING TO MITIGATE BIT ERRORS", filed January 28, 2021; each of which is assigned to the assignee hereof and each of which is expressly incorporated by reference in its entirety herein.

FIELD OF TECHNOLOGY

[0002] The following relates generally to one or more systems for memory and more specifically to bit retiring to mitigate bit errors.

BACKGROUND

[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming memory cells within a memory device to various states. For example, binary memory cells may be programmed to one of two supported states, often denoted by a logic 1 or a logic 0. In some examples, a single memory cell may support more than two states, any one of which may be stored. To access the stored information, a component may read, or sense, at least one stored state in the memory device. To store information, a component may write, or program, the state in the memory device.

[0004] Various types of memory devices and memory cells exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), self-selecting memory, chalcogenide memory technologies, and others. Memory cells may be volatile or non-volatile. Non-volatile memory, e.g., FeRAM, may maintain its stored logic state for extended periods of time even in the absence of an external power source. 
Volatile memory devices, e.g., DRAM, may lose their stored state when disconnected from an external power source.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 illustrates an example of a system that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0006] FIG. 2 illustrates an example of a system that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0007] FIG. 3 illustrates an example of a process flow that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0008] FIG. 4 illustrates an example of a bit retiring procedure that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0009] FIG. 5 illustrates an example of a bit retiring procedure that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0010] FIG. 6 illustrates an example of a process flow that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0011] FIG. 7 shows a block diagram of a memory device that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0012] FIG. 8 shows a flowchart illustrating a method or methods that support bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

[0013] FIG. 9 shows a flowchart illustrating a method or methods that support bit retiring to mitigate bit errors in accordance with examples as disclosed herein.

DETAILED DESCRIPTION

[0014] In some memory systems, a row of a memory array may have a bit error when a memory device retrieves a set of bits from the row. The bit error may occur due to a defect (e.g., a memory cell of the row may have a significantly higher or lower retention time than other memory cells of the row, which may be referred to as a variable retention time (VRT)).
For instance, the bit error may occur from either internal or external sources (e.g., due to a neutron particle or a VRT) after testing of the memory device occurs. To avoid rows that have defects, the memory device may remap a row index associated with the row that has the bit error to a corresponding redundant row, where a redundant row may be a row not addressable by a host device prior to the memory device performing a remapping to the row. However, in some examples, the quantity of redundant rows may be limited such that there are more rows with bit errors than there are redundant rows.[0015] Systems, devices, and techniques are described for the memory device to remap the row index associated with the row with the bit error to a different row in the memory that is addressable by a host device. By remapping the row index to a row in standard memory, the memory device may increase a likelihood that the host device avoids requesting access to rows with bit errors without consuming redundant rows. For example, the row with the error may be swapped with a row that is less likely to be accessed by the host device during standard operations of the memory device. Additionally, remapping the row index to a different row in standard memory may enable the host device to reduce a likelihood of accessing rows with bit errors.[0016] Additionally or alternatively, the memory device may transmit, to a host device, an indication that a row associated with a logical address requested by the host device for access has one or more bit errors. In some such examples, the host device may avoid accessing rows with one or more bit errors after receiving the indication. By transmitting the indication, the memory device may enable the host device to avoid rows with bit errors.[0017] Features of the disclosure are initially described in the context of systems as described with reference to FIGs. 1 and 2.
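The swap described in [0015] can be pictured as a small remap table. The following is a minimal illustrative sketch (all names are hypothetical, not the claimed implementation), in which a row index whose row has a known bit error is exchanged with a host-addressable row index taken from the high end of the standard address space, i.e., a row assumed less likely to be accessed during standard operation:

```python
# Hypothetical sketch of the row swap in [0015]: a row index whose row
# has a known bit error is exchanged with a host-addressable row index
# taken from the high end of the standard address space (a row assumed
# less likely to be accessed). Names are illustrative only.

class RowRemapper:
    def __init__(self, num_rows):
        # Identity mapping: logical row index -> physical row.
        self.mapping = list(range(num_rows))
        # Next seldom-used slot, consumed from the highest index down.
        self.next_retire_slot = num_rows - 1

    def retire(self, bad_index):
        """Swap the defective row's index with the current retire slot."""
        slot = self.next_retire_slot
        self.mapping[bad_index], self.mapping[slot] = (
            self.mapping[slot], self.mapping[bad_index])
        self.next_retire_slot -= 1
        return slot  # logical index that now resolves to the defective row

    def physical_row(self, logical_index):
        return self.mapping[logical_index]
```

In this sketch, a host request for the retired logical index transparently resolves to a healthy physical row, while the defective physical row remains reachable only through a seldom-used high index, and no redundant rows are consumed.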
Features of the disclosure are described in the context of process flows and bit retiring procedures as described with reference to FIGs. 3-6. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and flowcharts that relate to bit retiring to mitigate bit errors as described with reference to FIGs. 7-9.[0018] FIG. 1 illustrates an example of a system 100 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein. The system 100 may include a host device 105, a memory device 110, and a plurality of channels 115 coupling the host device 105 with the memory device 110. The system 100 may include one or more memory devices 110, but aspects of the one or more memory devices 110 may be described in the context of a single memory device (e.g., memory device 110).[0019] The system 100 may include portions of an electronic device, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a vehicle, or other systems. For example, the system 100 may illustrate aspects of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, or the like. The memory device 110 may be a component of the system operable to store data for one or more other components of the system 100.[0020] At least portions of the system 100 may be examples of the host device 105.
The host device 105 may be an example of a processor or other circuitry within a device that uses memory to execute processes, such as within a computing device, a mobile computing device, a wireless device, a graphics processing device, a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, a vehicle controller, a system on a chip (SoC), or some other stationary or portable electronic device, among other examples. In some examples, the host device 105 may refer to the hardware, firmware, software, or a combination thereof that implements the functions of an external memory controller 120. In some examples, the external memory controller 120 may be referred to as a host or a host device 105.[0021] A memory device 110 may be an independent device or a component that is operable to provide physical memory addresses/space that may be used or referenced by the system 100. In some examples, a memory device 110 may be configurable to work with one or more different types of host devices. Signaling between the host device 105 and the memory device 110 may be operable to support one or more of: modulation schemes to modulate the signals, various pin configurations for communicating the signals, various form factors for physical packaging of the host device 105 and the memory device 110, clock signaling and synchronization between the host device 105 and the memory device 110, timing conventions, or other factors.[0022] The memory device 110 may be operable to store data for the components of the host device 105. In some examples, the memory device 110 may act as a slave-type device to the host device 105 (e.g., responding to and executing commands provided by the host device 105 through the external memory controller 120). 
Such commands may include one or more of a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands.[0023] The host device 105 may include one or more of an external memory controller 120, a processor 125, a basic input/output system (BIOS) component 130, or other components such as one or more peripheral components or one or more input/output controllers. The components of host device 105 may be coupled with one another using a bus 135.[0024] The processor 125 may be operable to provide control or other functionality for at least portions of the system 100 or at least portions of the host device 105. The processor 125 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or a combination of these components. In such examples, the processor 125 may be an example of a central processing unit (CPU), a graphics processing unit (GPU), a general purpose GPU (GPGPU), or an SoC, among other examples. In some examples, the external memory controller 120 may be implemented by or be a part of the processor 125.[0025] The BIOS component 130 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system 100 or the host device 105. The BIOS component 130 may also manage data flow between the processor 125 and the various components of the system 100 or the host device 105. The BIOS component 130 may include a program or software stored in one or more of read-only memory (ROM), flash memory, or other non-volatile memory.[0026] The memory device 110 may include a device memory controller 155 and one or more memory dies 160 (e.g., memory chips) to support a desired capacity or a specified capacity for data storage. 
Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, memory array 170-N). A memory array 170 may be a collection (e.g., one or more grids, one or more banks, one or more tiles, one or more sections) of memory cells, with each memory cell being operable to store at least one bit of data. A memory device 110 including two or more memory dies may be referred to as a multi-die memory or a multi-die package or a multi-chip memory or a multi-chip package.[0027] The device memory controller 155 may include circuits, logic, or components operable to control operation of the memory device 110. The device memory controller 155 may include the hardware, the firmware, or the instructions that enable the memory device 110 to perform various operations and may be operable to receive, transmit, or execute commands, data, or control information related to the components of the memory device 110. The device memory controller 155 may be operable to communicate with one or more of the external memory controller 120, the one or more memory dies 160, or the processor 125. In some examples, the device memory controller 155 may control operation of the memory device 110 described herein in conjunction with the local memory controller 165 of the memory die 160.[0028] In some examples, the memory device 110 may receive data or commands or both from the host device 105. For example, the memory device 110 may receive a write command indicating that the memory device 110 is to store data for the host device 105 or a read command indicating that the memory device 110 is to provide data stored in a memory die 160 to the host device 105.[0029] A local memory controller 165 (e.g., local to a memory die 160) may include circuits, logic, or components operable to control operation of the memory die 160.
In some examples, a local memory controller 165 may be operable to communicate (e.g., receive or transmit data or commands or both) with the device memory controller 155. In some examples, a memory device 110 may not include a device memory controller 155, and a local memory controller 165 or the external memory controller 120 may perform various functions described herein. As such, a local memory controller 165 may be operable to communicate with the device memory controller 155, with other local memory controllers 165, or directly with the external memory controller 120, or the processor 125, or a combination thereof. Examples of components that may be included in the device memory controller 155 or the local memory controllers 165 or both may include receivers for receiving signals (e.g., from the external memory controller 120), transmitters for transmitting signals (e.g., to the external memory controller 120), decoders for decoding or demodulating received signals, encoders for encoding or modulating signals to be transmitted, or various other circuits or controllers operable for supporting described operations of the device memory controller 155 or local memory controller 165 or both.[0030] The external memory controller 120 may be operable to enable communication of one or more of information, data, or commands between components of the system 100 or the host device 105 (e.g., the processor 125) and the memory device 110. The external memory controller 120 may convert or translate communications exchanged between the components of the host device 105 and the memory device 110. In some examples, the external memory controller 120 or other component of the system 100 or the host device 105, or its functions described herein, may be implemented by the processor 125.
For example, the external memory controller 120 may be hardware, firmware, or software, or some combination thereof implemented by the processor 125 or other component of the system 100 or the host device 105. Although the external memory controller 120 is depicted as being external to the memory device 110, in some examples, the external memory controller 120, or its functions described herein, may be implemented by one or more components of a memory device 110 (e.g., a device memory controller 155, a local memory controller 165) or vice versa.[0031] The components of the host device 105 may exchange information with the memory device 110 using one or more channels 115. The channels 115 may be operable to support communications between the external memory controller 120 and the memory device 110. Each channel 115 may be an example of a transmission medium that carries information between the host device 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission mediums (e.g., conductors) between terminals associated with the components of system 100. A signal path may be an example of a conductive path operable to carry a signal. For example, a channel 115 may include a first terminal including one or more pins or pads at the host device 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and a pin may be operable to act as part of a channel.[0032] Channels 115 (and associated signal paths and terminals) may be dedicated to communicating one or more types of information. For example, the channels 115 may include one or more command and address (CA) channels 186, one or more clock signal (CK) channels 188, one or more data (DQ) channels 190, one or more other channels 192, or a combination thereof.
In some examples, signaling may be communicated over the channels 115 using single data rate (SDR) signaling or double data rate (DDR) signaling. In SDR signaling, one modulation symbol (e.g., signal level) of a signal may be registered for each clock cycle (e.g., on a rising or falling edge of a clock signal). In DDR signaling, two modulation symbols (e.g., signal levels) of a signal may be registered for each clock cycle (e.g., on both a rising edge and a falling edge of a clock signal).[0033] In some examples, CA channels 186 may be operable to communicate commands between the host device 105 and the memory device 110 including control information associated with the commands (e.g., address information). For example, commands carried by the CA channel 186 may include a read command with an address of the desired data. In some examples, a CA channel 186 may include any quantity of signal paths to decode one or more of address or command data (e.g., eight or nine signal paths).[0034] In some examples, data channels 190 may be operable to communicate one or more of data or control information between the host device 105 and the memory device 110. For example, the data channels 190 may communicate information (e.g., bi-directional) to be written to the memory device 110 or information read from the memory device 110.[0035] In some examples, a row of a memory array 170 may produce a bit error when a memory device 110 retrieves a set of bits from the row. In some examples, the bit error may occur due to a defect (e.g., a memory cell of the row may have a significantly higher or lower retention time than other memory cells of the row, which may be referred to as a VRT). For instance, the bit error may occur from either internal or external sources (e.g., due to a neutron particle or a VRT) after testing of the memory device 110 occurs. 
To avoid rows that have defects, the memory device 110 may remap a row index associated with the row with the bit error to a corresponding redundant row, where a redundant row may be a row that was not addressable by a host device 105 prior to the memory device 110 performing a remapping to the row at least once. However, in some examples, the quantity of redundant rows may be limited such that there are more rows with bit errors than there are redundant rows.[0036] In some examples, the memory device 110 may remap the row index associated with the row with the bit error to a corresponding row in standard memory, where the row in standard memory may be a row that was addressable by a host device 105. By remapping the row index to the row in standard memory, the memory device 110 may increase a likelihood that the host device avoids requesting access to rows with bit errors. Such techniques may also decrease a likelihood that redundant rows may be used, and thereby may reduce an amount of area that may be used for redundant rows in some cases. Additionally, remapping the row index to a row in standard memory may enable the host device 105 to reduce a likelihood of accessing rows with bit errors in examples where the memory device 110 does not include redundant rows.[0037] Additionally or alternatively, the memory device 110 may transmit, to a host device 105 each time the host device 105 requests access to a logical address that maps to a row with bit errors, an indication that the row has bit errors. In some such examples, the host device 105 may request access to a different logical address after receiving the indication. By transmitting an indication, the memory device 110 may enable the host device 105 to avoid writing data to rows with bit errors. Additionally, transmitting the indication may enable the host device to reduce a likelihood of accessing rows with bit errors in some examples.[0038] FIG.
2 illustrates an example of a system 200 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein. System 200 may include a host device 205, which may be an example of a host device 105 as described with reference to FIG. 1. System 200 may also include a memory device 210, which may be an example of a memory device 110 as described with reference to FIG. 1.[0039] The system 200 may include a first address space 215 and a second address space 220. Each address space may include a set of addresses (e.g., logical addresses) that each map to a corresponding row index, where each row index is associated with a corresponding row in a memory array of the memory device 210. For instance, the first address space 215 may include row indices 225 and the second address space 220 may include row indices 230. The first address space 215 may be addressable by a host device 205 (e.g., the host device 205 may request access to and may store data at rows of the first address space 215) and rows of the first address space 215 may be referred to as standard rows. The second address space 220 may be an example of redundant address space that may be used to replace defective rows (or other defective components) in the first address space 215. The second address space 220 may initially be inaccessible by the host device 205, but may become addressable by the host device 205 if a row in the second address space 220 is used to replace (e.g., be remapped to) a row in the first address space 215. The rows of the second address space 220 may be referred to as redundant rows.[0040] In some examples, a row that is mapped to a row index 225 of the first address space 215 may produce a bit error. The errors may be identified when the memory device 210 performs a test operation that includes retrieving a set of bits from the row.
The bit error may occur, for instance, due to a defect (e.g., a memory cell of the row may have a significantly higher or lower retention time than other memory cells of the row, which may be referred to as a VRT). For instance, the bit error may occur from either internal or external sources (e.g., due to a neutron particle or a VRT) after testing of the memory device 210 occurs. To avoid a row of the first address space 215 with defects, the memory device may remap the row index 225 to a redundant row of the second address space 220. In this manner, the redundant row (which was part of the second address space 220) may become a row of the first address space 215. As such, when a host device 205 requests to access the row index 225 that initially mapped to a row that has a bit error, the memory device 210 may instead access the redundant row. The memory device may not map the row with bit errors to another row index 225 and/or may map the row with bit errors to a row index 230 of the second address space 220.[0041] The quantity of redundant rows of the second address space 220 may be limited. As such, in some examples, the quantity of rows in the first address space 215 with bit errors may exceed the quantity of redundant rows in the second address space 220. Additionally or alternatively, there may be examples where memory device 210 does not have a second address space 220 (e.g., there may be no redundant rows). As such, there may be examples where the host device requests access to a row that has bit errors. If the row has a single bit error due to defects but no other errors are introduced when the set of bits is retrieved, the memory device 210 or the host device 205 may be capable of correcting the bit errors using error control operations, such as error correcting codes. 
However, having a known error in a row may reduce the effectiveness of such error control operations because some error control operations have a limited quantity of errors that they can correct for in the data. In such cases, a memory device may have less of an ability to correct for transient errors because of permanent errors that are present in the memory array. For example, if the row has a single bit error due to defects but at least one error is introduced when the set of bits is retrieved, the memory device 210 or the host device 205 may fail to correct the bit errors when the error control operation is capable of correcting a single error but not two errors. If the bit errors are not corrected, the data stored at the row with bit errors may be lost.[0042] To prevent data from being lost, a memory device 210 may perform one or more methods to enable a host device 205 to reduce a likelihood of requesting access to rows with bit errors. For instance, the host device 205 may send a write command to store data to a logical address that maps to a row of the first address space 215 with bit errors. In such examples, the memory device 210 may transmit, to the host device 205, an indication that the row has bit errors. As such, the host device 205 may issue a new write command that requests access to a different logical address (e.g., a logical address that maps to a row that does not have bit errors) after receiving the indication. By requesting access to a different logical address, the host device may avoid accessing rows that have bit errors, even in examples where there are more rows with bit errors than redundant rows. Additional details about performing these one or more methods may be described with reference to FIG. 6. 
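The host-side behavior described above (issuing a new write to a different logical address after the device indicates that the addressed row has bit errors) might be sketched as follows; the status-returning `device_write` interface and the `KNOWN_BAD_ROWS` set are assumptions made for illustration, not an API defined by this disclosure:

```python
# Hypothetical host-side sketch of the retry behavior in [0042]: if the
# memory device indicates that the addressed row has known bit errors,
# the host issues a new write to a different logical address. The
# status-returning device_write interface is assumed for illustration.

KNOWN_BAD_ROWS = {0, 1}  # rows the device has flagged as having bit errors

def device_write(row, data, storage):
    """Return True on success, or False as the bit-error indication."""
    if row in KNOWN_BAD_ROWS:
        return False
    storage[row] = data
    return True

def host_write(data, storage, num_rows):
    """Try successive logical addresses until an error-free row accepts."""
    for row in range(num_rows):
        if device_write(row, data, storage):
            return row
    raise RuntimeError("no error-free row available")
```

In this sketch the host never commits data to a flagged row, so the error-control budget of the code protecting the stored data is not spent on known defects and remains available for transient errors.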
[0043] Additionally or alternatively, the memory device 210 may remap at least a portion of a first row (e.g., a portion with bit errors) from a first row index 225 of the first address space 215 to a second row index 225 of the first address space 215. In some cases, some rows of the first address space 215 may be more likely to be accessed by the host device 205 and other rows of the first address space 215 may be less likely to be accessed by the host device 205. Rows with errors may be swapped (or remapped) with rows that are less likely to be accessed, thereby reducing a likelihood that the host device 205 uses rows with permanent or manufacturing errors. The first address space 215 may include a designated set of row indices 225 (e.g., a set including the highest-value row indices 225 or the lowest-value row indices 225) to be mapped to rows or portions thereof that include bit errors, and the designated set may include the second row index 225. Additionally, the memory device 210 may remap the first row index 225 to a different row in the first address space 215. For instance, the memory device 210 may remap the first row index 225 to a row that the second row index 225 mapped to prior to the remapping. The total quantity of rows of the first address space 215 that may be used for remapping may have a defined value or may be undefined (e.g., unlimited).[0044] By remapping the row index 225 to another row previously addressable in the first address space 215, the memory device 210 may help the host device 205 avoid requesting access to rows with bit errors, even in examples in which there are more rows with bit errors than there are redundant rows. Additionally, by remapping the row index 225 to another row previously addressable in the first address space 215, the host device 205 may reduce a likelihood of accessing rows with bit errors in examples where the memory device 210 does not include redundant rows (e.g., does not include the second address space 220).
Additional details corresponding to an entire row being mapped to a different row index 225 may be described in further detail herein, for instance, with reference to FIG. 4 and additional details corresponding to a portion of a row being mapped to a different row index 225 may be described in further detail herein, for instance, with reference to FIG. 5. Additionally, further detail about how a memory device 210 performs the remapping may be described herein, for instance, with reference to FIG. 3. In some examples, the memory device 210 may notify and/or inform the host device 205 of memory constraints during boot up (e.g., the host device may store information obtained during boot up).[0045] FIG. 3 illustrates an example of a process flow 300 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein. Host device 205-a may be an example of a host device 205 as described with reference to FIG. 2 and memory device 210-a may be an example of a memory device 210 as described with reference to FIG. 2.[0046] In some examples, a memory device 210-a may initiate a test operation on a memory array of memory device 210-a. For instance, memory device 210-a may perform one or more of 305, 310, and 315 as part of the test operation. At 305, memory device 210-a may retrieve a set of bits from a first row of an address space of a memory array, the address space addressable by host device 205-a.[0047] At 310, memory device 210-a may determine that the set of bits includes one or more errors. In some examples, the one or more errors may be associated with a retention time of one or more memory cells of the first row. In some examples, the one or more errors may include single bit errors within the set of bits or multi-bit errors within the set of bits. 
If memory device 210-a determines that the set of bits has multi-bit errors, memory device 210-a may determine that the set of bits is uncorrectable and that the first row may not be used for storing data. If memory device 210-a determines that the set of bits has single-bit errors, memory device 210-a may determine that the set of bits is correctable. However, if an additional bit error occurs when retrieving data from or storing data at the first row (e.g., due to a hard error, which may be referred to as a defect, or a soft error, which may be referred to as a neutron or transient error), the data may become uncorrectable. In some examples, single bit errors may occur due to a variable retention time among memory cells of the first row. The retention time among cells may become variable due to stress (e.g., heat) experienced by the memory cells during an associated manufacturing process (e.g., attaching or connecting the memory device 210-a to a board).[0048] If the first row has multi-bit errors or single bit errors during testing, memory device 210-a may retire the first row. For instance, at 315, memory device 210-a may remap at least a portion of the first row from a first row index to a second row index based on determining that the set of bits includes the one or more errors. In some such examples, the second row index, before the remapping, may correspond to a second row within a second address space that is not addressable by host device 205-a prior to the remapping (e.g., a redundant row). However, as described herein, the total quantity of rows in the second address space may be limited or the second address space may not be included. As such, in some examples, the second row index, before the remapping, may correspond to a second row within the address space addressable by host device 205-a.
In some examples, memory device 210-a may remap at least a portion of the second row from the second row index to the first row index based on determining that the set of bits includes the one or more errors.[0049] In some examples, memory device 210-a may retrieve a second set of bits from a third row of the address space of the memory array, may determine that the second set of bits includes one or more second errors, and may remap at least a portion of the third row associated with the one or more second errors from a third row index to a fourth row index based on determining that the second set of bits includes the one or more second errors. In some examples, the fourth row index, before the remapping, corresponds to a fourth row within the address space, where the fourth row index is subsequent to the second row index.[0050] In some examples, memory device 210-a may remap a third row index from a third row of the address space to a fourth row of a second address space, where the second address space includes one or more redundant rows for replacing one or more rows of the address space, where remapping the first row is based on remapping the third row index. In some examples, the first row may include one or more first subarrays and memory device 210-a remapping the portion of the first row may include memory device 210-a remapping each of the one or more first subarrays from the first row index to the second row index. In some examples, the first row may include two or more first subarrays and at least the portion of the first row may be associated with the one or more errors and may include a first of the two or more first subarrays.
In some such examples, memory device 210-a may remap a first subarray of the two or more first subarrays from the first row index to the second row index and may maintain a mapping of a second subarray of the two or more first subarrays to the first row index.[0051] In some examples, each row of the address space is associated with a corresponding row index of a set of row indices and the second row index may have a highest value or lowest value of the set of row indices. In some examples, retrieving the set of bits and/or determining that the set of bits includes the one or more errors may occur as part of the test operation. In some examples, host device 205-a may determine that the set of bits includes the one or more errors. For instance, memory device 210-a may transmit the set of bits to host device 205-a and host device 205-a may transmit an indication to memory device 210-a that the set of bits includes the one or more errors. [0052] At 320, host device 205-a may transmit, to memory device 210-a, a command to access a first logical address associated with the first row index based on remapping the first row from the first row index to the second row index.[0053] At 325, memory device 210-a may access the second row, which may be referred to as a repair row, based on receiving the request to access the first logical address and remapping the second row to the first row index.[0054] FIG. 4 illustrates an example of a bit retiring procedure 400 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein. In some examples, bit retiring procedure 400 may represent a procedure performed by a memory device to remap a row index mapped to a row with bit errors to another row.[0055] A memory device may include a memory array with one or more rows of memory cells that map to one or more corresponding row indices 405. 
For instance, the memory device may include a first row which maps to a first row index 405-a (i.e., Row 2), a second row (e.g., row 415) which maps to a second row index 405-b (i.e., Row 1), and a third row which maps to a third row index 405-c (i.e., Row 0). The row mapped from each row index 405 may be divided into one or more portions which may be referred to as subarrays, where each subarray may have its own respective subarray index 410. For instance, a subarray of a row may map to subarray index 410-a (i.e., Sub 0), subarray index 410-b (i.e., Sub 1), subarray index 410-c (i.e., Sub 2), subarray index 410-d (i.e., Sub 3), subarray index 410-e (i.e., Sub 4), and subarray index 410-f (i.e., Sub 5).

[0056] At stage 402-a, a subarray of row 415 (e.g., the subarray with subarray index 410-c and row index 405-b) may have a bit error (e.g., due to a memory cell of the subarray mapping to subarray index 410-c of the second row having a significantly higher or lower retention time than other memory cells of the second row). At stage 402-b, the memory device may remap row index 405-b (i.e., Row 1) to the first row (e.g., the row at stage 402-a that mapped from row index 405-a) and may remap row index 405-a (i.e., Row 2) to row 415. Generally, for each row that has a subarray with a bit error, the memory device may remap the row index 405 for that row to a set of highest or a set of lowest row indices (e.g., the row indices with the highest or lowest values).

[0057] FIG. 5 illustrates an example of a bit retiring procedure 500 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.
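As a rough illustration of the row-level retiring described for FIG. 4 (stages 402-a and 402-b), the procedure can be modeled as a logical-to-physical remap table in which a row found to have a bit error swaps indices with the row holding the highest not-yet-retired row index. The `RowRemapper` class and its method names are hypothetical and are not taken from the disclosure:

```python
class RowRemapper:
    """Logical-row-index to physical-row remap table (illustrative sketch)."""

    def __init__(self, num_rows):
        # Identity mapping: logical row index -> physical row number.
        self.map = {i: i for i in range(num_rows)}
        self.retired = 0  # count of rows already retired

    def retire_row(self, bad_index):
        """Swap the row at bad_index with the highest not-yet-retired index.

        After the swap, the defective physical row sits at the highest
        remaining logical index, mirroring stage 402-b of FIG. 4.
        """
        top = max(self.map) - self.retired
        self.map[bad_index], self.map[top] = self.map[top], self.map[bad_index]
        self.retired += 1
        return top  # logical index now holding the defective row
```

With three rows as in FIG. 4, retiring the row at logical index 1 (row 415) moves its physical row to logical index 2 and brings the former Row 2 down to index 1, matching the swap shown between stages 402-a and 402-b.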
In some examples, bit retiring procedure 500 may represent a procedure performed by a memory device to remap a row index for a portion of a row (e.g., a subarray) to a corresponding portion of another row.

[0058] A memory device may include a memory array with one or more rows of memory cells that map to one or more corresponding row indices 505. For instance, the memory device may include a first row which maps to a first row index 505-a (i.e., Row 2), a second row which maps to a second row index 505-b (i.e., Row 1), and a third row which maps to a third row index 505-c (i.e., Row 0). The row mapped from each row index 505 may be divided into one or more portions which may be referred to as subarrays, where each subarray may have its own respective subarray index 510. For instance, a subarray of a row may map to subarray index 510-a (i.e., Sub 0), subarray index 510-b (i.e., Sub 1), subarray index 510-c (i.e., Sub 2), subarray index 510-d (i.e., Sub 3), subarray index 510-e (i.e., Sub 4), and subarray index 510-f (i.e., Sub 5).

[0059] At stage 502-a, subarray 515 of the second row (e.g., the row which maps from row index 505-b at stage 502-a) and subarray 520 of the third row (e.g., the row which maps from row index 505-c at stage 502-a) may have bit errors (e.g., due to a memory cell of subarray 515 and a memory cell of subarray 520 having a significantly higher or lower retention time than other memory cells of the second row and the third row, respectively). At stage 502-a, subarray 515 may map to row index 505-b and subarray index 510-c, and subarray 520 may map to row index 505-c and subarray index 510-a. At stage 502-b, the memory device may remap subarray 520 of the third row and subarray 515 of the second row to row index 505-a. In some such examples, subarrays 515 and 520 may have the same subarray indices as at stage 502-a (e.g., subarray 515 may still map to subarray index 510-c and subarray 520 may still map to subarray index 510-a).
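The subarray-granular remapping of FIG. 5 can be sketched with a per-(row, subarray) remap table, so that only the defective subarray of a row is exchanged with the matching subarray of the row at the retiring index while the row's other subarrays keep their mappings. This is a sketch under assumed names, not the disclosed implementation:

```python
def retire_subarray(submap, bad_row, sub, spare_row):
    """Swap one (row, subarray) mapping with the same subarray of spare_row.

    submap maps (logical_row, subarray) -> (physical_row, subarray). Only
    the defective subarray moves, mirroring stage 502-b of FIG. 5; the
    swap also hands the displaced spare subarray to the defective row.
    """
    a, b = (bad_row, sub), (spare_row, sub)
    submap[a], submap[b] = submap[b], submap[a]

# Setup mirroring FIG. 5: three rows (Row 0..Row 2), six subarrays each.
submap = {(r, s): (r, s) for r in range(3) for s in range(6)}
retire_subarray(submap, bad_row=1, sub=2, spare_row=2)  # subarray 515 (Row 1, Sub 2)
retire_subarray(submap, bad_row=0, sub=0, spare_row=2)  # subarray 520 (Row 0, Sub 0)
```

After both calls, the highest row index maps the two defective subarrays, the displaced subarrays of the former highest row serve rows 0 and 1, and every other subarray mapping is unchanged.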
Additionally, the memory device may remap a first subarray of the first row (e.g., the subarray at subarray index 510-a of the row which maps from row index 505-a at stage 502-a) to row index 505-c and may remap a second subarray of the first row (e.g., the subarray at subarray index 510-c of the row which maps from row index 505-a at stage 502-a) to row index 505-b. Generally, for each row that has a subarray with a bit error, the memory device may remap the subarray to one of a set of highest row indices or a set of lowest row indices (e.g., the row indices with the highest or lowest values).

[0060] FIG. 6 illustrates an example of a process flow 600 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein. Host device 205-b may be an example of a host device 205 as described with reference to FIG. 2 and memory device 210-b may be an example of a memory device 210 as described with reference to FIG. 2.

[0061] In some examples, memory device 210-b may initiate a test operation on a memory array of memory device 210-b. As part of the test operation, memory device 210-b may identify rows of the memory array that have errors. For instance, memory device 210-b may retrieve a set of bits from some or all rows of the memory array and may determine whether each set of bits includes one or more errors (e.g., multi-bit errors or single bit errors). If a row has errors during testing, memory device 210-b may retire the row. For instance, memory device 210-b may remap at least a portion of the row from a first row index to a second row index based on determining that the set of bits includes the one or more errors, where the second row index may correspond to a second row within a second address space that is not addressable by host device 205-b prior to the remapping. However, as described herein, the total quantity of rows in the second address space may be limited or the second address space may not be included.
As such, in some examples, memory device 210-b may identify a set of row indices that each include the one or more respective errors as part of the test operation. For instance, memory device 210-b may add a row index corresponding to the row to a list of row indices that map to rows with errors.

[0062] At 605, host device 205-b may transmit, to memory device 210-b, a first command to access a first logical address of a memory array that is associated with a first row index.

[0063] At 610, memory device 210-b may determine that a first row associated with the first row index includes one or more errors based on receiving the first command. In some examples, memory device 210-b may identify the first row index from a set of row indices (e.g., the list generated during the test operation), where each row index of the set of row indices is associated with a corresponding row that includes one or more errors. In some such examples, determining that the first row includes the one or more errors may be based on identifying the first row index from the set of row indices. In some examples, identifying the first row index is based on initiating the test operation and identifying the set of row indices. In some examples, memory device 210-b may refrain from accessing the first logical address of the memory array based on determining that the first row includes the one or more errors. In some examples, the one or more errors may be associated with a retention time (e.g., a variable retention time) of one or more memory cells of the first row.

[0064] At 615, memory device 210-b may transmit, to host device 205-b, a signal indicating that the first row includes the one or more errors based on determining that the first row includes the one or more errors. In some examples, the signal may further indicate for the host device to request access to a second row index associated with a second row of the memory array.
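The check-and-signal behavior at 605 through 615 might look like the following sketch, where `bad_rows` stands in for the list of row indices built during the test operation. The function name and response fields are assumptions for illustration, not part of the disclosure:

```python
def handle_access(row_index, bad_rows, rows):
    """Device-side handling of an access command (illustrative sketch).

    If the addressed row is on the bad-row list built during the test
    operation, the access is not performed (610); instead an error
    signal naming the row is returned for the host (615).
    """
    if row_index in bad_rows:
        return {"error": True, "row": row_index}
    return {"error": False, "data": rows[row_index]}

bad_rows = {1}                                 # built during the test operation
rows = ["row0-bits", "row1-bits", "row2-bits"]
resp = handle_access(1, bad_rows, rows)        # host would then retry another row
```

On receiving the error response, the host would issue a second command to a different logical address rather than retrying the retired row.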
In some examples, the first logical address may be refrained from being accessed (e.g., by memory device 210-b) between a first time associated with receiving the command (e.g., at 610) and a second time associated with transmitting the signal (e.g., at 615).

[0065] At 620, host device 205-b may transmit, to memory device 210-b, a second command to access a second logical address of the memory array that is associated with the second row index based on transmitting the signal (e.g., at 615).

[0066] At 625, memory device 210-b may access the second logical address of the memory array.

[0067] FIG. 7 shows a block diagram 700 of a memory device 720 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein. The memory device 720 may be an example of aspects of a memory device as described with reference to FIGs. 1 through 6. The memory device 720, or various components thereof, may be an example of means for performing various aspects of bit retiring to mitigate single bit errors as described herein. For example, the memory device 720 may include a bit retriever 725, an error determination component 730, a remapping component 735, a command receiver 740, an error indication transmitter 745, a test operation initiating component 750, a row accessing component 755, a row index identifier 760, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).

[0068] The bit retriever 725 may be configured as or otherwise support a means for retrieving a set of bits from a first row of an address space of a memory array, the address space addressable by a host device. The error determination component 730 may be configured as or otherwise support a means for determining that the set of bits includes one or more errors.
The remapping component 735 may be configured as or otherwise support a means for remapping at least a portion of the first row from a first row index to a second row index based at least in part on determining that the set of bits includes the one or more errors, where the second row index, before the remapping, corresponds to a second row within the address space addressable by the host device.

[0069] In some examples, the remapping component 735 may be configured as or otherwise support a means for remapping at least a portion of the second row from the second row index to the first row index based at least in part on determining that the set of bits includes the one or more errors.

[0070] In some examples, the command receiver 740 may be configured as or otherwise support a means for receiving, from the host device, a command to access a first logical address associated with the first row index based at least in part on remapping the first row from the first row index to the second row index. In some examples, the row accessing component 755 may be configured as or otherwise support a means for accessing the second row based at least in part on receiving the request to access the first logical address and remapping the second row to the first row index.

[0071] In some examples, the bit retriever 725 may be configured as or otherwise support a means for retrieving a second set of bits from a third row of the address space of the memory array. In some examples, the error determination component 730 may be configured as or otherwise support a means for determining that the second set of bits includes one or more second errors.
In some examples, the remapping component 735 may be configured as or otherwise support a means for remapping at least a portion of the third row associated with the one or more second errors from a third row index to a fourth row index based at least in part on determining that the second set of bits includes the one or more second errors, where the fourth row index, before the remapping, corresponds to a fourth row within the address space, and where the fourth row index is subsequent to the second row index.

[0072] In some examples, the remapping component 735 may be configured as or otherwise support a means for remapping a third row index from a third row of the address space to a fourth row of a second address space, where the second address space includes one or more redundant rows for replacing one or more rows of the address space, where remapping the first row is based at least in part on remapping the third row index.

[0073] In some examples, the first row includes one or more first subarrays. In some examples, the remapping component 735 remapping the portion of the first row further includes the remapping component 735 remapping each of the one or more first subarrays from the first row index to the second row index.

[0074] In some examples, the first row includes two or more first subarrays. In some examples, the at least the portion of the first row is associated with the one or more errors and includes a first of the two or more first subarrays. In some examples, the remapping component 735 may remap a first subarray of the two or more first subarrays from the first row index to the second row index.
In some examples, the remapping component 735 may maintain a mapping of a second subarray of the two or more first subarrays to the first row index.

[0075] In some examples, the one or more errors are associated with a retention time of one or more memory cells of the first row.

[0076] In some examples, each row of the address space is associated with a corresponding row index of a set of row indices. In some examples, the second row index has a highest value or lowest value of the set of row indices.

[0077] In some examples, the test operation initiating component 750 may be configured as or otherwise support a means for initiating a test operation on the memory array, where retrieving the set of bits and determining that the set of bits includes the one or more errors occurs as part of the test operation.

[0078] In some examples, the one or more errors include single bit errors within the set of bits or multi-bit errors within the set of bits.

[0079] In some examples, a memory device including the memory array or the host device determines that the set of bits includes one or more errors.

[0080] The command receiver 740 may be configured as or otherwise support a means for receiving, from a host device, a first command to access a first logical address of a memory array that is associated with a first row index. In some examples, the error determination component 730 may be configured as or otherwise support a means for determining that a first row associated with the first row index includes one or more errors based at least in part on receiving the first command.
The error indication transmitter 745 may be configured as or otherwise support a means for transmitting, to the host device, a signal indicating that the first row includes the one or more errors based at least in part on determining that the first row includes the one or more errors.

[0081] In some examples, the signal further indicates for the host device to request access to a second row index associated with a second row of the memory array.

[0082] In some examples, the command receiver 740 may be configured as or otherwise support a means for receiving, from the host device, a second command to access a second logical address of the memory array that is associated with the second row index based at least in part on transmitting the signal.

[0083] In some examples, the row accessing component 755 may be configured as or otherwise support a means for refraining from accessing the first logical address of the memory array based at least in part on determining that the first row includes the one or more errors.

[0084] In some examples, the first logical address is refrained from being accessed between a first time associated with receiving the command and a second time associated with transmitting the signal.

[0085] In some examples, the row index identifier 760 may be configured as or otherwise support a means for identifying the first row index from a set of row indices, where each row index of the set of row indices is associated with a corresponding row that includes one or more respective errors, where determining that the first row includes the one or more errors is based at least in part on identifying the first row index from the set of row indices.

[0086] In some examples, the test operation initiating component 750 may be configured as or otherwise support a means for initiating a test operation on the memory array.
In some examples, the row index identifier 760 may be configured as or otherwise support a means for identifying the set of row indices that each include the one or more respective errors as part of the test operation, where identifying the first row index is based at least in part on initiating the test operation and identifying the set of row indices.

[0087] In some examples, the one or more errors are associated with a retention time of one or more memory cells of the first row.

[0088] FIG. 8 shows a flowchart illustrating a method 800 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein. The operations of method 800 may be implemented by a memory device or its components as described herein. For example, the operations of method 800 may be performed by a memory device as described with reference to FIGs. 1 through 7. In some examples, a memory device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory device may perform aspects of the described functions using special-purpose hardware.

[0089] At 805, the method may include retrieving a set of bits from a first row of an address space of a memory array, the address space addressable by a host device. The operations of 805 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 805 may be performed by a bit retriever 725 as described with reference to FIG. 7.

[0090] At 810, the method may include determining that the set of bits includes one or more errors. The operations of 810 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 810 may be performed by an error determination component 730 as described with reference to FIG. 7.

[0091] At 815, the method may include remapping at least a portion of the first row from a first row index to a second row index based at least in part on determining that the set of bits includes the one or more errors, where the second row index, before the remapping, corresponds to a second row within the address space addressable by the host device. The operations of 815 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 815 may be performed by a remapping component 735 as described with reference to FIG. 7.

[0092] In some examples, an apparatus as described herein may perform a method or methods, such as the method 800. The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for retrieving a set of bits from a first row of an address space of a memory array, the address space addressable by a host device, determining that the set of bits includes one or more errors, and remapping at least a portion of the first row from a first row index to a second row index based at least in part on determining that the set of bits includes the one or more errors, where the second row index, before the remapping, corresponds to a second row within the address space addressable by the host device.

[0093] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for remapping at least a portion of the second row from the second row index to the first row index based at least in part on determining that the set of bits includes the one or more errors.

[0094] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for receiving, from the host device, a command to access a first logical address associated with the first
row index based at least in part on remapping the first row from the first row index to the second row index and accessing the second row based at least in part on receiving the request to access the first logical address and remapping the second row to the first row index.

[0095] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for retrieving a second set of bits from a third row of the address space of the memory array, determining that the second set of bits includes one or more second errors, and remapping at least a portion of the third row associated with the one or more second errors from a third row index to a fourth row index based at least in part on determining that the second set of bits includes the one or more second errors, where the fourth row index, before the remapping, corresponds to a fourth row within the address space, and where the fourth row index may be subsequent to the second row index.

[0096] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for remapping a third row index from a third row of the address space to a fourth row of a second address space, where the second address space includes one or more redundant rows for replacing one or more rows of the address space, where remapping the first row may be based at least in part on remapping the third row index.

[0097] In some examples of the method 800 and the apparatus described herein, the first row includes one or more first subarrays and remapping the portion of the first row further includes remapping each of the one or more first subarrays from the first row index to the second row index.
[0098] In some examples of the method 800 and the apparatus described herein, the first row includes two or more first subarrays, the at least the portion of the first row may be associated with the one or more errors and includes a first of the two or more first subarrays, and the method 800 and apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for remapping a first subarray of the two or more first subarrays from the first row index to the second row index, and maintaining a mapping of a second subarray of the two or more first subarrays to the first row index.

[0099] In some examples of the method 800 and the apparatus described herein, the one or more errors may be associated with a retention time of one or more memory cells of the first row.

[0100] In some examples of the method 800 and the apparatus described herein, each row of the address space may be associated with a corresponding row index of a set of row indices and the second row index may have a highest value or lowest value of the set of row indices.

[0101] Some examples of the method 800 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for initiating a test operation on the memory array, where retrieving the set of bits and determining that the set of bits includes the one or more errors occurs as part of the test operation.

[0102] In some examples of the method 800 and the apparatus described herein, the one or more errors include single bit errors within the set of bits or multi-bit errors within the set of bits.

[0103] In some examples of the method 800 and the apparatus described herein, a memory device including the memory array or the host device determines that the set of bits includes one or more errors.

[0104] FIG. 9 shows a flowchart illustrating a method 900 that supports bit retiring to mitigate bit errors in accordance with examples as disclosed herein.
The operations of method 900 may be implemented by a memory device or its components as described herein. For example, the operations of method 900 may be performed by a memory device as described with reference to FIGs. 1 through 7. In some examples, a memory device may execute a set of instructions to control the functional elements of the device to perform the described functions. Additionally or alternatively, the memory device may perform aspects of the described functions using special-purpose hardware.

[0105] At 905, the method may include receiving, from a host device, a first command to access a first logical address of a memory array that is associated with a first row index. The operations of 905 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 905 may be performed by a command receiver 740 as described with reference to FIG. 7.

[0106] At 910, the method may include determining that a first row associated with the first row index includes one or more errors based at least in part on receiving the first command. The operations of 910 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 910 may be performed by an error determination component 730 as described with reference to FIG. 7.

[0107] At 915, the method may include transmitting, to the host device, a signal indicating that the first row includes the one or more errors based at least in part on determining that the first row includes the one or more errors. The operations of 915 may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of 915 may be performed by an error indication transmitter 745 as described with reference to FIG. 7.

[0108] In some examples, an apparatus as described herein may perform a method or methods, such as the method 900.
The apparatus may include features, circuitry, logic, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for receiving, from a host device, a first command to access a first logical address of a memory array that is associated with a first row index, determining that a first row associated with the first row index includes one or more errors based at least in part on receiving the first command, and transmitting, to the host device, a signal indicating that the first row includes the one or more errors based at least in part on determining that the first row includes the one or more errors.

[0109] In some examples of the method 900 and the apparatus described herein, the signal further indicates for the host device to request access to a second row index associated with a second row of the memory array.

[0110] Some examples of the method 900 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for receiving, from the host device, a second command to access a second logical address of the memory array that may be associated with the second row index based at least in part on transmitting the signal.

[0111] Some examples of the method 900 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for refraining from accessing the first logical address of the memory array based at least in part on determining that the first row includes the one or more errors.

[0112] In some examples of the method 900 and the apparatus described herein, the first logical address may be refrained from being accessed between a first time associated with receiving the command and a second time associated with transmitting the signal.

[0113] Some examples of the method 900 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for
identifying the first row index from a set of row indices, where each row index of the set of row indices may be associated with a corresponding row that includes one or more respective errors, where determining that the first row includes the one or more errors may be based at least in part on identifying the first row index from the set of row indices.

[0114] Some examples of the method 900 and the apparatus described herein may further include operations, features, circuitry, logic, means, or instructions for initiating a test operation on the memory array and identifying the set of row indices that each include the one or more respective errors as part of the test operation, where identifying the first row index may be based at least in part on initiating the test operation and identifying the set of row indices.

[0115] In some examples of the method 900 and the apparatus described herein, the one or more errors may be associated with a retention time of one or more memory cells of the first row.

[0116] It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined.

[0117] Another apparatus is described.
The apparatus may include a memory array, where the memory array includes an array of memory cells that each include capacitive storage elements, and a circuit coupled with the memory array and configured to cause the apparatus to: retrieve a set of bits from a first row of an address space of the memory array, where the address space is addressable by a host device, determine that the set of bits includes one or more errors, and remap at least a portion of the first row from a first row index to a second row index based at least in part on determining that the set of bits includes the one or more errors, where the second row index, before the remapping, corresponds to a second row within the address space addressable by the host device.

[0118] In some examples of the apparatus, the circuit may be further configured to cause the apparatus to remap at least a portion of the second row from the second row index to the first row index based at least in part on determining that the set of bits includes the one or more errors.

[0119] In some examples, the circuit may be further configured to cause the apparatus to receive, from the host device, a command to access a first logical address associated with the first row index based at least in part on remapping the first row from the first row index to the second row index and to access the second row based at least in part on receiving the request to access the first logical address and remapping the second row to the first row index.

[0120] In some examples, the circuit may be further configured to cause the apparatus to retrieve a second set of bits from a third row of the address space of the memory array, determine that the second set of bits includes one or more second errors, and to remap at least a portion of the third row associated with the one or more second errors from a third row index to a fourth row index based at least in part on determining that the second set of bits includes the one or more second errors,
where the fourth row index, before the remapping, corresponds to a fourth row within the address space, and where the fourth row index may be subsequent to the second row index.

[0121] In some examples of the apparatus, the circuit may be further configured to cause the apparatus to remap a third row index from a third row of the address space to a fourth row of a second address space, where the second address space includes one or more redundant rows for replacing one or more rows of the address space, where remapping the first row may be based at least in part on remapping the third row index.

[0122] In some examples of the apparatus, the first row includes one or more first subarrays and remapping the portion of the first row may further include the circuit being configured to cause the apparatus to remap each of the one or more first subarrays from the first row index to the second row index.

[0123] In some examples of the apparatus, the first row includes two or more first subarrays, the at least the portion of the first row may be associated with the one or more errors and includes a first of the two or more first subarrays, and the circuit may be further configured to cause the apparatus to: remap a first subarray of the two or more first subarrays from the first row index to the second row index, and maintain a mapping of a second subarray of the two or more first subarrays to the first row index.

[0124] In some examples of the apparatus, the one or more errors may be associated with a retention time of one or more memory cells of the first row.

[0125] Another apparatus is described.
The apparatus may include a memory array, where the memory array includes an array of memory cells that each include capacitive storage elements, and a circuit coupled with the memory array and configured to cause the apparatus to: receive, from a host device, a first command to access a first logical address of the memory array that is associated with a first row index; determine that a first row associated with the first row index includes one or more errors based at least in part on receiving the first command; and transmit, to the host device, a signal indicating that the first row includes the one or more errors based at least in part on determining that the first row includes the one or more errors.

[0126] In some examples of the apparatus, the signal further indicates for the host device to request access to a second row index associated with a second row of the memory array.

[0127] In some examples, the circuit may be further configured to cause the apparatus to receive, from the host device, a second command to access a second logical address of the memory array that may be associated with the second row index based at least in part on transmitting the signal.

[0128] In some examples, the circuit may be further configured to cause the apparatus to refrain from accessing the first logical address of the memory array based at least in part on determining that the first row includes the one or more errors.
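The error-signaling flow of this second apparatus (track known-bad row indices, refrain from accessing them, and tell the host which row to request instead) can be sketched as below. The data-structure and function names are illustrative assumptions.

```python
# Sketch of the host-signaling flow: the device keeps a set of row
# indices known to contain errors; on an access command naming one
# of them, it refrains from the access and signals the host to
# request a different row instead.

BAD_ROWS = {3, 7}            # row indices with detected errors
SPARE_FOR = {3: 9, 7: 10}    # suggested replacement row per bad row

def handle_access(row_index):
    if row_index in BAD_ROWS:
        # Refrain from accessing; indicate the row to request instead.
        return {"status": "error", "use_row": SPARE_FOR[row_index]}
    return {"status": "ok", "row": row_index}

assert handle_access(3) == {"status": "error", "use_row": 9}
assert handle_access(4) == {"status": "ok", "row": 4}
```

Membership in the bad-row set here stands in for the identification step of paragraph [0129].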
[0129] In some examples, the circuit may be further configured to cause the apparatus to identify the first row index from a set of row indices, where each row index of the set of row indices may be associated with a corresponding row that includes one or more respective errors, and where determining that the first row includes the one or more errors may be based at least in part on identifying the first row index from the set of row indices.

[0130] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, the signal may represent a bus of signals, where the bus may have a variety of bit widths.

[0131] The terms "electronic communication," "conductive contact," "connected," and "coupled" may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components.
The conductive path between connected components may be a direct conductive path between the components, or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some examples, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.

[0132] The term "coupling" refers to the condition of moving from an open-circuit relationship between components, in which signals are not presently capable of being communicated between the components over a conductive path, to a closed-circuit relationship between components, in which signals are capable of being communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.

[0133] The term "isolated" refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.

[0134] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some examples, the substrate is a semiconductor wafer.
In other examples, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.

[0135] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be "on" or "activated" when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate.
The transistor may be "off" or "deactivated" when a voltage less than the transistor's threshold voltage is applied to the transistor gate.

[0136] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term "exemplary" used herein means "serving as an example, instance, or illustration," and not "preferred" or "advantageous over other examples." The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.

[0137] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

[0138] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these.
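The gate-threshold rule of paragraph [0135] (an n-type FET conducts when the gate voltage reaches the threshold; a p-type FET conducts for gate voltages at or below its negative threshold) can be illustrated with a simple switch model. This is a textbook idealization, not circuitry from the disclosure.

```python
# Idealized switch model of the FET activation rule: an n-type FET
# is "on" when Vgs >= Vth; a p-type FET (negative threshold) is
# "on" when Vgs <= Vth. Real devices conduct gradually near Vth.

def fet_is_on(v_gs, v_th, channel="n"):
    if channel == "n":
        return v_gs >= v_th
    return v_gs <= v_th  # p-type: a sufficiently negative gate voltage activates

assert fet_is_on(1.2, 0.7, "n") is True    # above threshold: activated
assert fet_is_on(0.3, 0.7, "n") is False   # below threshold: deactivated
assert fet_is_on(-1.0, -0.7, "p") is True  # p-type turns on with negative Vgs
```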
Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.

[0139] For example, the various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

[0140] As used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as "based on condition A" may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."

[0141] Computer-readable media includes both non-transitory computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another.
A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

[0142] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure.
Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
The utility model relates to an integrated device package. An object of the utility model is to provide an integrated device package. The integrated device package includes: a package substrate; a thermoelectric generator ("TEG") device electrically connected to the package substrate, the TEG device being configured to convert thermal energy into electrical current; and a magnet disposed on a front side of the TEG device, the magnet being configured to attach to a heat source and define a thermally conductive path between the heat source and the TEG device. Embodiments of the utility model solve at least one of the technical problems described herein and achieve corresponding advantageous effects.
1. An integrated device package, comprising: a package substrate; a thermoelectric generator ("TEG") device electrically connected to the package substrate, the TEG device being configured to convert thermal energy into electrical current; and a magnet disposed on a front side of the TEG device, the magnet being configured to attach to a heat source and define a thermally conductive path between the heat source and the TEG device.
2. The package of claim 1, further comprising a sensor chip mounted to the package substrate and in electrical communication with the TEG device, the TEG device being configured to provide power to the sensor chip.
3. The package of claim 2, further comprising a transmitter mounted to the package substrate and in electrical communication with the sensor chip, the transmitter being configured to wirelessly transmit data obtained by the sensor to an external device.
4. The package of claim 1, further comprising a heat sink attached to a back side of the TEG device.
5. The package of claim 1, further comprising a thermal interface element disposed along the thermally conductive path between the TEG device and the magnet, the thermal interface element being configured to reduce stress transmitted to the front side of the TEG device.
6. The package of claim 1, wherein the package substrate comprises an aperture in which the TEG device is disposed.
7. The package of claim 1, further comprising a strap having one or more magnets, the strap being configured to surround at least a portion of the heat source to connect the integrated device package to the heat source.
8. An integrated device package, comprising: a package substrate including an aperture; a thermoelectric generator ("TEG") device positioned in the aperture and electrically connected to the package substrate, the TEG device being configured to convert thermal energy into electrical current; and a thermally conductive element disposed on a first side of the TEG device, the thermally conductive element being configured to define a thermally conductive path between a heat source and the TEG device.
9. The package of claim 8, wherein the thermally conductive element comprises a magnet.
10. The package of claim 8, further comprising a sensor chip mounted to the package substrate and in electrical communication with the TEG device, the TEG device being configured to provide power to the sensor chip.
11. The package of claim 10, further comprising a transmitter mounted to the package substrate and in electrical communication with the sensor chip, the transmitter being configured to wirelessly transmit data obtained by the sensor to an external device.
12. The package of claim 8, further comprising a heat sink attached to a second side of the TEG device.
13. An integrated device package, comprising: a first thermally conductive element; a second thermally conductive element; a package substrate disposed between the first and second thermally conductive elements; and a thermoelectric generator ("TEG") device disposed between the first and second thermally conductive elements and electrically connected to the package substrate, the TEG device being configured to generate electrical power from thermal energy based on a temperature difference between the first and second thermally conductive elements.
14. The package of claim 13, wherein the first thermally conductive element comprises a magnet and the second thermally conductive element comprises a heat sink.
15. The package of claim 13, wherein the package substrate comprises an aperture in which the TEG device is positioned.
16. The package of claim 15, further comprising a second TEG device disposed in a second aperture of the package substrate adjacent the TEG device.
17. The package of claim 13, further comprising a thermal interface element disposed between the TEG device and the first thermally conductive element, the thermal interface element being configured to reduce stress transmitted to a front side of the TEG device.
18. The package of claim 13, further comprising a housing disposed about the first thermally conductive element, the housing mechanically coupling to the second thermally conductive element.
19. The package of claim 13, further comprising an attachment mechanism configured to mechanically connect the integrated device package to a heat source.
20. The package of claim 13, further comprising: a sensor chip mounted to the package substrate and in electrical communication with the TEG device, the TEG device being configured to provide power to the sensor chip; and a transmitter mounted to the package substrate and in electrical communication with the sensor chip, the transmitter being configured to wirelessly transmit data obtained by the sensor to an external device.
Integrated Device Package

Technical Field

The field of the invention relates to integrated device packages, and in particular to integrated device packages including thermoelectric generator (TEG) devices.

Background

Integrated device packages can be used in a variety of larger electronic systems to provide sensors, transducers, processors, memory devices, or other types of devices for use in a variety of environments. In some environments, providing power and/or electrical communication between an integrated device package (or a larger electronic system) and an external device disposed in another environment or location can be challenging. For example, in some systems, providing power or communication lines between an integrated device package and an external device may be economically or technically inefficient or physically challenging. Using batteries to power these devices can result in critical downtime, during which the packaged device cannot operate, between depletion and recharging or battery replacement. Therefore, there remains a need for improved integrated device packages for use in different environments.

Summary of the Utility Model

It is an object of the present utility model to provide an integrated device package.

In one embodiment, an integrated device package is disclosed. The integrated device package can include a package substrate and a thermoelectric generator ("TEG") device electrically connected to the package substrate, the TEG device being configured to convert thermal energy into electrical current.
A magnet can be disposed on a front side of the TEG device, the magnet being configured to attach to a heat source and define a thermally conductive path between the heat source and the TEG device.

In another embodiment, an integrated device package can include: a package substrate including an aperture; and a thermoelectric generator ("TEG") device positioned in the aperture and electrically connected to the package substrate, the TEG device being configured to convert thermal energy into electrical current. A thermally conductive element can be disposed on a first side of the TEG device, the thermally conductive element being configured to define a thermally conductive path between a heat source and the TEG device.

In another embodiment, the integrated device package can include a first thermally conductive element and a second thermally conductive element. The package can include a package substrate disposed between the first and second thermally conductive elements. A thermoelectric generator ("TEG") device can be disposed between the first and second thermally conductive elements and electrically connected to the package substrate.
The TEG device can be configured to generate electrical power from thermal energy based on a temperature difference between the first and second thermally conductive elements.

In accordance with one aspect of the present utility model, an integrated device package is provided comprising: a package substrate; a thermoelectric generator ("TEG") device electrically connected to the package substrate, the TEG device being configured to convert thermal energy into electrical current; and a magnet disposed on a front side of the TEG device, the magnet being configured to attach to a heat source and define a thermally conductive path between the heat source and the TEG device.

Preferably, the integrated device package further includes a sensor chip mounted to the package substrate and in electrical communication with the TEG device, the TEG device being configured to provide power to the sensor chip.

Preferably, the integrated device package further includes a transmitter mounted to the package substrate and in electrical communication with the sensor chip, the transmitter being configured to wirelessly transmit data obtained by the sensor to an external device.

Preferably, the integrated device package further includes a heat sink attached to a back side of the TEG device.

Advantageously, the integrated device package further includes a thermal interface element disposed along the thermally conductive path between the TEG device and the magnet, the thermal interface element being configured to reduce stress transmitted to the front side of the TEG device.

Preferably, the package substrate includes an aperture in which the TEG device is disposed.

Preferably, the integrated device package further includes a strap having one or more magnets, the strap being configured to surround at least a portion of the heat source to connect the integrated device package to the heat source.

According to another aspect of the present utility model, there is provided an integrated device package comprising: a package substrate including an aperture; a thermoelectric generator ("TEG") device positioned in the aperture and electrically connected to the package substrate, the TEG device being configured to convert thermal energy into electrical current; and a thermally conductive element disposed on a first side of the TEG device, the thermally conductive element being configured to define a thermally conductive path between a heat source and the TEG device.

Preferably, the thermally conductive element comprises a magnet.

Preferably, the integrated device package further includes a sensor chip mounted to the package substrate and in electrical communication with the TEG device, the TEG device being configured to provide power to the sensor chip.

Preferably, the integrated device package further includes a transmitter mounted to the package substrate and in electrical communication with the sensor chip, the transmitter being configured to wirelessly transmit data obtained by the sensor to an external device.

Preferably, the integrated device package further includes a heat sink attached to a second side of the TEG device.

According to still another aspect of the present utility model, there is provided an integrated device package comprising: a first thermally conductive element; a second thermally conductive element; a package substrate disposed between the first and second thermally conductive elements; and a thermoelectric generator ("TEG") device disposed between the first and second thermally conductive elements and electrically connected to the package substrate, the TEG device being configured to generate electrical power from thermal energy based on a temperature difference between the first and second thermally conductive elements.

Preferably, the first thermally conductive element comprises a magnet and the second thermally conductive element comprises a heat sink.

Preferably, the package substrate includes an aperture in which the TEG device is positioned.

Preferably, the integrated device package further includes a second TEG device disposed in a second aperture of the package substrate adjacent the TEG device.

Preferably, the integrated device package further includes a thermal interface element disposed between the TEG device and the first thermally conductive element, the thermal interface element being configured to reduce stress transmitted to a front side of the TEG device.

Preferably, the integrated device package further includes a housing disposed about the first thermally conductive element, the housing mechanically coupling to the second thermally conductive element.

Preferably, the integrated device package further includes an attachment mechanism configured to mechanically connect the integrated device package to the heat source.

Advantageously, the integrated device package further comprises: a sensor chip mounted to the package substrate and in electrical communication with the TEG device, the TEG device being configured to provide power to the sensor chip; and a transmitter mounted to the package substrate and in electrical communication with the sensor chip, the transmitter being configured to wirelessly transmit data obtained by the sensor to an external device.

Embodiments solve at least one of the technical problems described herein and achieve the corresponding advantageous effects of the present utility model.

The details of one or more implementations of the subject matter described in this specification are set forth in the drawings and the description below. Other features, aspects, and advantages will be apparent from the description, drawings, and claims.
Please note that the relative dimensions of the figures below may not be drawn to scale.

Brief Description of the Drawings

Specific implementations of the present utility model will now be described with reference to the following drawings, which are provided by way of illustration and not limitation.

FIG. 1 is a schematic side cross-sectional view of an integrated device package having a thermoelectric generator device and connected to a heat source, in accordance with various embodiments.

FIG. 2 is an enlarged front cross-sectional view of the integrated device package of FIG. 1.

FIG. 3 is a partial exploded perspective view of the integrated device package shown in FIGS. 1 and 2.

FIG. 4 is a schematic side view of the integrated device package shown in FIGS. 1-3.

FIG. 5 is a top plan view of the integrated device package shown in FIGS. 1-4.

FIG. 6 is a schematic side cross-sectional view of an integrated device package connected to a plurality of heat sources, in accordance with another embodiment.

FIG. 7 is a schematic front and bottom isometric view of an integrated device package attached to a strap configured to mount the package to a heat source.

FIG. 8 is a schematic side cross-sectional view of an integrated device package having a thermoelectric generator device and connected to a heat source, in accordance with another embodiment.

FIG. 9 is an enlarged front cross-sectional view of the integrated device package of FIG. 8.

FIG. 10 is a schematic isometric exploded and inverted view of a portion of the integrated device package shown in FIGS. 8 and 9.

FIG. 11 is a schematic side elevational view of the integrated device package illustrated in FIGS. 8-10.

FIG. 12 is a top plan view of the integrated device package shown in FIGS. 8-11.

FIG. 13 is a schematic side cross-sectional view of an integrated device package connected to a plurality of heat sources, in accordance with another embodiment.

FIG. 14 is a schematic front and bottom isometric view of an integrated device package attached to a strap configured to mount the package to a heat source.

Detailed Description

Various embodiments disclosed herein relate to integrated device packages that include one or more thermoelectric generator ("TEG") devices. A TEG device generates a current from thermal energy based on a temperature difference (ΔT) between a first side of the TEG device (e.g., the hot side of the TEG device) and a second side of the TEG device (e.g., the cold side of the TEG device). In various TEG devices, the larger the temperature difference ΔT, the greater the electrical energy the TEG may generate. Embodiments disclosed herein may utilize TEG devices with high-temperature heat sources, such as steam pipes, radioactive elements (such as those used in space probes), tailpipes or engines of automobiles, and the like. Embodiments disclosed herein may be configured to monitor vibration of a steam pipe or boiler wall in a power plant, monitor vibration of a water pump in a water treatment plant, and perform any other suitable sensing application. One challenge in manufacturing an efficient thermoelectric generator system is to provide high thermal conductivity between the first and second sides of the TEG device (e.g., between the hot side and the cold side of the TEG), as well as to maintain a large ΔT throughout the operation of the system. Various embodiments disclosed herein provide an integrated device package having a TEG device that can operate over a wide range of temperature differences ΔT, and that may be particularly beneficial for systems with relatively small temperature differences between the first side and the second side of the TEG device. Embodiments disclosed herein can also provide very low thermal resistance to reduce heat loss in the system.

Embodiments disclosed herein may be beneficial for electronic systems having sensors that operate for relatively long periods of time and/or for multiple series of measurements without replacement.
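The relationship noted above, that a larger ΔT yields more electrical energy, follows the standard Seebeck-effect model. The sketch below uses that generic textbook model with illustrative parameter values; none of the numbers or names come from the disclosure.

```python
# Generic Seebeck-effect estimate: open-circuit voltage
# V = n * S * dT for n thermocouples with Seebeck coefficient S,
# and matched-load output power P = V**2 / (4 * R_internal).

def teg_power(n_couples, seebeck_v_per_k, delta_t_k, internal_ohm):
    v_oc = n_couples * seebeck_v_per_k * delta_t_k
    return v_oc ** 2 / (4.0 * internal_ohm)

# A larger temperature difference yields more power, as the text notes.
p_small = teg_power(127, 200e-6, 10.0, 2.0)   # dT = 10 K
p_large = teg_power(127, 200e-6, 40.0, 2.0)   # dT = 40 K
assert p_large > p_small
```

Because power scales with the square of ΔT in this model, preserving the temperature difference across the TEG (rather than losing it in parasitic thermal resistance) matters disproportionately.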
Embodiments disclosed herein are also particularly beneficial for systems that are used in remote and/or inaccessible places, where power sources may not be easily accessible and/or replacement of power sources may be difficult. The integrated device package described herein can be mechanically and thermally coupled to a support structure that can serve as a heat source for the package. For example, a support structure or heat source (such as a steam pipe) at an elevated temperature can act as the heat source for the integrated device package and the TEG device. Thermal energy from the support structure or heat source can be converted to current by the TEG device. Current generated by the TEG device can be provided to power one or more integrated device dies of the package. For example, in some embodiments, the current can power a sensor chip, a processor chip configured to process a signal (e.g., a signal transduced by the sensor chip), a communication chip (e.g., a transmitter configured to wirelessly transmit signals to an external device), a memory die, and/or any other suitable type of integrated device die, or a battery that is directly or indirectly recharged by the TEG device. In some embodiments, the integrated device die can monitor the operating environment, including, for example, the temperature, humidity, etc. of the steam pipe to which the package is attached.

Beneficially, the integrated device package is capable of generating sufficient power to support the operation of the integrated device package without the need for connection to an external power source. In addition, the integrated device package can be in electrical communication with an external device (e.g., a computing device) over a wireless network through one or more communication dies in the package, which can also be powered directly or indirectly by the TEG device.
Thus, the embodiments disclosed herein enable sensing, processing, and/or communication capabilities in a remote environment without the need to connect to an external power source. FIG. 1 is a schematic side cross-sectional view of an integrated device package 1 having a thermoelectric generator (TEG) device 16 and connected to a support structure such as the illustrated heat source 22, in accordance with various embodiments. FIG. 2 is an enlarged front cross-sectional view of the integrated device package 1 of FIG. 1 without the heat source 22. FIG. 3 is a partial schematic perspective exploded view of the integrated device package 1 shown in FIGS. 1 and 2. As shown in FIG. 1, the package 1 can include a first thermally conductive element 10, a second thermally conductive element 12, a package substrate 14, a TEG device 16, a plurality of electronic components 18 (e.g., dies for sensing, processing, storing, and/or communicating, passive electronic components, batteries, etc.), and a housing 20. As shown in FIGS. 1 and 2, the substrate 14, the electronic component 18, and the TEG device 16 may be vertically disposed between the first and second thermally conductive elements 10, 12. Any suitable number of TEG devices 16 can be used in the disclosed embodiments. For example, in the embodiment of Figures 1-3, a plurality (e.g., two) of TEG devices 16 are shown. The first thermally conductive element 10 and/or the second thermally conductive element 12 may comprise any suitable thermally conductive material, such as a metal (e.g., iron, nickel, cobalt, aluminum, or copper) or alloys of these materials. Substrate 14 can include any suitable type of package substrate. In the illustrated embodiment, substrate 14 includes a laminate substrate (e.g., a printed circuit board), but in other embodiments, substrate 14 may include a lead frame, a molded lead frame, a ceramic substrate, a polymer substrate, and the like. As shown in FIG.
3, the substrate 14 can include one or more apertures 26 in which the TEG device 16 can be positioned. The aperture 26 can allow the first side 31 of the TEG device 16 to be thermally coupled to the first thermally conductive element 10 and the second side 33 of the TEG device 16 to be thermally coupled to the second thermally conductive element 12. Thus, in the illustrated embodiment, the TEG device 16 may not be mechanically supported by the substrate 14. Rather, as explained herein, the second side 33 of the TEG device 16 can be connected to the second thermally conductive element 12, such as by a thermally conductive adhesive (e.g., a thermal die bonding epoxy), or the TEG device 16 can otherwise be attached to the second thermally conductive element 12 with a thermal gap pad, thermal grease, or other thermal interface material (TIM). The TEG device 16 can be electrically connected to corresponding contact pads of the substrate 14 in any suitable manner. For example, in some embodiments, the TEG device 16 can be wire bonded to the contact pads of the substrate 14 after bonding the substrate 14 to the thermally conductive element 10 or 12 that initially supports the TEG device 16. In another embodiment, the terminals of the TEG device can be connected to traces on the substrate 14 by spring contacts. In some embodiments, the second thermally conductive element 12 can include or can act as a heat sink. As shown in FIG. 1, for example, the second thermally conductive element 12 can include a lateral conductive plate 12a and a plurality of fins 12b extending perpendicularly outward from the lateral conductive plate 12a. The fins 12b can facilitate the transfer of heat from the package 1 to the external environment. As explained herein, in some embodiments, the second component 12 may not include a finned heat sink, but may, for example, include or be coupled to a second heat source or support structure having a different temperature than the heat source 22.
In some embodiments, the second element 12 can be omitted and the second side 33 of the TEG device 16 can be exposed to the external environment. In various embodiments, the second component 12 can be detachable and replaceable by a user to meet desired operational characteristics. The second component 12 can comprise any suitable thermally conductive material, such as cast or die cast steel, aluminum, copper, and the like. As shown in FIG. 3, the second thermally conductive element 12 can include a cavity 12c that is sized and shaped to receive the substrate 14, the electrical component 18, and the TEG device 16. Portions of the lateral conductive plate 12a defining the bottom of the cavity 12c may be adhered to the second (e.g., top) side 33 (FIG. 2) of the TEG device 16 by a thermally conductive adhesive. A housing 20 may be provided to mechanically secure or couple the first thermally conductive element 10 to the second thermally conductive element 12 and to protect the electrical element 18. For example, one or more fasteners 28 (e.g., screws, bolts, etc.) can mechanically couple the housing 20 to the second thermally conductive element 12. The fasteners 28 enable the user to easily assemble and/or disassemble the package, particularly to replace the second component 12 with an alternate structure for different applications. As shown in FIGS. 1 and 2, the protruding portion 10a of the first component 10 can extend through the opening to be thermally coupled to the TEG device 16. The outwardly extending flange portion 10b of the first member 10 can extend generally parallel to the housing 20. The housing 20 can support or otherwise engage the flange portion 10b to secure the first component 10 to the package 1 and position the first component 10 relative to the TEG device 16 so that heat is effectively passed from the first component 10 to the TEG device 16.
The housing 20 can surround the first component 10 to secure the first component 10 within the package 1. As shown in FIGS. 2 and 3, the first side 31 of the TEG device 16 can be thermally coupled to the first thermally conductive element 10 along a thermally conductive path. For example, the first (e.g., bottom) side 31 of the TEG device 16 can be thermally coupled to the first element 10 by a thermal interface element 11 (such as a thermally conductive gap pad or TIM) disposed between the first component 10 and the TEG device 16. In various embodiments, the thermal interface element 11 can include a gap pad (e.g., a soft dielectric film) or a TIM (which can include a metal carrier, grease, etc.). The first thermally conductive element 10 may have different temperatures during use (e.g., different temperatures under various operating conditions and environmental conditions), which may result in expansion and/or contraction of the first thermally conductive element 10. Additionally, vibration and/or other motion of the heat source 22 can be transferred to the TEG device 16 and the substrate 14 through the first component 10. The transmitted vibrations and/or motion may cause mechanical stress in the TEG device 16, which may damage the TEG 16 and/or may reduce the thermal conductivity of the TEG 16 and/or the first component 10. The thermal interface element 11 can comprise a sufficiently flexible cushioning material configured to absorb expansion and/or contraction of the first element 10 relative to the TEG device 16, and to reduce or eliminate, by absorbing vibration, the stress transmitted from the first component 10 to the TEG device 16 (e.g., to the first side 31).
In some embodiments, the thermal interface element 11 can comprise any suitable flexible material that is thermally conductive, such as an amine epoxy, an amide epoxy, an alicyclic epoxy, an amine adduct epoxy, or any other material suitable for the operating environment. In various embodiments, the thermal interface component 11 can include a thermal pad, a thermally conductive grease, and the like. Thus, the thermal interface element 11 enables the first thermally conductive element 10 to mechanically float on the TEG device 16 while providing a low thermal resistance path to the TEG device 16. The TEG device 16 may generate a current based on a temperature difference ΔT between a first (e.g., bottom) side 31 of the TEG device 16 and a second (e.g., top) side 33 of the TEG device 16 opposite the first side 31. In various embodiments, TEG device 16 can include a multi-layer semiconductor die that generates current in the presence of an inter-layer thermal gradient. In some embodiments, TEG device 16 can include a microelectromechanical system (MEMS) die, although other types of TEG devices can be used. In various embodiments, TEG device 16 can include a TEG die that includes an integrated single-chip thermoelectric energy harvester comprising a plurality of electrically connected n-type and p-type thermoelectric elements. In some embodiments, the TEG device 16 can convert thermal energy into electrical power for a temperature difference ΔT of at least 5 °C, at least 10 °C, or at least 15 °C. The TEG device 16 can generate electrical power at a level in the range of 0.00001% to 0.1% of the thermal power level provided to the TEG device 16, for example in the range of 0.0001% to 0.1% of the thermal power level. The TEG device 16 can produce 25 microwatts to 150 microwatts of electrical power per 10 °C of temperature difference ΔT.
For example, at a temperature difference ΔT of about 10 °C, in some configurations, 1 W of thermal power supplied to the TEG device 16 can produce electrical power of about 0.1 mW. For a further example of such a TEG, U.S. Patent Publication No. 2014/0246066 A1, entitled "WAFER SCALE THERMOELECTRIC ENERGY HARVESTER" and published on September 4, 2014, is hereby incorporated by reference in its entirety for all purposes. As shown in Figures 1-3, a plurality (e.g., two) of TEG devices 16 may be disposed in parallel with each other in corresponding apertures 26. The use of multiple TEG devices 16 can provide increased electrical power output compared to a package using a single TEG device. However, in other embodiments, the package 1 may comprise a single TEG device, or more than two TEG devices. Depending on the operating environment, the TEG device 16 can be made of any suitable material, such as antimony telluride, lead telluride, calcium manganese oxide, silicon, and/or combinations thereof. As explained herein, the TEG device 16, electrically connected to the substrate 14 by, for example, wire bonding or spring loaded contacts (not shown), can supply the generated electrical power directly or indirectly through a rechargeable battery to the electrical components 18 on the substrate 14. The first thermally conductive element 10 can contact the heat source 22 (which can be external to the package, such as a conduit carrying a thermal fluid) along a first thermal interface surface 24 to transfer thermal energy between the heat source 22 and the first side 31 of the TEG device 16, the first element 10 defining a thermally conductive path between the heat source 22 and the first side 31 of the TEG device 16. The first component 10 can comprise any thermally conductive material that effectively conducts heat, such as iron, copper, tungsten, and the like.
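The conversion figures above can be checked with a short calculation (a sketch using the example numbers stated in this disclosure):

```python
# Check: 1 W thermal in yields ~0.1 mW electrical out at dT of about 10 C.
thermal_w = 1.0
conversion = 0.0001            # 0.01%, within the stated 0.00001%-0.1% range
electrical_w = thermal_w * conversion
assert abs(electrical_w - 0.1e-3) < 1e-12   # 0.1 mW, matching the example

# The 25-150 microwatt output band per 10 C scales linearly with dT
# in this simple model (an assumption; the disclosure states only the band):
per_10c_uw = (25.0, 150.0)
dt_c = 20.0
band_uw = tuple(p * dt_c / 10.0 for p in per_10c_uw)
assert band_uw == (50.0, 300.0)
```

These are illustrative arithmetic checks only; actual output depends on the TEG materials and the thermal resistances of the package.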
In some embodiments, for example, if delayed heat transfer is desired, a lower thermal conductivity material can be used. In other arrangements, the package may include one or more energy storage devices (e.g., batteries) to store electrical energy generated by the TEG device. In the illustrated embodiment, the first thermally conductive element 10 comprises a magnetic material or magnet such that the thermally conductive element 10 can be directly mechanically and thermally coupled to the heat source 22. Advantageously, the use of a magnetic thermally conductive material for the first element 10 may allow the first element 10 to act both as a thermally conductive path and as a mechanical connector for attaching the package 1 to the external heat source 22. Such an arrangement may simplify the design of the package 1, reduce the overall size of the package 1, and/or increase the thermal conductivity between the first thermal interface surface 24 and the first side 31 of the TEG device 16. As described above, the second thermally conductive element 12 can be coupled to the second side 33 of the TEG device 16. The second side 33 of the TEG device 16 and the second thermally conductive element 12 can define a second thermal path between the TEG device 16 and the external environment (e.g., through the fins 12b and corresponding air gaps therebetween). The temperature difference ΔT generated between the first and second thermally conductive elements 10, 12 can produce a thermal gradient across the TEG device 16 sufficient to generate current. The plurality of electronic components 18 can include sensor chips, wireless communication dies (e.g., wireless transmitter dies and/or receiver dies), processor dies or microcontrollers, memory dies, and other components suitable for operating the package 1.
The current generated by the TEG device 16 can be transmitted to the substrate 14 (e.g., through bond wires) and to the electrical component 18 through the conductive traces of the substrate 14. For example, in some embodiments, the package 1 can include a sensor chip, such as one or more temperature sensors, optical sensors, pressure sensors, moisture or humidity sensors, and/or motion sensors. The package 1 may also include a processor or microcontroller chip to process signals converted by the sensor chip, and a communication die to wirelessly transmit the processed data to and/or receive data from an external computing device. The package 1 can be used in a variety of operating environments. For example, the first thermally conductive element 10 of the package 1 can be mounted on a steam pipe or on a tail pipe of a vehicle to measure various parameters of these systems. The second thermally conductive element 12 can be exposed to ambient air. The TEG device 16 can generate a current based on the temperature difference ΔT between the steam conduit or the tailpipe and the surrounding air. The package 1 can thereby provide power to the electrical component 18 directly or indirectly through the battery without the need for an external power source. FIG. 4 is a schematic side view of the integrated device package 1 shown in FIGS. 1-3. Figure 5 is a top plan view of the integrated device package 1 shown in Figures 1-4. The components of Figures 4-5 may be identical or substantially similar to the same numbered components of Figures 1-3, unless otherwise indicated. As shown in Figure 4, the package 1 can have a height h defined by the largest vertical dimension between the first thermal interface surface 24 and the top edge of the fins 12b. The height h may be less than 40 mm, such as in the range of 10 mm to 40 mm or in the range of 25 mm to 35 mm. As shown in Fig.
5, the width w of the package 1 can be defined by the widest transverse dimension of the package 1 as viewed from the rear. The width w may be less than 100 mm, such as in the range of 35 mm to 100 mm or in the range of 55 mm to 80 mm. Advantageously, the package 1 disclosed herein can have a low vertical profile and a small lateral footprint, particularly given the functions the package can perform without the need for an external power source or frequent battery replacement. FIG. 6 is a schematic side cross-sectional view of an integrated device package 1 connected to a plurality of heat sources 22, 32, in accordance with another embodiment. The components of Figure 6 may be identical or substantially similar to the same numbered components of Figures 1-5 unless otherwise stated. By way of comparison, in the embodiment of Figure 1, the first thermally conductive element 10 is thermally coupled to the heat source 22 and the second thermally conductive element 12 is exposed to the surrounding environment. Unlike FIG. 1, in FIG. 6, the second thermally conductive element 12 is thermally coupled to the second heat source 32, and the temperature of the second heat source 32 is different from the temperature of the heat source 22. In the embodiment of Figure 6, the first and second thermally conductive elements 10, 12 may comprise a thermally conductive magnetic material. As described above, the magnetic material for the elements 10, 12 can be used to mechanically attach the package 1 to the respective heat sources 22, 32 while providing effective heat transfer to the TEG device 16. In some embodiments, heat source 22 can include a hot steam tube and second heat source 32 can include a cold water tube. Therefore, one of the so-called "heat sources" is actually colder than the other. The first component 10 transfers heat from the first heat source 22 to the first side 31 of the TEG 16.
The second element 12 similarly transfers heat from the second side 33 of the TEG 16 to the second thermal interface surface 25 between the second element 12 and the second heat source 32. The temperature difference ΔT between the first side 31 and the second side 33 of the TEG device 16 can generate a current to provide electrical power to the electrical components 18 on the substrate 14. It should be understood that the package 1 can be connected to any suitable device that produces a thermal gradient (e.g., temperature difference ΔT) across the TEG device 16. In some embodiments, as shown in FIG. 1, the first thermally conductive element 10 can be thermally coupled to a support structure or heat source at a first temperature, and the second thermally conductive element 12 can be exposed to ambient air. In some embodiments, the first temperature can be greater than the second temperature (e.g., at least 10 °C higher than the second temperature). For example, heat source 22 can include a steam tube that is at a higher temperature than ambient air. In other configurations, the package 1 can be integrated into a wearable garment, such as a ski cap or helmet, wherein the first thermally conductive element 10 is thermally coupled to the user's body as the heat source 22 and the second thermally conductive element 12 is exposed to ambient air. During the winter, the temperature difference between the user's body and the ambient air may be large enough to generate current for powering various electronic devices. In other embodiments, the surrounding environment may be hotter than the heat source 22, allowing heat to flow from the surrounding environment to the so-called "heat source." FIG. 7 is a schematic front and bottom isometric view of the integrated device package 1 with a strap 30 attached to the housing 20.
The strap 30 (e.g., an attachment mechanism) may have sufficient flexibility to wrap around at least a portion of the heat source 22 and may be configured to facilitate attachment of the package 1 to the heat source 22. For example, in the illustrated embodiment, the strap includes a plurality of magnets 34 to connect the package 1 to the heat source 22. Thus, the embodiment of Figure 7 enables a user to easily attach the package 1 to the heat source 22 without the need for any external cables or wires to provide power or communication to the package 1. In other embodiments, the strap 30 may include an adhesive in addition to or in place of the magnets 34 to connect the strap 30 to the heat source 22. Moreover, although the embodiments described herein are illustrated with a tubular device, for example, it should be understood that the strap 30 and the package 1 can be configured to be attached to any suitable support structure or heat source, including a flat or curved support structure. Moreover, although an attachment mechanism including the strap 30 is illustrated herein, it should be understood that other types of attachment mechanisms can be used to mechanically couple the package 1 to the heat source 22 (and/or the heat source 32). FIGS. 8-14 illustrate another embodiment of an integrated device package 1 incorporating a TEG device 16. In particular, Figure 8 is a schematic side cross-sectional view of an integrated device package 1 having a thermoelectric generator device 16 and coupled to a heat source 22, in accordance with another embodiment. Figure 9 is an enlarged front cross-sectional view of the integrated device package 1 of Figure 8. Figure 10 is a schematic perspective exploded view of the portion of the integrated device package 1 shown in Figures 8 and 9. Figure 11 is a schematic side elevational view of the integrated device package 1 of Figures 8-10. Figure 12 is a top plan view of the integrated device package 1 shown in Figures 8-11. FIG.
13 is a schematic side cross-sectional view of an integrated device package 1 connected to a plurality of heat sources 22, 32, in accordance with another embodiment. Figure 14 is a schematic front and bottom perspective view of the integrated device package 1 connected to a strap 30 configured to mount the package to a heat source. Features of Figures 8-14 may be identical or substantially similar to the same numbered features of Figures 1-7 unless otherwise stated. Unlike the embodiment of FIGS. 1-7, as shown in FIG. 10, the second thermally conductive element 12 can include a mount structure 12d that is configured to support the first thermally conductive element 10. As shown in FIGS. 8 and 9, the mount structure 12d can be thermally coupled to the upper side of the TEG device 16, and the first component 10 can be thermally coupled to the underside of the TEG device 16. The mount structure 12d can include narrow protrusions that extend from the floor or outer surface of the lateral plate 12a. The mount structure 12d can be positioned in alignment with the first component 10. As shown in FIG. 10, one or more fasteners 44 (e.g., screws, bolts, etc.) and washers 45 may be used to connect the first thermally conductive element 10 with the second thermally conductive element 12. As shown, the substrate 14 and electrical component 18 can be disposed in a cavity 12c defined by the second component 12 and the outer casing 20. In some embodiments, as in the embodiment of FIGS. 1-7, a thermal interface element (e.g., thermal interface element 11) can be disposed between the first thermally conductive element 10 and the second thermally conductive element 12. Although disclosed in the context of certain embodiments and examples, those skilled in the art will appreciate that the present invention extends to other alternative embodiments and/or uses and obvious modifications and equivalents thereof.
In addition, while the present invention has been shown and described in detail, other modifications within the scope of the present disclosure will be apparent to those skilled in the art. It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the embodiments may be made and still fall within the scope of the disclosure. It is to be understood that the various features and aspects of the disclosed embodiments may be combined or substituted with each other to form different modes of the disclosed invention. Therefore, the scope of the invention disclosed herein is not intended to be limited by the particular embodiments disclosed herein, but rather
A technique to verify firmware. One embodiment of the invention uses a processor's micro-code to verify a system's firmware, such that the firmware can be included in a trusted chain of code along with the operating system.
CLAIMS What is claimed is: 1. A system comprising: a first memory to store firmware; a processor to perform micro-code to cause the integrity of the firmware to be verified. 2. The system of claim 1 further comprising a second memory to store an authenticated code (AC) module to cause a hashing function to be performed on the firmware. 3. The system of claim 2, wherein the hashing function is to generate a result, which can be used to verify the integrity of the firmware. 4. The system of claim 3, wherein the micro-code is to reference the AC module through a first pointer within a firmware interface table (FIT). 5. The system of claim 4, wherein a result of the hashing function is to be stored in a platform control register to be used by a trusted platform module. 6. The system of claim 1, wherein an address of a firmware interface table (FIT) containing an address of a routine to verify the integrity of the firmware is stored in the micro-code at location 4GB - 0x18 from the beginning of the micro-code. 7. The system of claim 6 wherein the first memory is a flash memory device. 8. The system of claim 2 wherein the second memory is a cache memory. 9. A machine-readable medium having stored therein a set of instructions, which if executed by a machine cause the machine to perform a method comprising: locating a firmware interface table (FIT) entry containing an address of an authenticated code (AC) module; authenticating the AC module; performing a hashing function to authenticate a firmware module; storing the result of the hashing function; booting an operating system if the FIT is successfully located, the AC module is successfully authenticated, and the firmware is successfully authenticated. 10. The machine-readable medium of claim 9, wherein a processor's micro-code is responsible for locating the FIT before the hashing function is performed. 11.
The machine-readable medium of claim 10, wherein the result of the hashing function is to be stored in a platform control register to be used by a trusted platform module. 12. The machine-readable medium of claim 9, wherein an address of the FIT is stored in the micro-code at location 4GB - 0x18 from the beginning of the micro-code. 13. The machine-readable medium of claim 12, wherein the safe location is located at 0xFFFFFFF0 in relation to the beginning of the micro-code. 14. The machine-readable medium of claim 10, wherein the authenticated firmware is to be part of a trusted chain of code, which includes the micro-code and the operating system. 15. The machine-readable medium of claim 9, wherein the firmware is stored in a flash memory device. 16. The machine-readable medium of claim 15, wherein the AC module is to be performed from a cache memory. 17. A processor comprising: micro-code to cause the integrity of a firmware module corresponding to a computer system to be verified. 18. The processor of claim 17, wherein the micro-code is to locate an authenticated code (AC) module, which includes a routine to verify the integrity of the firmware module. 19. The processor of claim 18, wherein the routine is a hashing function. 20. The processor of claim 19, wherein the hashing function is to generate a result, which can be used to verify the integrity of the firmware module. 21. The processor of claim 20, wherein the micro-code is to reference the AC module through a first pointer within a firmware interface table (FIT). 22. The processor of claim 21, wherein a result of the hashing function is to be stored in a platform control register to be used by a trusted platform module. 23. The processor of claim 22, wherein an address of the FIT is stored in the micro-code at location 4GB - 0x18 from the beginning of the micro-code. 24. The processor of claim 22 further comprising a cache memory to store the AC module. 25.
A method comprising: turning on a computer system in which micro-code corresponding to a processor within the computer system is responsible for causing the computer system's firmware to be verified; using the computer system. 26. The method of claim 25, wherein the micro-code is to locate an authenticated code (AC) module to verify the firmware. 27. The method of claim 25, wherein the micro-code is to cause a hashing function to be performed on the firmware. 28. The method of claim 25, wherein an operating system corresponding to the computer system is to be booted if the firmware is verified. 29. The method of claim 28, wherein the micro-code, the firmware, and the operating system are to compose a trusted code chain if the firmware is successfully verified. 30. The method of claim 25 further comprising updating and verifying the firmware without shutting down the computer system.
TECHNIQUE FOR PROVIDING SECURE FIRMWARE

FIELD

[0001] Embodiments of the invention relate to microprocessors and microprocessor systems. More particularly, embodiments of the invention pertain to a technique to provide software security in a microprocessor system.

BACKGROUND

[0002] Software security in microprocessor systems typically involves verifying the authenticity, accuracy, etc., of several layers of code in a software stack, including the operating system (OS) and applications that run within the operating system. Microprocessors and microprocessor systems, however, typically also include software that is specific to a particular computing system, such as "firmware", which can include software to perform basic input/output system (BIOS) routines. It may be desirable in some computing systems to verify the integrity of the firmware running within the system, since this firmware may be used by other functions within the OS or various applications and is therefore a vital part of the "trust chain" of verifiable software running in the system.

[0003] Prior art software security techniques may not verify the integrity of firmware within a computing system, particularly in a server system, because verifying firmware typically requires the system to be reset while system management operations verify the firmware. One prior art technique, in particular, attempts to measure and verify firmware without resetting the system by including the requisite system management operations within software stored in a portion of non-volatile memory (e.g., flash memory) that is responsible for booting the system (i.e., "boot block").

[0004] One problem with the above-mentioned prior art technique is that the boot block in some non-volatile memories may be accessible by a user and the code stored therein may be modified, thereby compromising the trust chain of software running in the system.
Another shortcoming of the prior art is that it may require server systems to include a boot block. In a computing system in which software integrity is at risk from malicious intruders, such as viruses, worms, etc., it is increasingly important to verify the integrity of software running therein, including firmware. Furthermore, in systems in which downtime may be undesirable, or even unacceptable, prior art security techniques are remiss in providing an acceptable software security solution.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] Embodiments of the invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

[0006] Figure 1 is a diagram illustrating various software components that may be used in conjunction with one embodiment of the invention.

[0007] Figure 2 is a diagram illustrating a system memory map including a firmware interface table (FIT) according to one embodiment of the invention.

[0008] Figure 3 is a flow diagram illustrating operations used in one embodiment of the invention.

[0009] Figure 4 is a front-side bus computing system in which one embodiment of the invention may be used.

[0010] Figure 5 is a point-to-point bus computing system in which one embodiment of the invention may be used.

DETAILED DESCRIPTION

[0011] Embodiments of the invention relate to microprocessors and microprocessor systems. More particularly, embodiments of the invention relate to software security within a computing system.
At least one embodiment of the invention provides a technique to verify the integrity of platform-specific code, such as BIOS or firmware (hereafter "firmware"), without powering down or otherwise resetting a computing system in which the code is used.

[0012] In one embodiment of the invention, firmware integrity may be verified at the time of reset, at which time the firmware component may be measured by using processor-specific software (e.g., embedded "micro-code") to invoke a trusted code module (i.e., authenticated code module, or "ACM") that may verify the integrity of the firmware during a boot-up process of the system, before firmware modules are invoked. Processor micro-code is typically as trusted as processor hardware because it originates only from the processor manufacturer and, like hardware, is built into the silicon at the time of manufacture. Furthermore, processor micro-code is typically programmed into the processor ROM and cannot be modified by external or internal agents. Hence, processor micro-code can be used as the lowest level of code upon which a trusted chain of code, including the OS, may be built. In one embodiment, the ACM may be a separate software module from the micro-code and/or the firmware, whereas in other embodiments, the ACM may be a part of the micro-code or firmware.

[0013] By verifying the security of the firmware at boot time, the firmware is included in the trust chain of software running in the system, such that subsequently run software, such as the OS, can rely on the integrity of the underlying firmware, thereby creating a trusted chain of software from the firmware layer to the OS to the applications running within the OS.
Furthermore, embodiments of the invention can create a trusted chain of system software from the firmware layer through the OS and application layers of the system software stack without requiring the system to be rebooted or otherwise powered down after the trusted chain is established. This may be particularly useful in server applications, which cannot tolerate system downtime.

[0014] By extending the trust chain from the micro-code through all firmware components to the OS, one embodiment enables the firmware to be successfully integrated into the trust domain of a trusted OS. As a result, the trusted OS can use platform firmware to accomplish various functions, such as reliability, availability, and serviceability (RAS) platform management tasks.

[0015] Figure 1 illustrates several software modules that may be used in at least one embodiment of the invention. In other embodiments, one or more of these modules may be combined or omitted altogether. Referring to Figure 1, micro-code module 101, in one embodiment, communicates with an embedded table inside firmware module 110 (a firmware interface table, or "FIT"). In one embodiment, the micro-code module 101 locates the FIT via an architectural address (e.g., 4GB - 0x18) and searches the FIT records to determine whether an AC module (105) is registered in it. If a FIT is not found, the micro-code may abandon the trusted mode boot and will instead invoke a reset vector at a location, such as 4GB - 0x10, used in certain "legacy" processors. If the FIT is present, an AC module is registered in it, and the record passes all the integrity tests, then micro-code can invoke the AC module by loading it into the processor secure environment (e.g., the "caches as RAM", or "CRAM", address space).

[0016] In one embodiment, the three modules are located in different locations within the computing system.
For example, in one embodiment, the micro-code is programmed into micro-code ROM (read-only memory) logic within a processor, the ACM may be located in a non-volatile memory (e.g., flash memory), and the firmware is stored in a non-volatile memory (e.g., flash memory) or other memory in a storage device within the computing system.

[0017] In one embodiment of the invention, the ACM includes a routine or routines to perform a security verification operation on the firmware, such as a SHA-2 hash function or other security function. The result of the verification routine, such as the hash function, may be a value or set of values that represent a secure identity of the firmware to be verified or authenticated. This value or values may be stored in a location, such as a platform configuration register (PCR) within a secure hardware component, such as a trusted platform module (TPM 1.2), used by the trusted chain of system code. Later on, the secure OS can hash the module again, and the resulting value of the hashing function may be compared against an expected value or values to verify the integrity of the firmware.

[0018] In one embodiment, the ACM is stored in a non-volatile memory location, such as on a disk or in flash memory, and copied into a cache memory or other relatively fast-access and secure storage location, from where it may be executed by a processor. Some processors may execute AC modules from a special mode called "CRAM mode" (Caches As RAM mode), in which the ACM is loaded into the processor cache and is executed securely. No other executing agent can modify the ACM when it is executing from CRAM mode. The exact location of the ACM is somewhat arbitrary, particularly in some embodiments that use a firmware interface table (FIT) or other structure that can identify the location of the ACM.

[0019] Figure 2 illustrates a system memory map containing a pointer to a firmware interface table (FIT), according to one embodiment.
The FIT may contain, among other values, a pointer to an AC module to be used to verify the firmware in one embodiment. The system memory map of Figure 2 contains, among other things, a reset pointer 201 at the 4GB boundary containing the address at which a program counter of a processor is to start executing code at boot-up. Also contained in the system memory map of Figure 2 is a pointer 205 to a FIT, stored in memory, which contains a pointer 210 to an AC module to be used to verify the firmware. In the memory map of Figure 2, for example, the pointer to the FIT is stored at the 4GB - 0x18 boundary, which contains a pointer to the FIT stored elsewhere in memory.

[0020] In one embodiment, micro-code of a processor may initially cause the processor to read the information stored at the 4GB - 0x18 location, which contains a pointer to the start of the FIT. The processor may read out of the FIT to find the details of all the FIT-registered modules, among other things a pointer to an AC module containing a verification routine (e.g., a hash function) to verify the firmware of the system. Both the FIT and the AC module may be stored in memory that is contiguous or non-contiguous, and may be stored in any memory location within the system.

[0021] Advantageously, in one embodiment of the invention, micro-code updates, or "patches", can be implemented by updating the FIT to point to the appropriate patch or update at reset time. In this way, micro-code can be upgraded or repaired before calling other high-level code such that the trusted chain of code is not disrupted.

[0022] Figure 3 is a flow diagram illustrating operations that may be used in some embodiments of the invention. At operation 301, a computer system is booted, in which one embodiment of the invention is used. At operation 305, micro-code used by a processor in the system locates and transfers control to an AC module.
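The FIT lookup and legacy fallback described above can be sketched in software. The following is a hypothetical model for illustration only: the record layout, the ACM type tag, and the function names are assumptions, not the actual processor micro-code.

```python
# Hypothetical model of the boot-time FIT lookup described above; the
# record fields, the ACM type tag, and the helper names are illustrative
# assumptions, not actual micro-code.

FIT_POINTER_ADDRESS = 0x1_0000_0000 - 0x18   # 4GB - 0x18: pointer to the FIT
LEGACY_RESET_VECTOR = 0x1_0000_0000 - 0x10   # 4GB - 0x10: legacy fallback

ACM_RECORD_TYPE = 2  # assumed tag marking an AC module record in the FIT

def select_boot_target(fit_records):
    """Return the address of a registered, integrity-checked AC module,
    or fall back to the legacy reset vector if no FIT (or no valid ACM
    record) is found."""
    if not fit_records:                       # FIT absent: abandon trusted boot
        return LEGACY_RESET_VECTOR
    for record in fit_records:
        if record.get("type") == ACM_RECORD_TYPE and record.get("integrity_ok"):
            return record["address"]          # ACM is loaded into CRAM and run
    return LEGACY_RESET_VECTOR
```

For example, with no FIT present, `select_boot_target(None)` yields the legacy reset vector at 4GB - 0x10, mirroring the abandoned trusted-mode boot described in paragraph [0015].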
In one embodiment, the micro-code locates the AC module by referencing a FIT, which contains a pointer to the AC module. In other embodiments, other structures may be used to locate the AC module. If the micro-code is unable to find the FIT, or if the AC module is not found, in one embodiment, the micro-code may jump to a "safe" location in program order, such as 0xFFFFF0.

[0023] At operation 308, the AC module is authenticated. In one embodiment, this may be done through prior art means (e.g., loading the module into internal CRAM and authenticating it using a micro-code-based hashing function and a CPU-stored key; if the AC module passes this authentication, then the processor will execute it). If the AC module cannot be verified, the micro-code may jump to a "safe" location in program order, such as 0xFFFFF0, at operation 309. In one embodiment, program control may jump to a similar address if the FIT cannot be located or if the firmware cannot be verified. At operation 310, the AC module initializes a trusted platform module (TPM), which contains information used by the system in relation to the trusted chain of software. At operation 315, a hash function, or other verifying function, verifies firmware to be used by the system and extends this hash into a PCR of the trusted platform module (TPM). In one embodiment, the result of the hashing function is stored in a platform configuration register associated with the system. In other embodiments, the result may be stored in other locations which are secure and not alterable by any other agents. The hashing operation, in one embodiment, continues to verify modules of the firmware until all the firmware has been verified, at which time other trusted software in the system, such as the secure (trusted) OS, may boot.

[0024] Figure 4 illustrates a front-side-bus (FSB) computer system in which one embodiment of the invention may be used. A processor 405 accesses data from a level one (L1) cache memory 410 and main memory 415.
In other embodiments of the invention, the cache memory may be a level two (L2) cache or other memory within a computer system memory hierarchy. Furthermore, in some embodiments, the computer system of Figure 4 may contain both an L1 cache and an L2 cache.

[0025] Illustrated within the processor of Figure 4 is a storage area 406 for machine state. In one embodiment, the storage area may be a set of registers, whereas in other embodiments the storage area may be other memory structures. Also illustrated in Figure 4 is a storage area 407 for save area segments, according to one embodiment. In other embodiments, the save area segments may be in other devices or memory structures. The processor may have any number of processing cores. Other embodiments of the invention, however, may be implemented within other devices within the system, such as a separate bus agent, or distributed throughout the system in hardware, software, or some combination thereof.

[0026] The main memory may be implemented in various memory sources, such as dynamic random-access memory (DRAM), a hard disk drive (HDD) 420, or a memory source located remotely from the computer system via network interface 430 containing various storage devices and technologies. The cache memory may be located either within the processor or in close proximity to the processor, such as on the processor's local bus 407.

[0027] Furthermore, the cache memory may contain relatively fast memory cells, such as a six-transistor (6T) cell, or other memory cell of approximately equal or faster access speed. The computer system of Figure 4 may be a point-to-point (PtP) network of bus agents, such as microprocessors, that communicate via bus signals dedicated to each agent on the PtP network. Figure 5 illustrates a computer system that is arranged in a point-to-point (PtP) configuration. In particular, Figure 5 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces.
[0028] The system of Figure 5 may also include several processors, of which only two, processors 570, 580, are shown for clarity. Processors 570, 580 may each include a local memory controller hub (MCH) 572, 582 to connect with memory 22, 24. Processors 570, 580 may exchange data via a point-to-point (PtP) interface 550 using PtP interface circuits 578, 588. Processors 570, 580 may each exchange data with a chipset 590 via individual PtP interfaces 552, 554 using point-to-point interface circuits 576, 594, 586, 598. Chipset 590 may also exchange data with a high-performance graphics circuit 538 via a high-performance graphics interface 539. Embodiments of the invention may be located within any processor having any number of processing cores, or within each of the PtP bus agents of Figure 5.

[0029] Other embodiments of the invention, however, may exist in other circuits, logic units, or devices within the system of Figure 5. Furthermore, other embodiments of the invention may be distributed throughout several circuits, logic units, or devices illustrated in Figure 5.

[0030] Embodiments of the invention described herein may be implemented with circuits using complementary metal-oxide-semiconductor devices, or "hardware", or using a set of instructions stored in a medium that, when executed by a machine, such as a processor, perform operations associated with embodiments of the invention, or "software". Alternatively, embodiments of the invention may be implemented using a combination of hardware and software.

[0031] While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments, which are apparent to persons skilled in the art to which the invention pertains, are deemed to lie within the spirit and scope of the invention.
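The measure-and-extend flow of operations 310 and 315 above can be sketched as follows. This is a minimal illustration assuming SHA-256 as the SHA-2 variant and modeling the TPM PCR as a plain byte string; the function names are hypothetical.

```python
import hashlib

# Minimal sketch of operations 310-315: measure each firmware module with a
# SHA-2 hash (SHA-256 assumed here) and extend the result into a PCR. A
# TPM-style extend replaces the PCR with the hash of its old value
# concatenated with the new measurement, so the final value depends on every
# module measured and on their order. The PCR is modeled as a byte string.

def measure(module: bytes) -> bytes:
    """Hash a firmware module to produce its measurement."""
    return hashlib.sha256(module).digest()

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = hash(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Verify firmware module by module, as in operation 315:
pcr = b"\x00" * 32                       # PCR starts zeroed at reset
for module in (b"firmware module A", b"firmware module B"):
    pcr = extend_pcr(pcr, measure(module))
# A trusted OS can later repeat the same measurements and compare the
# recomputed PCR value against the recorded one to verify integrity.
```

Because the extend is deterministic but order-dependent, the recorded PCR value commits to the exact sequence of firmware modules measured, which is what lets later software detect any modification of the underlying firmware.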
Systems and methods are disclosed for providing secure access to a non-volatile random access memory. One such method comprises sending an unlock password to a non-volatile random access memory (NVRAM) in response to a trusted boot program executing on a system on chip (SoC). The NVRAM compares the unlock password to a pass gate value provisioned in the NVRAM. If the unlock password matches the pass gate value, a pass gate is unlocked to enable the SoC to access a non-volatile cell array in the NVRAM.
CLAIMS

What is claimed is:

1. A method for providing secure access to a non-volatile random access memory, the method comprising:
in response to a trusted boot program executing on a system on chip (SoC), sending an unlock password to a non-volatile random access memory (NVRAM);
the NVRAM comparing the unlock password to a pass gate value provisioned in the NVRAM; and
if the unlock password matches the pass gate value, unlocking a pass gate to enable the SoC to access a non-volatile cell array in the NVRAM.

2. The method of claim 1, wherein the pass gate value is fetched from a programmable memory cell.

3. The method of claim 1, wherein the pass gate value is stored in a NVRAM fuse.

4. The method of claim 1, wherein the pass gate value is hardcoded into a memory device using one or more of a plurality of logic circuits, a read only memory, and metal traces.

5. The method of claim 1, wherein the pass gate comprises one or more switches for electrically coupling a random access memory (RAM) controller residing on the SoC to the non-volatile cell array.

6. The method of claim 1, wherein the unlock password comprises an encrypted message, and the NVRAM comparing the unlock password to the pass gate value provisioned in the NVRAM comprises decrypting an encrypted message comprising the unlock password.

7. The method of claim 1, further comprising:
if the unlock password does not match the pass gate value, maintaining the pass gate in a locked state.

8. The method of claim 1, further comprising:
maintaining a self-destruct counter to keep track of a number of failed password exchanges between the SoC and the NVRAM; and
if the self-destruct counter exceeds a threshold, permanently locking the pass gate.

9.
A system for providing secure access to a non-volatile random access memory, the system comprising:
means for sending an unlock password to a non-volatile random access memory (NVRAM) in response to a trusted boot program executing on a system on chip (SoC);
means for comparing the unlock password to a pass gate value provisioned in the NVRAM; and
means for unlocking a pass gate if the unlock password matches the pass gate value to enable the SoC to access a non-volatile cell array in the NVRAM.

10. The system of claim 9, wherein the pass gate value is fetched from a programmable memory cell.

11. The system of claim 9, wherein the pass gate value is stored in a NVRAM fuse.

12. The system of claim 9, wherein the means for unlocking the pass gate comprises one or more switches for electrically coupling a random access memory (RAM) controller residing on the SoC to the non-volatile cell array.

13. The system of claim 9, wherein the unlock password comprises an encrypted message.

14. The system of claim 13, wherein the means for comparing the unlock password to the pass gate value provisioned in the NVRAM comprises:
means for decrypting the encrypted message comprising the unlock password.

15. The system of claim 9, further comprising:
means for maintaining the pass gate in a locked state if the unlock password does not match the pass gate value.

16. The system of claim 9, further comprising:
means for maintaining a self-destruct counter to keep track of a number of failed password exchanges between the SoC and the NVRAM; and
means for permanently locking the pass gate if the self-destruct counter exceeds a threshold.

17. A non-volatile random access memory device comprising:
a non-volatile cell array;
a fuse comprising a pass gate value; and
a pass gate configured to prevent read/write access to the non-volatile cell array if a received unlock password does not match the pass gate value.

18.
The non-volatile random access memory device of claim 17, further comprising logic configured to:
fetch the pass gate value from the fuse;
compare the received unlock password to the pass gate value; and
send a control signal to the pass gate in response to the comparison of the unlock password to the pass gate value.

19. The non-volatile random access memory device of claim 17, further comprising logic configured to:
maintain a self-destruct counter to keep track of a number of times that the received unlock password does not match the pass gate value; and
if the self-destruct counter exceeds a threshold, permanently lock the pass gate.

20. The non-volatile random access memory device of claim 17, wherein the pass gate is configured to enable read/write access to the non-volatile cell array if the received unlock password matches the pass gate value.

21. The non-volatile random access memory device of claim 17, wherein the fuse comprises a programmable memory cell that stores the pass gate value.

22. The non-volatile random access memory device of claim 17, wherein the pass gate comprises one or more switches electrically coupled to the non-volatile cell array.

23. The non-volatile random access memory device of claim 17, wherein the unlock password comprises an encrypted message, and further comprising logic configured to decrypt the encrypted message.

24. A system for providing secure access to a non-volatile random access memory, the system comprising:
a system on chip (SoC) comprising a random access memory (RAM) controller; and
a non-volatile random access memory (NVRAM) electrically coupled to the RAM controller, the NVRAM comprising:
a non-volatile cell array;
a NVRAM fuse comprising a pass gate value; and
a pass gate configured to prevent read/write access to the non-volatile cell array if an unlock password received from the RAM controller does not match the pass gate value.

25.
The system of claim 24, wherein the NVRAM further comprises logic configured to:
fetch the pass gate value from the fuse;
compare the received unlock password to the pass gate value; and
send a control signal to the pass gate in response to the comparison of the unlock password to the pass gate value.

26. The system of claim 25, wherein the NVRAM further comprises logic configured to:
maintain a self-destruct counter to keep track of a number of times that the received unlock password does not match the pass gate value; and
if the self-destruct counter exceeds a threshold, permanently lock the pass gate.

27. The system of claim 26, wherein the pass gate is configured to enable read/write access to the non-volatile cell array if the received unlock password matches the pass gate value.

28. The system of claim 24, wherein the fuse comprises a programmable memory cell that stores the pass gate value.

29. The system of claim 24, further comprising logic configured to:
lock the pass gate in response to one of a device power down and a device hibernation.

30. The system of claim 24, wherein the system on chip is part of a mobile telephone, a personal digital assistant, a pager, a smartphone, a navigation device, or a hand-held computer with a wireless connection or link.
NON-VOLATILE RANDOM ACCESS MEMORY WITH GATED SECURITY ACCESS

DESCRIPTION OF THE RELATED ART

[0001] Portable computing devices (e.g., cellular telephones, smart phones, tablet computers, portable digital assistants (PDAs), portable game consoles, wearable devices, and other battery-powered devices) and other computing devices continue to offer an ever-expanding array of features and services, and provide users with unprecedented levels of access to information, resources, and communications. To keep pace with these service enhancements, such devices have become more powerful and more complex. Portable computing devices now commonly include a system on chip (SoC) comprising a plurality of memory clients embedded on a single substrate (e.g., one or more central processing units (CPUs), a graphics processing unit (GPU), digital signal processors, etc.). The memory clients may read data from and store data in an external system memory (i.e., random access memory (RAM)) electrically coupled to the SoC via a high-speed bus.

[0002] Due to its relatively low cost and high capacity, volatile memory (e.g., dynamic RAM (DRAM) and static RAM (SRAM)) is widely used for external system memory in digital electronics, such as portable computing devices. Despite these advantages, volatile memory devices consume relatively more power than non-volatile memory devices because the memory cells lose their contents after power is removed and, therefore, must be periodically refreshed. As non-volatile memory becomes more cost-effective, it may become a more viable solution for use as system memory in computing devices.
Non-volatile RAM (NVRAM) contains non-volatile memory cells that (unlike DRAM and SRAM) retain their data after power is shut off. While this may improve power efficiency, the data contained in NVRAM may be susceptible to unauthorized reading and/or writing.

[0003] For security and privacy purposes, some of the contents contained in the NV cells may be required to be tamper-proof. To provide this capability, existing solutions may employ encryption to ensure that the contents of the NV cells cannot be read and altered. All data read/written by a memory client is first decrypted/encrypted and then stored in the NV cells. However, decryption/encryption introduces latency into the read/write data path, which can reduce performance for upstream memory clients.

[0004] Another solution to the privacy/security concerns associated with NVRAM is to overwrite/erase the content of NVRAM upon power-down. The problem with this approach is that power is required to write the NVRAM, and a bad power-down may not entirely complete the operation. Also, it may be advantageous to keep NVRAM contents intact so that the next device boot can benefit in speed from the non-volatile retention of content.

[0005] Accordingly, there is a need for improved systems and methods for providing secure access to NVRAM.

SUMMARY OF THE DISCLOSURE

[0006] Systems and methods are disclosed for providing secure access to a non-volatile random access memory. One such method comprises sending an unlock password to a non-volatile random access memory (NVRAM) in response to a trusted boot program executing on a system on chip (SoC). The NVRAM compares the unlock password to a pass gate value provisioned in the NVRAM. If the unlock password matches the pass gate value, a pass gate is unlocked to enable the SoC to access a non-volatile cell array in the NVRAM.

[0007] An embodiment of a system comprises a system on chip (SoC) and a NVRAM.
The SoC comprises a random access memory (RAM) controller electrically coupled to the NVRAM. The NVRAM comprises a non-volatile cell array; a NVRAM fuse comprising a pass gate value; and a pass gate configured to prevent read/write access to the non-volatile cell array if an unlock password received from the RAM controller does not match the pass gate value.

[0008] Another embodiment is a non-volatile random access memory device comprising a non-volatile cell array, a fuse comprising a pass gate value, and a pass gate configured to prevent read/write access to the non-volatile cell array if a received unlock password does not match the pass gate value.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] In the Figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same Figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all Figures.

[0010] FIG. 1 is a block diagram of an embodiment of a system for providing secure access to a non-volatile random access memory (NVRAM).

[0011] FIG. 2 is a block diagram illustrating an embodiment of the NVRAM in FIG. 1.

[0012] FIG. 3 is a flowchart illustrating an embodiment of a method for providing secure access to the NVRAM in FIGS. 1 and 2.

[0013] FIG. 4 is a block diagram illustrating an exemplary implementation of the pass gate in the NVRAM of FIGS. 1 and 2.

[0014] FIG. 5 is a table illustrating an exemplary method of an encrypted password exchange between the SoC and the NVRAM of FIGS. 1, 2, and 4.

[0015] FIG. 6 is a flowchart illustrating an embodiment of a method for initializing the SoC and the NVRAM of FIGS. 1, 2, and 4.

[0016] FIG. 7 is a block diagram of an embodiment of a portable computing device for incorporating the system of FIGS. 1, 2, and 4.

DETAILED DESCRIPTION

[0017] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0018] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

[0019] The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

[0020] As used in this description, the terms "component," "database," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon.
The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).

[0021] In this description, the terms "communication device," "wireless device," "wireless telephone," "wireless communication device," and "wireless handset" are used interchangeably. With the advent of third generation ("3G") and fourth generation ("4G") wireless technology, greater bandwidth availability has enabled more portable computing devices with a greater variety of wireless capabilities. Therefore, a portable computing device may include a cellular telephone, a pager, a PDA, a smartphone, a navigation device, or a hand-held computer with a wireless connection or link.

[0022] FIG. 1 illustrates an embodiment of a system 100 for providing secure access to a non-volatile random access memory (NVRAM). The system 100 comprises a system on chip (SoC) 102 electrically coupled to a tamper/snoop-resistant NVRAM 104 via a high-speed bus 126. NVRAM 104 may comprise any desirable type of non-volatile memory that retains NV cell content when power is removed (e.g., spin-transfer torque magnetic random-access memory (STT-RAM), phase-change RAM (PC-RAM), resistive RAM (RE-RAM), etc.). As described below in more detail in connection with FIGS. 2 - 6, NVRAM 104 comprises a gate mechanism 204 that generally includes functionality for preventing read/write operations from accessing a NV cell array 202 unless a successful authentication or password exchange occurs between the SoC 102 and the NVRAM 104.

[0023] It should be appreciated that system 100 may be implemented in any computing device, including a personal computer, a workstation, a server, a portable computing device (PCD), such as a cellular telephone, a smartphone, a portable digital assistant (PDA), a portable game console, a navigation device, a tablet computer, a wearable device, such as a sports watch, a fitness tracking device, etc., or other battery-powered, web-enabled devices.

[0024] The SoC 102 comprises various on-chip components, including a central processing unit (CPU) 110, a static random access memory (SRAM) 112, read only memory (ROM) 114, a RAM controller 120, a storage memory controller 122, a power management interface 118, and fuses 132 electrically coupled via SoC bus 116. RAM controller 120, which is electrically coupled to NVRAM 104 via high-speed bus 126, controls communications with NVRAM 104. Storage memory controller 122, which is electrically coupled to external storage memory 106 via bus 128, controls communication with storage memory 106. Power management interface 118 is electrically coupled to a power manager controller 108 via a connection 124. Power manager controller 108 controls the power supplied to various system components. As illustrated in FIG. 1, power is supplied to SoC 102, NVRAM 104, and storage memory 106 via connections 134, 138, and 136, respectively. System 100 further comprises a power source (e.g., a battery), which is not shown.

[0025] As illustrated in FIGS. 1 and 2, the SoC 102 comprises fuse(s) 132 that are securely paired with fuse(s) 210 residing in NVRAM 104.
The SoC fuse(s) 132 and the NVRAM fuse(s) 210 are provisioned with data, values, passwords, private/public keys associated with encryption/decryption algorithm(s), etc., for implementing a secure password exchange between the SoC 102 and NVRAM 104. When powered down and upon boot-up, the gate mechanism 204 in NVRAM 104 is configured in a "locked" state that prevents read/write operations from accessing NV cell array 202. When the system 100 is booted up, a trusted boot program 130 begins executing on the CPU 110. The trusted boot program may be initially stored on the SoC 102 in ROM 114, or it may be stored externally (e.g., retrieved from storage memory 106 or from peripherals such as USB 342 or network card 388). It should be appreciated that a secure and trusted boot program that is authenticated during the boot process may be allowed to perform the secure unlocking actions. The authenticity of the trusted boot program 130 may be determined (pass or fail) by an SoC on-chip authentication scheme, which is typically implemented using immutable hardware and read-only memory (ROM) within the SoC 102. These or other steps may confirm the authenticity of the program that unlocks the NVRAM 104 so that system security is not compromised by an intruder. Failure of authentication may stop the program from advancing, resulting in the NVRAM 104 remaining locked. Upon successful verification of its legitimacy, the trusted boot program 130 (or other secure software) may proceed with the unlock procedure by fetching secure password exchange data stored in fuse(s) 132 on SoC 102. Based on the raw security data stored in fuse(s) 132 (or data calculated therefrom using private and/or public keys, encryption algorithm(s), etc.), an unlock password may be provided to RAM controller 120 and sent to NVRAM 104 via bus 126.

[0026] As illustrated in FIG. 2, RAM controller 120 comprises an interface controller 212 and a physical layer 214.
Interface controller 212 reformats the data to/from clients of SoC 102 (e.g., CPU 110, a GPU, a DSP, etc.) into a packet and/or bus protocol compatible with the NVRAM device 104. Reformatting may include data segmentation/reassembly, physical address realignment, link error handling, and the generation of control and address signals that may be driven/received by the physical layer 214 via the bus 126. Physical layer 214 provides the SoC's external electrical interface and physical connections of high-speed bus 126 to a corresponding physical layer 206 in NVRAM 104. Physical layer 206 in NVRAM 104 is electrically coupled to the gate mechanism 204. In response to receiving the unlock password from the SoC 102, the NVRAM 104 compares the received unlock password to a pass gate value provisioned in fuse(s) 210. Fuse(s) 210 may leverage existing fuse functionality in NVRAM devices. For example, fuse(s) 210 may be implemented using the fuse functionality conventionally used in memory devices for failed-row repair via row replacement (e.g., additional row(s) for storing pass gate value(s)). In an embodiment, the fuse(s) 210 may comprise a programmable memory cell. It should be appreciated, however, that the pass gate value may be hardcoded in NVRAM 104. In an embodiment, the pass gate value may be hardcoded into a memory device using, for example, logic circuit(s), state machines, a read only memory, metal traces, etc. If the unlock password received from the SoC 102 matches the pass gate value, the gate mechanism 204 may be changed from the "locked state" (in which read/write operations are disabled) to an "unlocked state" in which the SoC 102 is able to perform read/write operations to the NV cell array 202. If the unlock password received from the SoC 102 does not match the pass gate value, the gate mechanism 204 may be maintained in the "locked state" with read/write operations disabled.
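The compare-and-unlock behavior just described lends itself to a brief software model. The actual gate mechanism 204 is hardware (control logic and pass gates), so the Python sketch below is purely illustrative; the class and method names are hypothetical and not from the specification.

```python
# Illustrative software model of gate mechanism 204's compare-and-unlock
# behavior. In hardware this is control logic driving pass gates, not code;
# all names here are hypothetical.

class GateMechanism:
    def __init__(self, pass_gate_value: bytes):
        self._pass_gate_value = pass_gate_value  # provisioned in fuse(s) 210
        self.unlocked = False                    # boots in the "locked state"

    def receive_unlock_password(self, password: bytes) -> bool:
        # Transition to the "unlocked state" only on an exact match with the
        # fused pass gate value; otherwise remain locked.
        if password == self._pass_gate_value:
            self.unlocked = True
        return self.unlocked

    def read_write(self, operation):
        # Read/write operations reach NV cell array 202 only when unlocked.
        if not self.unlocked:
            raise PermissionError("NVRAM locked: read/write disabled")
        return operation()
```

A mismatched password leaves the model locked and any read/write attempt rejected, mirroring the "locked state" behavior described above.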
If repeated unlock attempts fail, the gate mechanism 204 may permanently disable the NVRAM 104 when a self-destruct counter exceeds a threshold.[0027] FIG. 3 illustrates an embodiment of a method 300 for providing secure access to NVRAM 104 via the gate mechanism 204, the SoC fuse(s) 132, and the NVRAM fuse(s) 210. At block 302, the system 100 is booted up and the trusted boot program 130 begins executing on the CPU 110. The trusted boot program 130 initiates a fetch of the unlock password stored in fuse(s) 132 on the SoC 102. At block 304, the unlock password is sent to NVRAM 104 by RAM controller 120 via bus 126. In an embodiment, the unlock password may be sent either unencrypted or encrypted. Encryption may be performed within the SoC 102. It may be encrypted programmatically by software running on the CPU 110, or it may be encrypted in dedicated encryption hardware (not shown). At block 306, NVRAM 104 receives the unlock password at physical layer 206. If the unlock password was encrypted by the SoC 102, it must first be decrypted by the hardware gate logic 404. At block 308, NVRAM 104 compares the unencrypted unlock password to a pass gate value stored in fuse(s) 210. At decision block 310, the pass gate mechanism 204 is unlocked if the unlock password matches the pass gate value. If the unlock password does not match the pass gate value, the pass gate mechanism 204 may be maintained in the locked state to prevent read/write access to the NV cell array 202.[0028] It should be appreciated that the gate mechanism 204 in NVRAM 104 may be implemented in various ways to accommodate, for example, cost, complexity, performance, level of security, etc. FIG. 4 illustrates a circuit diagram of an exemplary implementation of a gate mechanism 204 configured to provide a cost-effective, mass-producible NVRAM device. In this embodiment, the gate mechanism 204 comprises one or more pass gates 402 and control logic 404.
One of ordinary skill in the art will appreciate the design advantages of implementing the gate mechanism 204 with relatively uncomplicated circuits and logic with minimal memory die area without the use of a more complicated microcontroller.[0029] As illustrated in FIG. 4, the pass gates 402 may comprise one or more in-line switches that connect/disconnect the physical layer 206 to an interface controller 208 that provides access to NV cell array 202. As mentioned above, the physical layer 206 provides the connections associated with high-speed bus 126. Connections 126d correspond to data signals, and connections 126c correspond to address/control signals. The physical layer 206 provides the data signals associated with connections 126d to the pass gates 402 and the control logic 404 via connections 412d. The physical layer 206 provides the address/control signals associated with connections 126c to the pass gates 402 and the control logic 404 via connections 412c. [0030] As further illustrated in FIG. 4, each pass gate 402 comprises a first contact and a second contact. The first contact is electrically coupled to the corresponding data connection(s) 412d and address/control connections 412c, and the second contact on the other side of the gate or switch is electrically coupled to corresponding gated data connection(s) 414d and gated address/control connections 414c. The control logic 404 is electrically coupled to each pass gate 402 via connection(s) 416 through which gate control signals may be provided to open and close the individual switches.
In this regard, the "locked state" corresponds to the operational state in which the pass gates 402 are opened to prevent access to gated connections 414d and 414c.[0031] Other embodiments of the pass gate 402 function may include a bidirectional transceiver with an output enable controlled by the gate control 416, a bidirectional transceiver that may be powered on/off via a power rail under the control of the gate control 416, or a bidirectional latch/register that may have either output enable or power rail under the control of the gate control 416. The circuits employed may be purposefully designed for bidirectional signaling, or may consist of two separate circuits for handling each (forward and reverse) direction corresponding to write and read data traffic.[0032] As mentioned above, when the device is powered down, the control logic 404 may receive a corresponding command from the power manager controller 108 and, in response, send a "lock" gate control signal via connection(s) 416 to the pass gates 402. It should be appreciated that the gate control signals may comprise individual signals (e.g., one gate control wire for one pass gate) or a single signal (e.g., one gate control for all of the pass gates). In other embodiments, the pass gates 402 may be replaced by a power switch that powers-up or powers-down the interface controller 208 to NV cell array 202. In response to the "lock" gate control signal, the pass gates 402 are opened to prevent access to gated connections 414d and 414c. In this manner, when the device is booted, the gate mechanism 204 is in the "locked state" with the pass gates 402 in the open position to initially prevent read/write operations from accessing NV cell array 202.[0033] When system 100 is booted up and the trusted boot program 130 begins executing on the CPU 110, the unlock password stored in fuse(s) 132 on the SoC 102 may be fetched and provided to physical layer 206, as described above.
The control logic 404 fetches the pass gate value provisioned in fuse(s) 210 via, for example, a fuse data bus 418 and a fuse control bus 420. As illustrated in FIG. 4, the fuse(s) 210 may comprise a controller 422 to facilitate communication with the control logic 404. The control logic 404 compares the pass gate value to the unlock password received from the SoC 102. If the unlock password matches the pass gate value, the control logic 404 sends an "unlock" gate control signal to the pass gates 402 via connection(s) 416. In response to the "unlock" gate control signal, the pass gates 402 are closed, thereby connecting data connection(s) 412d and address/control connections 412c to gated connection(s) 414d and gated address/control connections 414c, respectively. In this "unlocked state", the gate mechanism 204 provides unrestricted access to NV cell array 202 via data bus 424 and control bus 426.[0034] As mentioned above, the password exchange between the SoC 102 and the gated NVRAM 104 may be implemented in various ways. In one embodiment, a simple unencrypted password exchange may be implemented via fuse(s) 132 and 210. In other embodiments, the secure password exchange may employ any desirable encryption algorithm(s) to improve the level of security. As illustrated in FIG. 4, when the secure password exchange employs encryption, the control logic 404 may comprise logic modules to support a decode function (block 406), a hash function (block 408), and a check function (block 410).[0035] Decode logic 406 receives control and address via bus 412c, and data via bus 412d. In an embodiment, a predetermined and/or standardized protocol may be implemented for controlling the gate logic block 404, exchanging information such as keys and passwords, or the initialization and programming of elements such as fuses 210.
For example, there may be a specific command on the control and address bus 412c that is decoded in block 406 and can then initiate the specific command function. In other embodiments, there may be a unique command and data associated for each type of function (e.g., reset gate logic, program fuse data (multiple locations), program private key, program password, program self-destruct failed tries, enable tamper mechanism, input key modulus p, input key base g, retrieve hash, unlock unencrypted password, unlock encrypted password, etc.).[0036] Decode logic 406 may be responsible for parsing and triggering the appropriate operations in response to the incoming control, address, and data. As further illustrated in FIG. 4, the control and address 412c and data 412d also arrive at pass gates 402 and, if unlocked, propagate to interface controller 208, where similar predetermined and/or standardized mission-mode operations are performed, such as NV cell array read, NV cell array write, NV cell array page select, NV cell array repair, NVRAM device configuration, PHY advanced configuration, and any other functionality that is unrelated to tamper-proofing functions.[0037] A hash function 408 performs modulo arithmetic operations for a secret key exchanging procedure and may include lookup tables and also modulo addition sequential and parallel computation logic. A check function 410 comprises the control logic for comparing the password sent from the SoC 102 against a local copy previously programmed into local NVRAM fuses 210. Decryption logic (not shown) may be included within check function 410 because the SoC 102 may choose to send the password using encryption to prevent a snooper from viewing the password as it travels via external bus 126. If the SoC 102 has encrypted the password, then the decryption logic will first decrypt the password using a shared secret key derived during a secure exchange process such as the Diffie-Hellman method.[0038] FIG.
5 illustrates an exemplary embodiment for unlocking gate mechanism 204 using a Diffie-Hellman password exchange between the SoC 102 and NVRAM 104. Each row in table 500 represents a corresponding step in the password exchange method. The operation of each step is listed in column 530. Column 532 lists information that is "known" by the SoC 102. Column 534 lists information that is "not known" by the SoC 102. Column 536 lists information that is "known" by NVRAM 104. Column 538 lists information that is "not known" by NVRAM 104. Column 540 lists information that may be susceptible to capture by a malicious "snooper". Column 542 lists information that is not susceptible to capture by the malicious "snooper", exemplifying the security provided via the Diffie-Hellman password exchange.[0039] At steps 502 and 504, the SoC 102 sends changeable public keys "g" and "p" over NVRAM bus 126. At step 506, the SoC 102 and NVRAM 104 retrieve a fixed private key, which may be programmed into the fuses 132 and 210, respectively. At steps 508, 510, and 512, the SoC 102 and NVRAM 104 each use the private and public keys to locally generate a hash, which is then exchanged. The SoC 102 transmits its hash "A" to NVRAM 104 and also reads back the NVRAM's hash "B". At steps 514 and 516, using the hash, public keys, and their respective private key, the SoC 102 and NVRAM 104 separately compute the secret shared key. Without having any access to "a" or "b", the snooper cannot compute "s". At step 518, using this secret key "s", the SoC 102 encrypts and sends a password that was previously stored in NVRAM fuses 210. At steps 520 and 522, NVRAM 104 receives the password message, decrypts it with the secret key "s", and if it matches the previously stored password then gate mechanism 204 is opened, in the manner described above.[0040] As mentioned above, the gate mechanism 204 in NVRAM 104 may be configured in various alternative ways to accommodate, for example, cost, complexity, performance, level of security, etc.
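The exchange of FIG. 5 can be illustrated numerically. The Python sketch below uses deliberately tiny key values for readability (a real device would provision much larger parameters in fuses 132 and 210), and a simple XOR stream stands in for whatever cipher a particular implementation might choose; both choices are assumptions, not details from the specification.

```python
# Hedged numeric walk-through of the Diffie-Hellman exchange of FIG. 5.
# Tiny illustrative parameters only; not a secure configuration.

p, g = 23, 5          # changeable public keys sent over bus 126 (steps 502-504)
a, b = 6, 15          # fixed private keys from SoC fuses 132 / NVRAM fuses 210

A = pow(g, a, p)      # SoC hash "A", transmitted to NVRAM (steps 508-512)
B = pow(g, b, p)      # NVRAM hash "B", read back by the SoC

s_soc = pow(B, a, p)      # step 514: SoC computes the shared secret
s_nvram = pow(A, b, p)    # step 516: NVRAM computes the same secret
assert s_soc == s_nvram   # snooper sees p, g, A, B but cannot derive "s"

def xor_stream(data: bytes, key: int) -> bytes:
    """Stand-in stream cipher keyed by the shared secret (step 518)."""
    return bytes(c ^ ((key >> (8 * (i % 4))) & 0xFF) for i, c in enumerate(data))

ciphertext = xor_stream(b"password", s_soc)   # SoC encrypts and sends
recovered = xor_stream(ciphertext, s_nvram)   # NVRAM decrypts (steps 520-522)
```

Because only the hashes "A" and "B" and the public keys cross bus 126, a snooper capturing the entire exchange still lacks "a", "b", and therefore "s".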
In one embodiment, the gate mechanism 204 may be configured, as follows, to provide a cost-effective design while providing a practically reasonable level of security protection. The control logic 404 may include a self-destruct counter configured to permanently lock the gate mechanism 204 after a predetermined number of unsuccessful password exchanges. It should be appreciated that the self-destruct counter provides an additional level of security against brute-force attacks. The fuse(s) 132 and 210 may be simplified in structure and complexity to allow a limited number of permissible values for the public and private key. In this regard, the hash function described above (block 408) may be implemented in a straightforward manner using, for example, a lookup table, linear feedback shift register, or parallel logic. In embodiments with limited public/private key values, a brute force attacker may obtain secret shared keys and attempt the password unlock. However, without knowledge of the password, the chance of a brute force attacker gaining access before the self-destruct counter mechanism permanently disables the device would be extremely low. Furthermore, the password value may be sufficiently long (e.g., any 256-bit value) while using a relatively uncomplicated encryption/decryption implementation (e.g., a stream cipher, a linear feedback shift register, block cipher, other modulo/xor logic, etc.). One of ordinary skill in the art will appreciate that, by keeping each security feature relatively low in complexity, NVRAM 104 may be implemented in a cost-effective design with a reasonable level of tamper/snoop resistance. It should be appreciated that, in a simplified configuration, the systems and methods illustrated in FIGS.
3, 4, and 5 may be implemented with a reduced level of complexity and secure protection, for example, by hardcoding a non-programmable password in the NVRAM 104, with the SoC 102 sending the hardcoded password without using encryption.[0041] FIG. 6 is a flowchart illustrating an embodiment of a method 600 for initializing a computing device manufactured to incorporate the SoC 102 and the NVRAM 104. At block 602, NVRAM 104 may be configured to an initial state in which the pass gate feature is initially disabled by unlocking the gate mechanism 204 and setting the private key = 0. At block 604, the NVRAM 104 may be paired with the SoC 102 by provisioning a private key = b, setting the self-destruct counter threshold (MAX TRY THRESHOLD) = n, a password = "password", and enabling the pass gate feature. At block 606, upon device boot-up, the device is in a default state with the gate mechanism 204 locked. A key exchange sequence may be executed, and the SoC 102 may randomly select from a set of public keys p and g. At decision block 608, NVRAM 104 initiates password authentication. If the password is authenticated, at block 616, the gate mechanism 204 is unlocked to enable read/write access to NV cell array 202. When the device is initiated to be powered down, reset, or enter a hibernate mode (block 618), the gate mechanism 204 is locked (block 620), with process flow returning to block 606. If, however, the password is not authenticated (decision block 608), the method 600 may determine (decision block 610) whether the self-destruct failed tries counter has exceeded a threshold (MAX TRY THRESHOLD). If the threshold is exceeded, a self-destruct feature may be initiated to permanently disable NVRAM 104. If the threshold is not exceeded, the gate mechanism 204 may be maintained in the "locked state", with process flow returning to block 606 and the failed tries counter being incremented. At block 616, on a successful unlocking, the failed tries counter may be reset.
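The authentication and self-destruct logic of method 600 can be sketched as follows. The Python model is illustrative only: the class name, counter handling, and threshold semantics are assumptions consistent with the flowchart, not an implementation from the specification.

```python
# Hedged sketch of method 600's authentication/self-destruct flow
# (blocks 606-620). All names here are hypothetical.

class NVRAMState:
    def __init__(self, password: bytes, max_try_threshold: int):
        self._password = password        # provisioned at pairing (block 604)
        self._max_tries = max_try_threshold  # MAX TRY THRESHOLD = n
        self._failed_tries = 0
        self.locked = True               # default state at boot (block 606)
        self.destroyed = False           # permanent disable (self-destruct)

    def authenticate(self, candidate: bytes) -> bool:
        if self.destroyed:               # permanently disabled device
            return False
        if candidate == self._password:  # block 608 -> block 616
            self.locked = False
            self._failed_tries = 0       # counter reset on successful unlock
            return True
        self._failed_tries += 1          # stay locked, count the failure
        if self._failed_tries > self._max_tries:
            self.destroyed = True        # self-destruct past the threshold
        return False

    def power_down(self):
        self.locked = True               # blocks 618-620: re-lock
```

Once the failed-tries counter passes the threshold, even the correct password no longer unlocks the model, matching the permanent-disable behavior described for the self-destruct feature.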
It should also be appreciated that, at block 604, the NVRAM 104 may be paired with the SoC 102 without enabling the pass gate feature. In this manner, the NVRAM 104 may be used in a legacy mode with an SoC that is not configured to support tamper proof operations. For example, in an embodiment, the SoC 102 may not include fuses 132, or the SoC 102 may not support additional commands to communicate with and control the NVRAM gate logic 404.[0042] As mentioned above, the system 100 may be incorporated into any desirable computing system. FIG. 7 illustrates the system 100 incorporated in an exemplary portable computing device (PCD) 700. It will be readily appreciated that certain components of the system 100 may be included on the SoC 322 (e.g., fuse(s) 132, RAM controller 120, trusted boot program 130) while other components (e.g., NVRAM 104) may be external components coupled to the SoC 322. The SoC 322 may include a multicore CPU 702. The multicore CPU 702 may include a zeroth core 710, a first core 712, and an Nth core 714. One of the cores may comprise, for example, a graphics processing unit (GPU) with one or more of the others comprising the CPU.[0001] A display controller 328 and a touch screen controller 330 may be coupled to the CPU 702. In turn, the touch screen display 706 external to the on-chip system 322 may be coupled to the display controller 328 and the touch screen controller 330. [0002] FIG. 7 further shows that a video encoder 334, e.g., a phase alternating line (PAL) encoder, a sequential couleur a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder, is coupled to the multicore CPU 702. Further, a video amplifier 336 is coupled to the video encoder 334 and the touch screen display 706. Also, a video port 338 is coupled to the video amplifier 336. As shown in FIG. 7, a universal serial bus (USB) controller 340 is coupled to the multicore CPU 702.
Also, a USB port 342 is coupled to the USB controller 340.[0003] Further, as shown in FIG. 7, a digital camera 348 may be coupled to the multicore CPU 702. In an exemplary aspect, the digital camera 348 is a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera.[0004] As further illustrated in FIG. 7, a stereo audio coder-decoder (CODEC) 350 may be coupled to the multicore CPU 702. Moreover, an audio amplifier 352 may be coupled to the stereo audio CODEC 350. In an exemplary aspect, a first stereo speaker 354 and a second stereo speaker 356 are coupled to the audio amplifier 352. FIG. 7 shows that a microphone amplifier 358 may be also coupled to the stereo audio CODEC 350. Additionally, a microphone 360 may be coupled to the microphone amplifier 358. In a particular aspect, a frequency modulation (FM) radio tuner 362 may be coupled to the stereo audio CODEC 350. Also, an FM antenna 364 is coupled to the FM radio tuner 362. Further, stereo headphones 366 may be coupled to the stereo audio CODEC 350.[0005] FIG. 7 further illustrates that a radio frequency (RF) transceiver 368 may be coupled to the multicore CPU 702. An RF switch 370 may be coupled to the RF transceiver 368 and an RF antenna 372. A keypad 374 may be coupled to the multicore CPU 702. Also, a mono headset with a microphone 376 may be coupled to the multicore CPU 702. Further, a vibrator device 378 may be coupled to the multicore CPU 702.[0006] FIG. 7 also shows that a power supply 380 may be coupled to the on-chip system 322. In a particular aspect, the power supply 380 is a direct current (DC) power supply that provides power to the various components of the PCD 700 that require power. Further, in a particular aspect, the power supply is a rechargeable DC battery or a DC power supply that is derived from an alternating current (AC) to DC transformer that is connected to an AC power source.[0007] FIG.
7 further indicates that the PCD 700 may also include a network card 388 that may be used to access a data network, e.g., a local area network, a personal area network, or any other network. The network card 388 may be a Bluetooth network card, a WiFi network card, a personal area network (PAN) card, a personal area network ultra-low-power technology (PeANUT) network card, a television/cable/satellite tuner, or any other network card well known in the art. Further, the network card 388 may be incorporated into a chip, i.e., the network card 388 may be a full solution in a chip, and may not be a separate network card 388.[0008] As depicted in FIG. 7, the touch screen display 706, the video port 338, the USB port 342, the camera 348, the first stereo speaker 354, the second stereo speaker 356, the microphone 360, the FM antenna 364, the stereo headphones 366, the RF switch 370, the RF antenna 372, the keypad 374, the mono headset 376, the vibrator 378, and the power supply 380 may be external to the on-chip system 322.[0009] It should be appreciated that one or more of the method steps described herein may be stored in the memory as computer program instructions, such as the modules described above. These instructions may be executed by any suitable processor in combination or in concert with the corresponding module to perform the methods described herein.[0010] Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention.
Further, words such as "thereafter", "then", "next", etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method.[0011] Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example.[0012] Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the Figures which may illustrate various process flows. [0013] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, NAND flash, NOR flash, M-RAM, P-RAM, R-RAM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.[0014] Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.[0015] Disk and disc, as used herein, includes compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.[0016] Alternative embodiments will become apparent to one of ordinary skill in the art to which the invention pertains without departing from its spirit and scope. Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims.
Damascene techniques are implemented using a silicon carbide hard mask to prevent contact between an organic photoresist mask and dielectric material, particularly a low-K dielectric material. Embodiments include etching using a silicon carbide hard mask to form a via opening through a low-K ILD, depositing an overlying ILD, e.g., another low-K ILD, forming a capping layer on the second ILD and etching to form a trench in communication with the underlying via opening to complete the dual damascene opening.
What is claimed: 1. A method of manufacturing a semiconductor device, the method comprising sequentially:forming a first dielectric layer on a first capping layer overlying a lower metal level; forming a silicon carbide hard mask on the first dielectric layer; etching to form a first opening having a cross-sectional width through the first dielectric layer exposing a portion of the capping layer; forming a second dielectric layer on the silicon carbide hard mask; and etching to form a second opening, having a width greater than the cross-sectional width of the first opening, through the second dielectric layer, while the silicon carbide hard mask protects an upper surface of the first dielectric layer; and continuing etching to remove a portion of the silicon carbide hard mask on the first dielectric layer and etching through the exposed portion of the first capping layer. 2. The method according to claim 1, comprising forming the silicon carbide hard mask by:depositing a layer of silicon carbide on the first dielectric layer; forming a photoresist mask on the silicon carbide layer; etching an opening in the silicon carbide layer stopping on the first dielectric layer to form the silicon carbide hard mask; and stripping the photoresist mask from the silicon carbide hard mask. 3. The method according to claim 2, comprising forming the photoresist mask at a thickness of about 1,000 Å to about 2,000 Å.4. The method according to claim 3, comprising depositing the layer of silicon carbide at a thickness of about 300 Å to about 800 Å.5. The method according to claim 1, further comprising:forming a second capping layer on the second dielectric layer; forming a photoresist mask on the second capping layer; and etching to form the second opening through the second capping layer and second dielectric layer. 6. 
The method according to claim 2, further comprising:forming a second dielectric layer on the silicon carbide hard mask; forming a second capping layer on the second dielectric layer; forming a photoresist mask over the second capping layer; and etching to form a second opening, having a width greater than the cross-sectional width of the first opening, through the second capping layer and second dielectric layer while the silicon carbide hard mask protects an upper surface of the first dielectric layer. 7. The method according to claim 6, comprising continuing etching to remove a portion of the silicon carbide hard mask on the first dielectric layer and etching through the exposed portion of the first capping layer exposing a portion of the lower metal feature.8. The method according to claim 6, wherein:the first opening constitutes a via opening; and the second opening constitutes a trench. 9. The method according to claim 8, comprising filling the via opening and trench with a metal to form a via in electrical contact with a metal line.10. The method according to claim 9, comprising filling the via opening and trench with copper (Cu) or a Cu alloy.11. The method according to claim 1, wherein the first dielectric layer comprises material having a dielectric constant less than about 3.12. The method according to claim 1, wherein the second dielectric layer comprises a material having a dielectric constant less than about 3.13. The method according to claim 12, wherein the first dielectric layer comprises material having a dielectric constant less than about 3.14. The method according to claim 1, wherein the first capping layer comprises silicon nitride.15. The method according to claim 1, wherein the first capping layer comprises silicon nitride.16. The method according to claim 6, wherein the first capping layer comprises silicon nitride.17. The method according to claim 16, wherein the second capping layer comprises silicon nitride.18.
The method according to claim 2, comprising stripping the photoresist mask with an oxygen plasma.19. A method of manufacturing a semiconductor device, the method comprising:forming a first dielectric layer on a first capping layer overlying a lower metal level; depositing a layer of silicon carbide on the first dielectric layer; forming a photoresist mask, having a thickness of 1,200 Å to 2,000 Å, on the silicon carbide layer; etching an opening in the silicon carbide layer stopping on the first dielectric layer to form the silicon carbide hard mask; stripping the photoresist mask from the silicon carbide hard mask; and etching to form a first opening having a cross-sectional width through the first dielectric layer exposing a portion of the capping layer. 20. The method according to claim 19, comprising depositing the layer of silicon carbide at a thickness of about 300 Å to about 500 Å.21. The method according to claim 19, wherein the first dielectric layer comprises SiCOH.
TECHNICAL FIELD
The present invention relates to a method of manufacturing a semiconductor device with accurately dimensioned interconnection patterns and exhibiting reduced capacitance loading. The present invention has particular applicability in manufacturing high density, multi-level semiconductor devices comprising sub-micron dimensions and exhibiting high circuit speed.
BACKGROUND ART
As integrated circuit geometries continue to plunge deeper into the submicron regime, it becomes increasingly difficult to satisfy the demands for dimensional accuracy. Moreover, interconnection technology is constantly challenged to satisfy the ever increasing requirements for high performance associated with ultra large scale integration semiconductor devices. The speed of semiconductor circuitry varies inversely with the resistance (R) and capacitance (C) of the interconnection system. The higher the value of the R*C product, the more limited the circuit speed. As integrated circuits become more complex and feature sizes and spacings become smaller, the integrated circuit speed becomes less dependent upon the transistor itself and more dependent upon the interconnection pattern. Thus, the performance of multi-level interconnects is dominated by interconnect capacitance at deep sub-micron regimes, e.g., less than about 0.12 micron. The rejection rate due to integrated circuit speed delays in sub-micron regimes has become a limiting factor in fabrication.Conventional semiconductor devices comprise a semiconductor substrate, typically doped monocrystalline silicon, and a plurality of sequentially formed interlayer dielectrics and conductive patterns. An integrated circuit is formed containing a plurality of conductive patterns comprising conductive lines separated by interwiring spacings, and a plurality of interconnect lines, such as bus lines, bit lines, word lines and logic interconnect lines.
Typically, the conductive patterns on different levels, i.e., upper and lower levels, are electrically connected by a conductive plug filling a via hole, while a conductive plug filling a contact hole establishes electrical contact with an active region on a semiconductor substrate, such as a source/drain region. Conductive lines are formed in trenches which typically extend substantially horizontally with respect to the semiconductor substrate. Semiconductor "chips" comprising five or more levels of metallization are becoming more prevalent as feature sizes shrink into the deep sub-micron regime.

A conductive plug filling a via hole is typically formed by depositing an interlayer dielectric (ILD) on a patterned conductive level comprising at least one conductive feature, forming an opening through the ILD by conventional photolithographic and etching techniques, and filling the opening with a conductive material. The excess conductive material or overburden on the surface of the ILD is typically removed by chemical-mechanical polishing (CMP). One such method is known as damascene and basically involves forming an opening in the ILD and filling the opening with a metal. Dual damascene techniques involve forming an opening comprising a lower contact or via hole section in communication with an upper trench section, which opening is filled with a conductive material, typically a metal, to simultaneously form a lower contact or via in electrical contact with a conductive line.

Copper (Cu) and Cu alloys have received considerable attention as alternative metallurgy to aluminum (Al) in interconnect metallizations. Cu is relatively inexpensive, easy to process, and has a lower resistivity than Al. In addition, Cu has improved electrical properties vis-a-vis tungsten (W), making Cu a desirable metal for use as a conductive plug as well as conductive wiring. 
However, due to Cu diffusion through dielectric materials, such as silicon dioxide, Cu interconnect structures must be encapsulated by a diffusion barrier layer. Typical diffusion barrier materials include tantalum (Ta), tantalum nitride (TaN), titanium nitride (TiN), titanium-tungsten (TiW), tungsten (W), tungsten nitride (WN), Ti-TiN, titanium silicon nitride (TiSiN), tungsten silicon nitride (WSiN), tantalum silicon nitride (TaSiN) and silicon nitride for encapsulating Cu. The use of such barrier materials to encapsulate Cu is not limited to the interface between Cu and the ILD, but includes interfaces with other metals as well.

Cu interconnect technology, by and large, has been implemented employing damascene techniques, wherein a first dielectric layer is formed over an underlying pattern having a capping layer thereon, e.g., a Cu or Cu alloy pattern with a silicon nitride capping layer. A barrier layer and optional seed layer are then deposited, followed by Cu deposition, as by electrodeposition or electroless deposition.

The dielectric constant of materials currently employed in the manufacture of semiconductor devices for an interlayer dielectric (ILD) ranges from about 3.9 for dense silicon dioxide to over 8 for deposited silicon nitride. The value of a dielectric constant expressed herein is based upon a value of one for a vacuum. In an effort to reduce interconnect capacitance, dielectric materials with lower values of permittivity have been explored. The expression "low-k" material has evolved to characterize materials with a dielectric constant less than about 3.9. One type of low-k material that has been explored is a group of flowable oxides which are basically ceramic polymers, such as hydrogen silsesquioxane (HSQ). Such polymers and their use are disclosed in, for example, U.S. Pat. Nos. 4,756,977 and 5,981,354. 
HSQ-type flowable oxides have been considered for gap filling between metal lines because of their flowability and ability to fill small openings. However, HSQ-type flowable oxides have been found to be vulnerable to degradation during various fabrication steps, including plasma etching. Methods involving plasma treatment have been developed to address such problems attendant upon employing HSQ-type flowable oxides as a gap filling layer, as in U.S. Pat. Nos. 5,866,945 and 6,083,851.

There are several organic low-k materials, typically having a dielectric constant of about 2.0 to about 3.8, which offer promise for use as an ILD. As used throughout this disclosure, the term "organic" is intended to exclude HSQ-type materials, e.g., flowable oxides and ceramic polymers, which are not true organic materials. Organic low-k materials which offer promise are carbon-containing dielectric materials such as FLARE 2.0(TM) dielectric, a poly(arylene)ether available from Allied Signal, Advanced Microelectronic Materials, Sunnyvale, Calif., Black-Diamond(TM) dielectric available from Applied Materials, Santa Clara, Calif., BCB (divinylsiloxane bis-benzocyclobutene) and Silk(TM), an organic polymer similar to BCB, both available from Dow Chemical Co., Midland, Mich.

In implementing conventional damascene techniques, such as dual damascene techniques, the organic photoresist mask is typically removed employing an oxygen (O2) plasma stripping technique after forming an opening in a dielectric layer, such as a via hole, a trench, or a dual damascene opening comprising a lower via hole in communication with an upper trench. However, in attempting to employ organic low-k materials in such interconnect technology, e.g., as an ILD, the O2 plasma stripping technique disadvantageously also strips off and degrades a portion of the organic low-k material, thereby adversely impacting device geometry and performance.

There exists a need for methodology enabling the use of organic low-k dielectric 
materials as an ILD in high density, multilevel interconnection patterns. There exists a particular need for methodology enabling the use of such organic low-k materials in damascene technology without removal or damage during photoresist stripping.

DISCLOSURE OF THE INVENTION

An advantage of the present invention is a method of manufacturing a semiconductor device with accurately dimensioned interconnection patterns, particularly Cu and/or Cu alloy interconnection patterns, and exhibiting reduced parasitic RC time delays employing organic dielectric materials having a low dielectric constant.

Additional advantages and other features of the present invention will be set forth in the description which follows and in part will be apparent to those having ordinary skill in the art upon examination of the following or may be learned from the practice of the present invention. The advantages of the present invention may be realized and obtained as particularly pointed out in the appended claims.

According to the present invention, the foregoing and other advantages are achieved in part by a method of manufacturing a semiconductor device, the method comprising: forming a first dielectric layer on a first capping layer overlying a lower metal level; forming a silicon carbide hard mask on the first dielectric layer; and etching to form a first opening having a cross-sectional width through the first dielectric layer exposing a portion of the capping layer.

Embodiments of the present invention comprise forming a via opening in the first dielectric layer, forming a second dielectric layer on the silicon carbide hard mask, forming a second capping layer on the second dielectric layer, forming a photoresist mask over the second capping layer, and anisotropically etching to form a trench opening, having a width greater than the cross-sectional width of the via opening, through the second capping layer and second dielectric layer while the silicon carbide hard mask advantageously 
protects the upper surface of the first dielectric layer under the trench. Etching is continued to remove exposed portions of the silicon carbide mask in the damascene opening, and through the exposed portion of the first capping layer, exposing a portion of the lower metal feature. Embodiments of the present invention include filling the openings with Cu or a Cu alloy layer followed by planarization, as by chemical mechanical polishing (CMP), with subsequent deposition of a capping layer, such as silicon nitride.

Additional advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein embodiments of the present invention are described, simply by way of illustration of the best mode contemplated for carrying out the present invention. As will be realized, the present invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1 through 7 schematically illustrate sequential phases of a method in accordance with an embodiment of the present invention.

DESCRIPTION OF THE INVENTION

The present invention addresses and solves reliability problems attendant upon fabricating multi-layer interconnect semiconductor devices employing organic low-k materials to reduce parasitic RC time delays. The capacitance, both layer-to-layer and within-layer, is primarily attributed to the film properties of the ILD. 
The present invention provides methodology enabling the use of various organic low-k dielectric materials for ILDs, particularly in damascene techniques, such as dual damascene techniques, particularly via-first trench-last dual damascene techniques, without or with significantly reduced stripping and degradation of the low-k ILD, thereby improving dimensional accuracy and, hence, device reliability.

Upon attempting to employ various organic low-k materials as ILDs when implementing damascene techniques, it was found that portions thereof were stripped and that degradation occurred during stripping of the overlying organic photoresist mask, e.g., as when employing O2 plasma stripping techniques. The present invention addresses and solves such problems attendant upon employing organic low-k materials for ILDs by strategically employing a silicon carbide hard mask. Embodiments of the present invention comprise depositing a layer of silicon carbide, as at a thickness of about 300 Å to about 800 Å, e.g., about 500 Å, on a low-k ILD, and then forming a thin organic photoresist mask, as at a thickness of about 1,000 Å to about 2,000 Å, e.g., about 1,200 Å to about 1,300 Å, on the layer of silicon carbide. Anisotropic etching is then conducted to form the silicon carbide mask having an opening substantially corresponding to a via opening. The photoresist mask is then removed, as by O2 plasma stripping. Advantageously, during removal of the photoresist mask, the low-k ILD is protected by the silicon carbide hard mask. Anisotropic etching is then conducted, using the silicon carbide hard mask, to form a via opening through the low-k ILD.

Subsequently, after formation of the via opening, an upper ILD is deposited, e.g., an organic low-k ILD. A capping layer, such as silicon nitride, is formed on the upper ILD. A photoresist mask defining a trench opening is then formed on the capping layer and anisotropic etching is conducted to form a trench opening. 
Advantageously, the silicon carbide hard mask protects most of the underlying low-k ILD during trench patterning. The dual damascene opening is then filled with Cu or a Cu alloy layer. As employed throughout this disclosure, the symbol Cu is intended to encompass high purity elemental copper as well as Cu-based alloys, such as Cu alloys containing minor amounts of tin, zinc, manganese, titanium, germanium, zirconium, strontium, palladium, magnesium, chromium and tantalum.

A wide variety of organic low-k materials can be employed as an ILD in accordance with embodiments of the present invention, including various polyimides, BCB, FLARE(TM), Silk(TM), and Black-Diamond(TM) dielectrics. Other suitable low-k dielectrics include poly(arylene)ethers, poly(arylene)ether azoles, parylene-N, polyimides, polynaphthalene-N, polyphenyl-quinoxalines (PPQ), polybenzoxazoles, polyindane, polynorbornene, polystyrene, polyphenyleneoxide, polyethylene and polypropylene. It was found particularly suitable to employ SiCOH, which exhibits a dielectric constant of about 3 and typically contains silicon in an amount of about 15 to about 18 at. %, e.g., about 17 at. %, oxygen in an amount of about 28 to about 30 at. %, e.g., about 29 at. %, carbon in an amount of about 16 to about 18 at. %, e.g., about 17 at. %, and hydrogen in an amount of about 36 to about 38 at. %, e.g., about 37 at. %. SiCOH contains SiC, SiH, CH and SiOH bonding.

An embodiment of the present invention is schematically illustrated in FIGS. 1 through 7, wherein like or similar features are denoted by the same reference numerals. Adverting to FIG. 1, reference numeral 10 denotes a lower metal feature, such as a metal, e.g., Cu, line, formed in ILD 11 overlying a substrate or wafer (not shown). ILD 11 can comprise any conventional dielectric material or an organic low-k material. A capping layer 12, such as silicon nitride or silicon oxynitride, is formed on ILD 11 and metal line 10.

As illustrated in FIG. 
2, an organic low-k ILD 20, such as SiCOH, is deposited over ILD 11 on capping layer 12. A layer of silicon carbide 21, at a thickness suitable to function as a hard mask, such as about 300 Å to about 800 Å, e.g., about 500 Å, is deposited on ILD 20, as by chemical vapor deposition (CVD). A relatively thin organic photoresist mask 22 is then formed on silicon carbide layer 21, as at a thickness of about 1,000 Å to about 2,000 Å, e.g., about 1,200 Å to about 1,300 Å. Photoresist mask 22 contains an opening "V" substantially corresponding to the cross-sectional width of a via opening to be formed in ILD 20. Advantageously, the use of a silicon carbide hard mask significantly reduces the amount of organic photoresist material required, so that photoresist mask 22 can be formed at a relatively minimal thickness.

As shown in FIG. 3, anisotropic etching is conducted to extend opening "V" into silicon carbide layer 21, stopping on organic low-k ILD 20, employing an etching recipe with high selectivity to ILD 20. Subsequently, as shown in FIG. 4, organic photoresist mask 22 is removed, as by O2 plasma stripping. Advantageously, the presence of silicon carbide hard mask 21 on the upper surface of organic low-k ILD 20 protects ILD 20 from removal and degradation while stripping organic photoresist mask 22.

As shown in FIG. 5, anisotropic etching is then conducted to form via opening 50, having a cross-sectional width substantially corresponding to "V", stopping on capping layer 12. The use of silicon carbide hard mask 21 to protect ILD 20 against removal and degradation during O2 plasma stripping of organic photoresist mask 22 enables via opening 50 to be formed with high dimensional accuracy.

Adverting to FIG. 6, ILD 60 is formed over ILD 20 on silicon carbide mask 21. ILD 60 can also comprise an organic low-k dielectric material, or any conventional dielectric material employed in the manufacture of semiconductor devices. 
A capping layer 61, such as silicon nitride or silicon oxynitride, is then formed on ILD 60, and a photoresist mask 62, having an opening "T" substantially corresponding to the width of the trench to be formed in ILD 60, is formed on capping layer 61. Anisotropic etching is then conducted through capping layer 61 into ILD 60 to form trench 63 having a width substantially corresponding to "T". Advantageously, the portions of silicon carbide mask 21 (illustrated by reference numeral 21A and shown in phantom) protect the upper surface of organic low-k ILD 20 during formation of trench 63. Anisotropic etching is continued until portions 21A of the silicon carbide mask within the dual damascene opening are removed and the upper surface of lower metal feature 10 is exposed, thereby completing formation of the dual damascene opening comprising via opening 50 in communication with overlying trench 63.

Subsequently, as illustrated in FIG. 7, the dual damascene opening is filled with metal. In an embodiment of the present invention, the dual damascene opening is filled with Cu by electroplating or electroless plating. Accordingly, consistent with conventional practices, a barrier layer 70, such as Ta or TaN, is initially deposited to line the dual damascene opening. Seed layer 71, such as a Cu alloy containing magnesium, aluminum, zinc, zirconium, tin, nickel, palladium, silver or gold in a suitable amount, e.g., about 0.3 to about 12 at. %, is then deposited. Cu 72 is then deposited, as by electroplating or electroless plating, to fill the dual damascene opening and form an overburden. Subsequently, CMP is conducted so that the upper surface of the deposited Cu 72 is substantially coplanar with the upper surface of ILD 60. The resulting filled dual damascene opening comprises via 72A in electrical contact with lower metal feature 10 and connected to Cu line 72. 
Subsequently, a capping layer 73, such as silicon nitride, is deposited.

The present invention provides methodology enabling the manufacture of semiconductor devices having accurately dimensioned interconnect patterns with increased circuit speed and reduced parasitic capacitance, employing organic low-k ILDs and Cu metallization. The present invention enjoys industrial applicability in manufacturing highly integrated semiconductor devices exhibiting increased circuit speed and sub-micron dimensions, e.g., with a design rule of about 0.12 micron and under. The present invention includes the use of various metals for the interconnection system, particularly Cu and Cu alloys, employing both single and dual damascene techniques.

In the preceding detailed description, the present invention is described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the present invention, as set forth in the claims. The specification and drawings are, accordingly, to be regarded as illustrative and not restrictive. It is understood that the present invention is capable of using various other combinations and environments and is capable of changes or modifications within the scope of the inventive concept as expressed herein.
An accelerator architecture for processing very-sparse and hyper-sparse matrix data is disclosed. A hardware accelerator comprises one or more tiles, each including a plurality of processing elements (PEs) and a data management unit (DMU). The PEs are to perform matrix operations involving very- or hyper-sparse matrices that are stored by a memory. The DMU is to provide the plurality of PEs access to the memory via an interface that is optimized to provide low-latency, parallel, random accesses to the memory. The PEs, via the DMU, perform the matrix operations by issuing random access read requests for values of the one or more matrices, issuing random access read requests for values of one or more vectors serving as a second operand, and issuing random access write requests for values of one or more vectors serving as a result.
A hardware accelerator comprising: one or more tiles, wherein each tile includes: a plurality of processing elements (PEs) to perform matrix operations involving, as a first operand, one or more very- or hyper-sparse matrices that are stored by a memory; and a data management unit (DMU) to provide the plurality of PEs access to the memory, the memory to be coupled with the hardware accelerator via an interface that is optimized to provide low-latency, parallel, random accesses to data; wherein the plurality of PEs, via the DMU, perform the matrix operations by: issuing a first set of random access read requests for values of the one or more matrices after identifying locations of the values by issuing random access read requests for pointer values; issuing a second set of random access read requests for values of a first set of one or more vectors serving as a second operand; and issuing a third set of random access write requests for values of a second set of one or more vectors serving as a result.

The hardware accelerator of claim 1, wherein the DMU comprises a cache to store data returned responsive to the issued first set of random access read requests for values of the one or more matrices.

The hardware accelerator of any one of claims 1-2, wherein the memory is a system memory also utilized by another hardware processor.

The hardware accelerator of any one of claims 1-3, wherein the hardware accelerator is to perform the matrix operations responsive to an offload of one or more tasks issued by another hardware processor.

The hardware accelerator of any one of claims 1-4, wherein the one or more matrices are stored in a compressed format.

The hardware accelerator of any one of claims 1-5, wherein the matrix operations include multiplication operations.

The hardware accelerator of any one of claims 1-6, wherein the matrix operations include scale and update operations, multiplication operations, and dot product operations.

A method in a hardware accelerator for performing matrix operations with very-sparse or hyper-sparse matrices, comprising: issuing, by one or more processing elements (PEs) of a plurality of PEs of one or more tiles, a first set of random access read requests via one or more data management units (DMUs) to a memory for values of one or more very-sparse or hyper-sparse matrices after identifying locations of the values by issuing random access read requests for pointer values, wherein the one or more DMUs access the memory via an interface that is optimized to provide low-latency, parallel, random accesses to data; issuing, by the one or more PEs via the one or more DMUs, a second set of random access read requests for values of a first set of one or more vectors serving as a second operand; and issuing, by the one or more PEs via the one or more DMUs, a third set of random access write requests for values of a second set of one or more vectors serving as a result.

The method of claim 8, wherein the DMU comprises a cache to store data returned responsive to the issued first set of random access read requests for values of the one or more matrices.

The method of any one of claims 8-9, wherein the memory is a system memory also utilized by another hardware processor.

The method of any one of claims 8-10, wherein the issuing of the first set of requests, second set of requests, and third set of requests occurs responsive to an offload of one or more tasks by another processor to the hardware accelerator.

The method of any one of claims 8-11, wherein the one or more matrices are stored in a compressed format.

The method of any one of claims 8-12, wherein the matrix operations include multiplication operations.

The method of any one of claims 8-13, wherein the matrix operations include scale and update operations, multiplication operations, and dot product operations.

A system comprising: one or more tiles, wherein each tile includes: a plurality of processing elements (PEs) to perform matrix operations involving, as a first operand, one or more very- or hyper-sparse matrices that are stored by a memory; and a data management unit (DMU) to provide the plurality of PEs access to the memory, the memory coupled with the hardware accelerator via an interface that is optimized to provide low-latency, parallel, random accesses to data; wherein the plurality of PEs, via the DMU, perform the matrix operations by: issuing a first set of random access read requests for values of the one or more matrices after identifying locations of the values by issuing random access read requests for pointer values; issuing a second set of random access read requests for values of a first set of one or more vectors serving as a second operand; and issuing a third set of random access write requests for values of a second set of one or more vectors serving as a result.
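The three phases of random accesses recited in the claims above (pointer reads to locate values, matrix-value reads, and operand-vector reads/result-vector writes) can be illustrated in software. The following Python sketch is a hedged illustration only: it models the access pattern for a matrix stored in compressed sparse column (CSC) form, and all names (`pe_column_update`, `col_ptr`, `rows`, `vals`) are hypothetical, not taken from the claimed hardware or any specific PE/DMU interface.

```python
# Illustrative software model of the access pattern in claim 1, for a
# matrix stored in compressed sparse column (CSC) form. Names are hypothetical.

def pe_column_update(col_ptr, rows, vals, x, y, j):
    """Process column j of a CSC matrix: y += A[:, j] * x[j]."""
    # Phase 1: random-access reads of pointer values to locate column j's nonzeros.
    start, end = col_ptr[j], col_ptr[j + 1]
    for k in range(start, end):
        # Phase 2: random-access reads of the matrix values and their row indices.
        row, val = rows[k], vals[k]
        # Phase 3: random-access read of operand vector x, write of result vector y.
        y[row] += val * x[j]
    return y

# A 3x3 matrix with nonzeros A[0,0]=2 and A[2,1]=5, in CSC form:
col_ptr = [0, 1, 2, 2]   # column j's nonzeros occupy indices col_ptr[j]..col_ptr[j+1]
rows = [0, 2]
vals = [2.0, 5.0]
x = [1.0, 3.0, 0.0]
y = [0.0, 0.0, 0.0]
for j in range(3):
    pe_column_update(col_ptr, rows, vals, x, y, j)
# y is now [2.0, 0.0, 15.0]
```

For a hyper-sparse matrix, most `col_ptr[j] == col_ptr[j + 1]` spans are empty, which is why the claimed architecture favors many small random accesses over long sequential bursts.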
TECHNICAL FIELD

The disclosure relates generally to electronics, and, more specifically, embodiments relate to a hardware accelerator architecture for processing very-sparse and hyper-sparse matrix data.

BACKGROUND

In just the past few years, algorithms from the relatively nascent field of machine learning have been widely applied for many types of practical applications, resulting in technologies such as self-driving vehicles, improved Internet search engines, speech, audio, and/or visual recognition systems, human health data and genome analysis, recommendation systems, fraud detection systems, etc. The growth of the use of these algorithms has in part been fueled by recent increases in the amount and types of data being produced by both humans and non-humans. Thus, as the increased amount of data available for analysis has skyrocketed, so too has the interest in machine learning.

In many different contexts, machine learning algorithms are commonly being implemented using large matrices. Further, many of these matrices are "sparse" matrices in that they have a significant number of "empty" or "background" values - e.g., zero values. For example, social graphs can be modeled as matrices (e.g., "adjacency matrices") that have as many rows and columns as there are people in the data set, where the elements in the cells of the matrix represent some information about the connections between each pair of people.

When storing and utilizing sparse matrices, it is useful (and sometimes, strictly necessary) to use specialized algorithms and data structures that can take advantage of the sparse structure of the matrix. This is because performing matrix operations using regular dense-matrix structures and algorithms will be quite inefficient when applied to large, sparse matrices, as processing and storage resources are effectively "wasted" due to the existence of the substantial amount of zeroes. 
Thus, sparse data can be easily compressed to require significantly less storage, and particular algorithms and computing architectures can be implemented to accommodate these compressed structures.

However, algorithms involving matrix manipulations, which include many machine learning algorithms, tend to be computationally expensive, as they can involve performing huge numbers of non-trivial operations with huge amounts of data. As a result, it is extremely important to implement these algorithms as efficiently as possible, as any small inefficiency is quickly magnified due to the large scale of computation.

Accordingly, techniques and processing architectures that can enhance the performance of these types of operations involving sparse matrix data are strongly desired.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate some embodiments. In the drawings:

Figure 1 is a block diagram illustrating matrices, vectors, and exemplary compressed matrix representations.

Figure 2 is a block diagram illustrating exemplary compressed matrix representations.

Figure 3 is a block diagram illustrating an exemplary sparse matrix, very-sparse matrix, and hyper-sparse matrix according to some embodiments.

Figure 4 is a block diagram illustrating a system including a hardware processor optimized for sparse matrix operations according to some embodiments.

Figure 5 is a block diagram illustrating a system including a hardware processor optimized for very-sparse and hyper-sparse matrix operations according to some embodiments.

Figure 6 is a flow diagram illustrating a flow for performing very-sparse or hyper-sparse matrix operations according to some embodiments.

Figure 7 illustrates an exemplary implementation in which an accelerator is communicatively coupled to a plurality of cores through a cache coherent interface according to some embodiments. 
Figure 8 illustrates another view of an accelerator according to some embodiments.

Figure 9 illustrates an exemplary set of operations performed by the processing elements according to some embodiments.

Figure 10a depicts an example of a multiplication of a sparse matrix A against a vector x to produce a vector y according to some embodiments.

Figure 10b illustrates the CSR representation of matrix A in which each value is stored as a (value, row index) pair according to some embodiments.

Figure 10c illustrates a CSC representation of matrix A which uses a (value, column index) pair.

Figures 11a, 11b, and 11c illustrate pseudo code of each compute pattern, in which:

Figure 11a illustrates a row-oriented sparse matrix dense vector multiply (spMdV_csr) according to some embodiments.

Figure 11b illustrates a column-oriented sparse matrix sparse vector multiply (spMspV_csc) according to some embodiments.

Figure 11c illustrates a scale and update operation (scale_update) according to some embodiments.

Figure 12 illustrates the processing flow for one implementation of the data management unit and the processing elements according to some embodiments.

Figure 13a highlights paths for spMspV_csc and scale_update operations according to some embodiments.

Figure 13b illustrates paths for a spMdV_csr operation according to some embodiments.

Figure 14a shows an exemplary graph.

Figure 14b shows an example of representing the graph of Figure 14a as an adjacency matrix.

Figure 14c illustrates a vertex program according to some embodiments.

Figure 14d illustrates exemplary program code for executing a vertex program according to some embodiments.

Figure 14e shows a generalized sparse matrix vector multiply (GSPMV) formulation according to some embodiments.

Figure 15 illustrates one implementation of a design framework for GSPMV according to some embodiments.

Figure 16 shows one implementation of an architecture template for GSPMV according to some embodiments. 
Figure 17 illustrates a summarization of the operation of each accelerator tile according to some embodiments.

Figure 18a illustrates a table summarizing the customizable parameters of one implementation of the template according to some embodiments.

Figure 18b illustrates tuning considerations of one implementation of the framework that performs automatic tuning to determine the best design parameters to use to customize the hardware architecture template in order to optimize it for the input vertex program and (optionally) graph data according to some embodiments.

Figure 19 illustrates the compressed row storage sparse-matrix format according to some embodiments.

Figure 20 shows exemplary steps involved in an implementation of sparse matrix-dense vector multiplication using the CRS data format according to some embodiments.

Figure 21 illustrates one implementation of an accelerator that includes an accelerator logic die and one or more stacks of DRAM die according to some embodiments.

Figure 22 illustrates one implementation of the accelerator logic chip, oriented from a top perspective through the stack of DRAM die, according to some embodiments.

Figure 23 provides a high-level overview of a dot-product engine (DPE) which contains two buffers, two 64-bit multiply-add arithmetic logic units (ALUs), and control logic according to some embodiments.

Figure 24 illustrates a blocking scheme for large sparse-matrix computations according to some embodiments.

Figure 25 illustrates a format of block descriptors according to some embodiments.

Figure 26 illustrates the use of block descriptors for a two-row matrix that fits within the buffers of a single dot-product engine, on a system with only one stacked dynamic random access memory (DRAM) data channel and four-word data bursts, according to some embodiments.

Figure 27 illustrates one implementation of the hardware in a dot-product engine according to some embodiments. 
Figure 28 illustrates the contents of the match logic 3020 unit that performs capturing according to some embodiments.
Figure 29 illustrates the details of a dot-product engine design to support sparse matrix-sparse vector multiplication according to some embodiments.
Figure 30 illustrates an example of a computation using specific values according to some embodiments.
Figure 31 illustrates how the sparse-dense and sparse-sparse dot-product engines described above can be combined to yield a dot-product engine that can handle both types of computations according to some embodiments.
Figure 32 is a block diagram of a register architecture according to some embodiments.
Figure 33A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to some embodiments.
Figure 33B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to some embodiments.
Figures 34A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip: Figure 34A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to some embodiments; Figure 34B is an expanded view of part of the processor core in Figure 34A according to some embodiments.
Figure 35 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to some embodiments.
Figures 36-39 are block diagrams of exemplary computer architectures.
Figure 36 shows a block diagram of a system in accordance with some embodiments.
Figure 37 is a block diagram of a first more specific exemplary system in accordance with some embodiments.
Figure 38 is a block diagram of a second more specific exemplary system in accordance with some embodiments.
Figure 39 is a block diagram of a SoC in accordance with some embodiments.
Figure 40 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to some embodiments.

DETAILED DESCRIPTION

The following description describes a hardware accelerator architecture for efficiently performing operations involving very-sparse and hyper-sparse matrix data. In this description, numerous specific details such as logic implementations, types and interrelationships of system components, etc., may be set forth in order to provide a more thorough understanding of some embodiments. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits, and/or full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.

References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment.
Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.

Throughout this description, the use of a letter character at the end of a reference numeral (corresponding to an illustrated entity) is not meant to indicate that any particular number of that entity must necessarily exist, but merely that the entity is one of potentially many similar entities. For example, processing elements 406A-406Z include both "A" and "Z" letter suffixes, which means that there could be two processing elements, three processing elements, sixteen processing elements, etc. Moreover, the use of dashed lines, as described above, indicates that one or more of the entities could be optional; thus, in some embodiments only one sparse tile 404A may be utilized, whereas in other embodiments multiple sparse tiles 404A-404N may be utilized. Additionally, the use of different letter characters as reference suffixes for different entities is not meant to indicate that there must be different numbers of these entities.
For example, although the sparse tiles 404A-404N and the memory units 412A-412M include different letter suffixes - i.e., "N" and "M" - there could be the same number (or different numbers) of these in various embodiments.

As indicated above, many different kinds of matrix operations are important for machine learning and in other domains, including but not limited to systems implementing graph computations using matrix operations (e.g., breadth first search), bioscience modeling, high performance computing, computer vision, computer graphics, etc. In these applications, it is quite common to utilize extremely large matrices, and it is also common for the involved matrices to be sparse.

In many systems, matrices are stored as two-dimensional arrays. Each entry in the array represents an element a(i,j) of the matrix and is accessed by the two indices, i (typically, the row index) and j (typically, the column index). For an m × n matrix, the amount of memory required to store the matrix in this format is somewhat proportional to m × n, though additional data also needs to be stored (e.g., the dimensions of the matrix, data structure "bookkeeping" data).

In the case of sparse matrices, significant memory reductions can be gained by storing only non-zero entries. Various data structures have been developed to do just this, and different ones of these structures can be utilized which, based upon the number and distribution of the non-zero entries, can result in significant savings in memory when compared to the basic array-based approach.
However, a trade-off arises in that accessing the individual elements can become more complex (e.g., require additional memory accesses due to following pointers, calculating memory addresses, etc.), and additional data structures may be needed to recover the original matrix in a truly lossless manner.

For example, Figure 1 is a block diagram illustrating matrices 102 and vectors 104/106 that may be involved in a matrix operation, and exemplary compressed matrix representations 108/122. This figure illustrates two different compressed matrix formats - Compressed Sparse Column (CSC) 108 and Compressed Sparse Row (CSR) 122 - but many formats exist, including but not limited to a Dictionary of Keys (DOK), List of Lists (LL), Doubly Compressed Sparse Column (DCSC, which is illustrated in Figure 2), etc.

The first CSC example 100 illustrates a matrix 'A' 102 serving as a first operand for a matrix operation, a first vector 'X' 104 serving as a second operand for the matrix operation, and a vector 'Y' 106 serving as a result for the matrix operation. This example includes a 6 × 4 matrix 'A' 102 (i.e., 6 rows and 4 columns), a 4-element vector 'X' 104 (which can also be thought of as a 4 × 1 matrix), and a 6-element vector 'Y' 106. However, as is described throughout, many modern applications utilizing matrices and matrix operations may utilize much larger matrices/vectors that could include hundreds, thousands, tens of thousands, hundreds of thousands, millions, etc., of dimensions.
Thus, the examples presented herein are purposefully simplified for ease of understanding, and it is to be understood that the techniques and architecture presented herein may be applicable to both "small" matrices as well as much larger ones.In this CSC example 100, the matrix 'A' 102 can be represented using the CSC format 108, in which a data structure (e.g., an array, list, vector) here named "colptr" includes four values, each of which represents a column of the matrix 102 and stores a pointer to one or more elements within the column. Each element is shown as having two data elements: a first being a value stored in the matrix, and a second being an index of that value as it is stored in the matrix. As illustrated, the column pointer that points to "col0" (the first column) includes three elements: (7, 1), (6, 3), and (2, 4) - indicating that the value "7" is stored in row[1] (i.e., the second row), value "6" is stored in row[3], and value "2" is stored in row[4].Of course, in many implementations, additional "bookkeeping" type data (and/or data structures) may also be stored and utilized (e.g., to demarcate the beginning/end of an element, to demarcate the end of the elements for a particular column) which will be discussed in further detail later herein.In this CSC example 100, one substep of a matrix multiplication is illustrated in which each value of the third column (i.e., the column having an index of 2) of the matrix 102 is multiplied by the third value (or, the element having an index of 2) within the vector 104. 
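As a hypothetical software analogue of the structure just described, the following Python sketch groups each non-zero of a dense matrix into per-column lists of (value, row index) pairs, mirroring the "colptr"-per-column arrangement above. Note that only the first column's contents are given in the text ((7, 1), (6, 3), and (2, 4), plus the value "3" at the top of the third column); the remaining entries of the matrix below are invented for illustration.

```python
# Illustrative sketch (not the patent's data structure): build a CSC-style
# representation where each column is a list of (value, row index) pairs.
def to_csc(dense):
    """Return a list of columns, each a list of (value, row_index) pairs."""
    rows, cols = len(dense), len(dense[0])
    columns = []
    for j in range(cols):
        col = [(dense[i][j], i) for i in range(rows) if dense[i][j] != 0]
        columns.append(col)
    return columns

# Hypothetical 6 x 4 matrix 'A'; only column 0 and A[0][2] = 3 come from the text.
A = [
    [0, 0, 3, 0],
    [7, 0, 0, 0],
    [0, 0, 0, 0],
    [6, 0, 0, 0],
    [2, 0, 0, 0],
    [0, 0, 0, 0],
]
print(to_csc(A)[0])  # [(7, 1), (6, 3), (2, 4)]
```

As in the text, the first column ("col0") yields the three elements (7, 1), (6, 3), and (2, 4).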
This results in the output vector 'Y' 106 - e.g., "3" (the first value of the third column of matrix 'A' 102) multiplied by "4" (the third value of vector 'X' 104) is "12", which is stored in the first value in output vector 'Y' 106.

To perform this computation using the matrix 'A' 102 when it is stored in CSC format 108, the values of the "colptr" data structure (i.e., the pointers / memory addresses) must first be loaded from memory, and these pointers must be followed (e.g., via another load from memory) to find the particular elements of each corresponding column. Additionally, each element of the columns may or may not be stored contiguously in memory, which could require additional pointer chasing. For example, for "col2", the three illustrated elements might not be stored at contiguous memory locations, and thus, there might be additional bookkeeping data (e.g., underlying structural data of the data structure, which could be pointers) that allows for the locations of these elements to be determined. Thus, to perform this operation, there may need to be several "loads" of data from memory - loads of metadata/pointers and/or loads of actual elements representing values of the matrix 102.

Another simplified example of a matrix storage format is shown as CSR example 120, which shows another operand matrix 'A', operand vector 'x', and output vector 'y'. The CSR format 122 for matrix A in this example is similar to the CSC format 108 above, but instead the values of the matrix are arranged according to rows, not columns. Thus, the CSR format 122 includes a "rowptr" structure of pointers to values of each of the rows.
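Returning to the CSC substep described above, it can be sketched in software as follows. This is a hypothetical illustration, not the patent's pseudo code: every non-zero (value, row) pair of column j is multiplied by x[j] and accumulated into y[row], so that with column 2 holding the element (3, 0) and x[2] = 4, the first entry of y becomes 3 × 4 = 12.

```python
# Hypothetical sketch of a column-oriented sparse matrix-vector multiply
# operating on the per-column (value, row index) representation above.
def spmv_csc(columns, x, num_rows):
    y = [0] * num_rows
    for j, col in enumerate(columns):   # walk each column of the matrix
        for value, row in col:          # each stored non-zero of the column
            y[row] += value * x[j]      # accumulate into the result vector
    return y

# Only column 2's first element (3, 0) and x[2] = 4 come from the text;
# the other columns here are left empty for brevity.
y = spmv_csc([[], [], [(3, 0)], []], [0, 0, 4, 0], 6)
print(y[0])  # 12
```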
Again, this example is simplified as it does not show the underlying structural metadata used to implement the CSR format 122 for matrix A, and thus again it is to be understood that a substantial number of memory loads may be required for a processor to access the values of the matrix.

As a further example of matrix storage structures and the overheads introduced as a result, we turn to Figure 2, which is a block diagram illustrating exemplary compressed matrix representations. Figure 2 first illustrates a tabular format 200 representation of the non-zero elements of a sparse 9 × 9 matrix. In this case, the column "A.I" stores values indicating the row of the value, "A.J" indicates the column of the value, and "A.V" indicates the value itself. Thus, the 9 × 9 matrix includes only four non-zero elements, and thus may benefit from being stored in a compressed format.

Accordingly, one implementation of a CSC data structure 220 (for matrix 'A' 200) commonly utilized is illustrated as utilizing a column pointers array ("CP") including j+1 entries, where some of these entries point to "IR" (row) array entries that correspond to "NUM" (value) array entries. Notably, there are some repetitions in the CP array due to some empty columns, and further, a significant number of memory reads are required to traverse the values of the matrix.

An additional matrix representation that is commonly utilized is also shown in Figure 2 - a DCSC data structure 240, which is a further-compressed version of CSC in which the repetitions in the CP are eliminated. Instead, only columns that have at least one non-zero are represented, together with their column indices. In this example, a "JC" array (which is parallel to the CP array) provides the column numbers, and the CP array is compressed to avoid the repetitions of the CSC format.
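The CSC-to-DCSC compression just described can be sketched as follows. This is a non-authoritative illustration: the "CP" and "JC" names follow the text, but the concrete pointer values below are hypothetical, since the actual contents of Figure 2 are not reproduced here (the example assumes a 9 × 9 matrix with four non-zeros spread over four non-empty columns).

```python
# Illustrative sketch: derive a DCSC-style (JC, compressed CP) pair from a
# CSC column-pointer array CP of j+1 entries. A column j is empty (and is
# dropped) when cp[j] == cp[j + 1], i.e., it contributes a repetition in CP.
def csc_to_dcsc(cp):
    """cp: CSC column pointers (length j+1). Returns (jc, dcsc_cp)."""
    jc, dcsc_cp = [], []
    for j in range(len(cp) - 1):
        if cp[j + 1] > cp[j]:      # column j holds at least one non-zero
            jc.append(j)           # record the column's index in JC
            dcsc_cp.append(cp[j])  # keep only this column's start pointer
    dcsc_cp.append(cp[-1])         # final end pointer
    return jc, dcsc_cp

# Hypothetical CP for a 9 x 9 matrix with four non-zeros in columns 1, 3, 4, 7:
cp = [0, 0, 1, 1, 2, 3, 3, 3, 4, 4]
print(csc_to_dcsc(cp))  # ([1, 3, 4, 7], [0, 1, 2, 3, 4])
```

The repetitions caused by empty columns disappear, leaving a sparse array of sparse columns.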
Thus, the DCSC representation can be viewed as a sparse array of sparse columns, whereas the CSC representation is a dense array of sparse columns.

Accordingly, a variety of low-level matrix representations exist that are in common use for matrix operations that are storage efficient at the expense of some administrative and utilization overheads (e.g., pointer chasing, additional loads).

Again, these matrix representations are particularly useful for use with sparse matrices having a significant amount of zero values. However, an interesting observation is that while the matrix representations described above provide significant benefits for storing and using sparse matrices, for a subset of these sparse matrices these matrix representations introduce significant overheads and inefficiencies.

Thus, some types of sparse matrices - especially those in which nearly all of the elements are zero - are not processed very efficiently with current architectures. For example, Figure 3 is a block diagram illustrating an exemplary sparse matrix, very-sparse matrix, and hyper-sparse matrix according to some embodiments.

For the purposes of this description, a differentiation can be made between different types of sparse matrices. As is known in the literature, there are a variety of ways to denote a matrix as being sparse. For example, a graph may be referred to as being sparse if nnz = O(n), where nnz is the number of edges in the graph, and n is the number of vertices.

Another way to distinguish between sparse and not-sparse (or "dense") matrices is based upon how many of the elements of the matrix are zero. As used herein, a "sparse" matrix or vector is a matrix or vector in which many (or most) of the elements are zero.
Thus, in some scenarios a matrix or vector may be sparse when at least half of its elements are zero, though in other scenarios the threshold can be different - e.g., a matrix or vector is sparse if at least twenty-five percent of its elements are zero, sixty percent of its elements are zero, etc. Similarly, a "dense" matrix or vector is a matrix or vector in which many or most of the elements are non-zero. The "sparsity" of a matrix/vector may be defined based on the number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix). Thus, in one implementation, a matrix/vector is considered "sparse" if its sparsity is above a specified threshold.

The category of "sparse" matrices and vectors can further be broken up into sub-segments - e.g., "regular" sparse matrices, "very-sparse" matrices, and "hyper-sparse" matrices.

For example, some literature defines a subset of sparse data structures as being "hyper-sparse" when, for graphs, nnz < n, which is fairly rare in numerical linear algebra but occurs often in computations on graphs, particularly in parallel graph computations. Put another way, a hyper-sparse matrix may be one where an extremely large ratio of the elements of the matrix are zero. Of course, the threshold for determining whether a matrix is hyper-sparse can differ based upon the particular application. For example, a matrix may be deemed hyper-sparse when the sparsity of the matrix is at least 80%, or 90%, or 95%, or 97%, or 99%, or 99.5%, etc.

A further category of sparse matrix deemed a "very-sparse" matrix can be defined as satisfying the threshold for "regular" sparse matrices but not satisfying the sparsity threshold to be considered a "hyper-sparse" matrix.
Again, the precise formulations may vary based upon the application, but in some embodiments a "regular" sparse matrix could be one having a sparsity of 50-70% (i.e., a minimum threshold of 50% and a maximum threshold of 70%), a "very-sparse" matrix could be one having a sparsity greater than 70% but less than 98%, and a hyper-sparse matrix could be one having a sparsity greater than 98%. As another example, a regular sparse matrix could be one having a sparsity between 25-75%, a very-sparse matrix could be one having 75-95%, and a hyper-sparse matrix could be one having a sparsity in excess of 95%. Thus, it is to be understood that there are many different ways to align the particular thresholds.

Accordingly, a small portion of an exemplary "regular" sparse matrix 305 (40,000 × 40,000) is illustrated to convey that a substantial number of its values are zero (here, 25 of the 56 values), whereas the small portion of an exemplary "very-sparse" matrix 310 includes more zero values (here, 44 of the 56 values), while the illustrated small portion of the hyper-sparse matrix 315 includes a very large number of zeros (here, 54 of the 56 values).

In addition to categorizing the sparseness of a matrix based upon its sparsity ratio, in some scenarios the sparseness type (or category) can be based (in whole or in part) upon whether any rows or columns are completely empty. For example, in some embodiments, a very-sparse or hyper-sparse matrix may be defined as a matrix including a particular number of rows and/or columns that are empty. This determination of the sparseness type may be independent of the particular sparsity ratio of the matrix (e.g., a matrix with a very large sparsity ratio may not, in some cases, qualify as a very- or hyper-sparse matrix if it does not have a requisite threshold number of empty rows and/or columns), or the determination may be based upon a combination of both the sparsity ratio and the row/column-emptiness criteria.
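As a minimal sketch of one such threshold alignment, the following Python uses the first example set given above (regular sparse at 50-70%, very-sparse above 70% up to 98%, hyper-sparse above 98%); as the text notes, these particular cut-offs are only one of many possible choices.

```python
# Sparsity ratio as defined above: zero-valued elements divided by the
# total number of elements (m * n for an m x n matrix).
def sparsity_ratio(matrix):
    total = sum(len(row) for row in matrix)
    zeros = sum(1 for row in matrix for v in row if v == 0)
    return zeros / total

# Classification using the first example thresholds from the text.
def sparseness_category(sparsity):
    if sparsity > 0.98:
        return "hyper-sparse"
    if sparsity > 0.70:
        return "very-sparse"
    if sparsity >= 0.50:
        return "sparse"
    return "dense"

print(sparseness_category(0.99))  # hyper-sparse
print(sparseness_category(0.85))  # very-sparse
```

A fuller classifier could also apply the row/column-emptiness criterion discussed above, alone or in combination with the ratio.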
Figure 4 is a block diagram illustrating a system 400 including a hardware processor 402 optimized for sparse matrix operations according to some embodiments. The hardware processor 402 can be an accelerator device that can perform operations that have been offloaded by another hardware processor 401 (e.g., via one or more interconnections / buses / etc. 416). Further details regarding accelerators as well as this architecture for processing sparse matrices are presented later herein with regard to later figures.

The hardware processor 402 includes one or more "sparse" tiles 404A-404N. Each of the sparse tiles 404A-404N includes one or more processing elements (PEs) 406A-406Z, though in many embodiments each tile includes multiple PEs. Each of the PEs 406A-406Z can be thought of as similar to a processor core, and the PEs are presented in additional detail with regard to the later figures. Each of the processing elements 406A-406Z may comprise circuitry to execute one or more instructions to perform operations, and may or may not be part of a processor core.
Thus, a processing element may be thought of as one type of a hardware processor or one part of a hardware processor.

Each sparse tile (e.g., sparse tile 404A) can also include a random access memory 408 (e.g., an on-chip cache) as well as a data management unit (DMU) 410 that provides access to one or more off-tile memory units 412A-412M (e.g., storing the matrices involved in the operations) via a memory interface 414 that is optimized for high bandwidth data transfers.

This hardware processor 402 can utilize a variety of techniques to optimize the execution efficiency of sparse matrix operations.

First, in some embodiments, the hardware processor 402 can partition the matrix into small enough blocks such that each vector subset being operated against each block can fit in the on-chip RAMs 408, so that it can be efficiently accessed in an irregular/random manner locally and reused when operated against the non-zero elements in the matrix block. Thus, in some embodiments, the "X" vector 104 and/or "Y" vectors 106 (shown in Figure 1) can be kept on-chip in the RAMs 408 for very fast, low-latency updates.

Second, in some embodiments, the hardware processor 402 can stream the non-zeros of the rows (or columns) of the matrix from the off-chip memory unit(s) 412A-412M to saturate the available memory bandwidth. Each of the streamed non-zeros can be applied against the vector subset being kept on-chip, as explained above. Thus, in some embodiments, the values of the matrix 'A' 102 of Figure 1 can be streamed over a high bandwidth connection to be processed by the processing elements 406A-406Z.

These techniques work especially well with sparse matrices where there are sufficient amounts of non-zeros per block. However, this architecture is not as effective for very-sparse and hyper-sparse matrices.
This is due to the following reasons:

First, because a very/hyper-sparse matrix has very few non-zeros, it incurs relatively higher blocking overhead (e.g., due to row or column pointers). This means that there is larger overhead for processing "bookkeeping" data (e.g., different data structures, pointers, etc.) as well as making memory accesses to them, relative to the processing of the actual non-zero matrix elements.

Additionally, because very/hyper-sparse matrices have very few non-zeros per column (or row), accessing the columns (or rows) involves making a large number of small (or "short") memory accesses. This is not efficient for an architecture optimizing memory accesses to be high bandwidth (e.g., at the expense of latency). This also means that there is less data reuse on the vector being operated against. For hyper-sparse matrices, there is also a heightened amount of additional short reads when using doubly-compressed formats (e.g., DCSC 240 of Figure 2) to more efficiently represent empty rows/columns.

Further, any data dependence from having to access a column (or row) pointer to access the non-zeros of the column (or row) is more exposed because there are few non-zeros to be accessed and processed that could potentially hide the access to the next column (or row) pointer.
This results in performance being negatively impacted by the relatively-large memory latency.

Accordingly, an alternate architecture for performing sparse matrix operations involving very- and/or hyper-sparse matrices is shown in Figure 5, which is a block diagram illustrating a system 500 including a hardware processor 502 (e.g., an accelerator device) optimized for very-sparse and hyper-sparse matrix operations according to some embodiments.

Embodiments utilizing this architecture can dramatically improve the processing efficiency of very/hyper-sparse matrix data, and can be implemented in a variety of ways, such as using Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs), etc.

As shown in Figure 5, the hardware processor 502 includes one or more very/hyper-sparse tiles 504A-504N, each including one or more processing elements 406A-406Z and a DMU 510. The DMU 510 can provide the one or more processing elements 406A-406Z access to one or more off-tile memory units 512A-512M via a memory interface 514 that is optimized for low-latency random accesses (e.g., as opposed to the high-bandwidth accesses, such as streaming, of Figure 4) with high parallelism (e.g., using heavily-banked memory). In some embodiments, the DMU 510 can include a gather-scatter unit 512 to perform gathers (e.g., irregular accesses via following pointers, etc.)
and scatters without necessarily requiring the involvement of the requesting one or more processing elements 406A-406Z.

Using this architecture, the hardware processor 502 is optimized for processing large matrix blocks with a low-latency memory sub-system capable of handling parallel small/short irregular memory accesses.

In some embodiments, the hardware processor 502 can minimize blocking overhead by using large blocks, even if it means that the vector subset being operated against the matrix block also becomes large.

In some embodiments, the hardware processor 502 can thus use a larger vector subset, which can be kept in the memory unit(s) 512A-512M (as opposed to bringing it onto RAM 408, as in Figure 4). Hence, the DMU 510 can be adapted (e.g., via gather/scatter unit 512) to efficiently handle parallel gather/scatter (i.e., irregular) memory accesses to this vector subset.

Optionally, in some embodiments the DMU 510 can include a comparatively small on-chip cache 514 to capture the modest data re-use available in this vector subset. For example, when accessing values of a column of a matrix, in some cases there may be several values of the column stored in contiguous memory locations. Thus, depending upon the granularity of the memory system (e.g., the size/amount of data returned for a read) and the size of the matrix values (e.g., a data type of the values/indices), a memory access may possibly return a next-needed value/index.
For example, if a value and an index (representing an element of a matrix) are each 4 bytes in size, a 16-byte memory access may retrieve two elements, the second of which might be a next-needed element, which provides the benefits of spatial locality.

In some embodiments, the DMU 510 is also optimized for low latency to limit exposure to column (or row) pointer chasing dependencies, as well as to support parallel short memory accesses tailored for short matrix columns (or rows).

Thus, according to some embodiments, the memory units 512A-512M are adapted for low latency, parallel, short, irregular accesses, even if this comes at the expense of lessened bandwidth. To implement these features, there are many memory optimizations known to those of ordinary skill in the art that can be used (smaller rows, narrow prefetch buffers, etc.).

In some embodiments, as these matrix operations are memory-intensive, the number of PEs 406A-406Z involved in the operations can be minimized to match the rate of data being brought from the memory units 512A-512M.

Thus, some embodiments using this architecture can handle the same set of sparse matrix operations as in the previous architecture of Figure 4, but at a better execution efficiency when the involved matrix datasets are very-sparse or hyper-sparse in nature. This results from, among other things, accessing the matrix values using short, irregular, low-latency memory accesses, whereas the architecture of Figure 4 (which provides efficient sparse matrix computations for "regular" sparse matrices) utilizes streaming of the non-zero elements of the rows (or columns) of the matrix, and/or localizing/re-using the vector subset being operated against in an on-chip memory, e.g., through properly blocking the matrix data.

The number of PEs 406A-406Z can be specifically chosen, for example, based upon the memory connection technology (i.e., the latency and/or bandwidth of the memory providing the low-latency, parallel, random accesses).
For example, simulation modeling can be performed to determine the optimal number of PEs 406A-406Z to properly saturate the memory without under-utilizing either the memory or the set of PEs 406A-406Z.

Figure 6 is a flow diagram illustrating a flow 600 for performing very-sparse or hyper-sparse matrix operations according to some embodiments. The operations in this and other flow diagrams will be described with reference to the exemplary embodiments of the other figures. However, it should be understood that the operations of the flow diagrams can be performed by embodiments other than those discussed with reference to the other figures, and the embodiments discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagrams. In some embodiments, this flow 600 is performed by a hardware processor 502 (e.g., hardware accelerator) of Figure 5.

Flow 600 includes, at block 605, issuing, by one or more processing elements (PEs) of a plurality of PEs of one or more tiles, a first set of random access read requests via one or more data management units (DMUs) to a memory for values of one or more very-sparse or hyper-sparse matrices after identifying locations of the values by issuing random access read requests for pointer values. The one or more DMUs access the memory via an interface that is optimized to provide low-latency, parallel, random accesses to data.

Flow 600 also includes, at block 610, issuing, by the one or more PEs via the one or more DMUs, a second set of random access read requests for values of a first set of one or more vectors serving as an operand. Flow 600 also includes, at block 615, issuing, by the one or more PEs via the one or more DMUs, a third set of random access write requests for values of a second set of one or more vectors serving as a result.
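The three sets of accesses in flow 600 can be sketched in software terms as follows. This is a non-authoritative analogue, not the hardware flow itself: plain Python reads and writes stand in for the random access read/write requests issued by the PEs via the DMU(s), and the data-structure names (a CSC-style column-pointer array plus parallel value and row-index arrays) are illustrative assumptions rather than details from the patent.

```python
# Hypothetical software analogue of flow 600 over a CSC-style matrix.
def flow_600(colptr, values, row_idx, x, y):
    for j in range(len(colptr) - 1):
        # random access reads of pointer values to identify element locations
        start, end = colptr[j], colptr[j + 1]
        for k in range(start, end):
            v = values[k]           # block 605: first set - matrix value reads
            r = row_idx[k]
            operand = x[j]          # block 610: second set - operand vector reads
            y[r] += v * operand     # block 615: third set - result vector writes
    return y

# Tiny invented example: a 2 x 2 diagonal-ish matrix with values 2 and 3.
print(flow_600([0, 1, 2], [2, 3], [0, 1], [5, 7], [0, 0]))  # [10, 21]
```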
Examples

According to some embodiments, a hardware accelerator comprises one or more tiles, wherein each tile includes: a plurality of processing elements (PEs) to perform matrix operations involving, as a first operand, one or more very- or hyper-sparse matrices that are stored by a memory; and a data management unit (DMU) to provide the plurality of PEs access to the memory, the memory to be coupled with the hardware accelerator via an interface that is optimized to provide low-latency, parallel, random accesses to data; wherein the plurality of PEs, via the DMU, perform the matrix operations by: issuing a first set of random access read requests for values of the one or more matrices after identifying locations of the values by issuing random access read requests for pointer values; issuing a second set of random access read requests for values of a first set of one or more vectors serving as a second operand; and issuing a third set of random access write requests for values of a second set of one or more vectors serving as a result.

In some embodiments, the DMU comprises a cache to store data returned responsive to the issued first set of random access read requests for values of the one or more matrices. In some embodiments, the memory is a system memory also utilized by a hardware processor. In some embodiments, the hardware accelerator is to perform the matrix operations responsive to an offload of one or more tasks issued by a hardware processor. In some embodiments, the one or more matrices are stored in a compressed format. In some embodiments, the matrix operations include multiplication operations.
In some embodiments, the matrix operations include scale and update operations, multiplication operations, and dot product operations.According to some embodiments, a method in a hardware accelerator for performing matrix operations with very-sparse or hyper-sparse matrices comprises: issuing, by one or more processing elements (PEs) of a plurality of PEs of one or more tiles, a first set of random access read requests via one or more data management units (DMUs) to a memory for values of one or more very-sparse or hyper-sparse matrices after identifying locations of the values by issuing random access read requests for pointer values, wherein the one or more DMUs access the memory via an interface that is optimized to provide low-latency, parallel, random accesses to data; issuing, by the one or more PEs via the one or more DMUs, a second set of random access read requests for values of a first set of one or more vectors serving as an operand; and issuing, by the one or more PEs via the one or more DMUs, a third set of random access write requests for values of a second set of one or more vectors serving as a result.In some embodiments, the DMU comprises a cache to store data returned responsive to the issued first set of random access read requests for values of the one or more matrices. In some embodiments, the memory is a system memory also utilized by another hardware processor. In some embodiments, the issuing the first set of requests, second set of requests, and third set of requests occurs responsive to an offload of one or more tasks by another processor to the hardware accelerator. In some embodiments, the one or more matrices are stored in a compressed format. In some embodiments, the matrix operations include multiplication operations. 
In some embodiments, the matrix operations include scale and update operations, multiplication operations, and dot product operations.

According to some embodiments, a system comprises a memory, and one or more tiles, wherein each tile includes: a plurality of processing elements (PEs) to perform matrix operations involving, as a first operand, one or more very- or hyper-sparse matrices that are stored by the memory; and a data management unit (DMU) to provide the plurality of PEs access to the memory, the memory coupled with the hardware accelerator via an interface that is optimized to provide low-latency, parallel, random accesses to data. The plurality of PEs, via the DMU, perform the matrix operations by: issuing a first set of random access read requests for values of the one or more matrices after identifying locations of the values by issuing random access read requests for pointer values; issuing a second set of random access read requests for values of a first set of one or more vectors serving as a second operand; and issuing a third set of random access write requests for values of a second set of one or more vectors serving as a result.

In some embodiments, the DMU comprises a cache to store data returned responsive to the issued first set of random access read requests for values of the one or more matrices. In some embodiments, the memory is a system memory also utilized by another hardware processor. In some embodiments, the system is to perform the matrix operations responsive to an offload of one or more tasks issued by another hardware processor. In some embodiments, the one or more matrices are stored in a compressed format. In some embodiments, the matrix operations include multiplication operations.
In some embodiments, the matrix operations include scale and update operations, multiplication operations, and dot product operations.

According to some embodiments, a hardware accelerator to perform matrix operations with very-sparse or hyper-sparse matrices comprises: a first means, including: a second means to perform matrix operations involving, as a first operand, one or more very- or hyper-sparse matrices that are stored by a third means; and a fourth means to provide the second means access to the third means, the third means to be coupled with the hardware accelerator via an interface that is optimized to provide low-latency, parallel, random accesses to data; wherein the second means, via the fourth means, perform the matrix operations by: issuing a first set of random access read requests for values of the one or more matrices after identifying locations of the values by issuing random access read requests for pointer values; issuing a second set of random access read requests for values of a first set of one or more vectors serving as a second operand; and issuing a third set of random access write requests for values of a second set of one or more vectors serving as a result.

Embodiments disclosed herein utilize electronic devices. An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals).
Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code, since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed); while the electronic device is turned on, the part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices. One or more parts of an embodiment may be implemented using different combinations of software, firmware, and/or hardware.

Exemplary Accelerator Architectures

Overview

In some implementations, an accelerator is coupled to processor cores or other processing elements to accelerate certain types of operations such as graphics operations, machine-learning operations, pattern analysis operations, and (as described in detail below) sparse matrix multiplication operations, to name a few. The accelerator may be communicatively coupled to the processor/cores over a bus or other interconnect (e.g., a point-to-point interconnect) or may be integrated on the same chip as the processor and communicatively coupled to the cores over an internal processor bus/interconnect.
Regardless of the manner in which the accelerator is connected, the processor cores may allocate certain processing tasks to the accelerator (e.g., in the form of sequences of instructions or µops), which includes dedicated circuitry/logic for efficiently processing these tasks.

Figure 7 illustrates an exemplary implementation in which an accelerator 700 is communicatively coupled to a plurality of cores 710-711 through a cache coherent interface 730. Each of the cores 710-711 includes a translation lookaside buffer 712-713 for storing virtual to physical address translations and one or more caches 714-715 (e.g., L1 cache, L2 cache, etc.) for caching data and instructions. A memory management unit 720 manages access by the cores 710-711 to system memory 750, which may be a dynamic random access memory (DRAM). A shared cache 726 such as an L3 cache may be shared among the processor cores 710-711 and with the accelerator 700 via the cache coherent interface 730. In one implementation, the cores 710-711, MMU 720 and cache coherent interface 730 are integrated on a single processor chip.

The illustrated accelerator 700 includes a data management unit 705 with a cache 707 and a scheduler 706 for scheduling operations to a plurality of processing elements 701-702, N. In the illustrated implementation, each processing element has its own local memory 703-704, N. As described in detail below, each local memory 703-704, N may be implemented as a stacked DRAM.

In one implementation, the cache coherent interface 730 provides cache-coherent connectivity between the cores 710-711 and the accelerator 700, in effect treating the accelerator as a peer of the cores 710-711.
For example, the cache coherent interface 730 may implement a cache coherency protocol to ensure that data accessed/modified by the accelerator 700 and stored in the accelerator cache 707 and/or local memories 703-704, N is coherent with the data stored in the core caches 714-715, the shared cache 726 and the system memory 750. For example, the cache coherent interface 730 may participate in the snooping mechanisms used by the cores 710-711 and MMU 720 to detect the state of cache lines within the shared cache 726 and local caches 714-715 and may act as a proxy, providing snoop updates in response to accesses and attempted modifications to cache lines by the processing elements 701-702, N. In addition, when a cache line is modified by the processing elements 701-702, N, the cache coherent interface 730 may update the status of the cache lines if they are stored within the shared cache 726 or local caches 714-715.

In one implementation, the data management unit 705 includes memory management circuitry providing the accelerator 700 access to system memory 750 and the shared cache 726. In addition, the data management unit 705 may provide updates to the cache coherent interface 730 and receive updates from the cache coherent interface 730 as needed (e.g., to determine state changes to cache lines). In the illustrated implementation, the data management unit 705 includes a scheduler 706 for scheduling instructions/operations to be executed by the processing elements 701-702, N. To perform its scheduling operations, the scheduler 706 may evaluate dependencies between instructions/operations to ensure that instructions/operations are executed in a coherent order (e.g., to ensure that a first instruction executes before a second instruction which is dependent on results from the first instruction).
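The dependency-ordering rule just described can be sketched as a simple wave-based schedule. This is an illustrative functional model only, not the scheduler 706's actual logic; the function and operation names are hypothetical.

```python
# Hypothetical sketch of dependency-ordered scheduling: an operation runs only
# after every operation it depends on has completed, and mutually independent
# operations are grouped into the same "wave" so they can run in parallel.

def schedule(ops, deps):
    """ops: list of operation names; deps: {op: set of ops it depends on}.
    Returns a list of waves; ops within one wave may execute in parallel."""
    remaining = set(ops)
    done = set()
    waves = []
    while remaining:
        # An op is ready once all of its dependencies have finished.
        wave = [op for op in remaining if deps.get(op, set()) <= done]
        if not wave:
            raise ValueError("cyclic dependency")
        waves.append(wave)
        done.update(wave)
        remaining.difference_update(wave)
    return waves

# op2 depends on op1's result, so it lands in a later wave;
# op3 is independent and may run alongside op1.
waves = schedule(["op1", "op2", "op3"], {"op2": {"op1"}})
```

With these inputs, the first wave contains op1 and op3 and the second wave contains only op2, mirroring the first-before-dependent-second ordering described above.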
Instructions/operations which are not inter-dependent may be executed in parallel on the processing elements 701-702, N.

Accelerator Architecture for Matrix and Vector Operations

Figure 8 illustrates another view of accelerator 700 and other components previously described, including a data management unit 705, a plurality of processing elements 701-N, and fast on-chip storage 800 (e.g., implemented using stacked local DRAM in one implementation). In one implementation, the accelerator 700 is a hardware accelerator architecture and the processing elements 701-N include circuitry for performing matrix * vector and vector * vector operations, including operations for sparse/dense matrices. In particular, the processing elements 701-N may include hardware support for column and row-oriented matrix processing and may include microarchitectural support for a "scale and update" operation such as that used in machine learning (ML) algorithms.

The described implementations perform matrix/vector operations which are optimized by keeping frequently used, randomly accessed, potentially sparse (e.g., gather/scatter) vector data in the fast on-chip storage 800, maintaining large, infrequently used matrix data in off-chip memory (e.g., system memory 750) accessed in a streaming fashion whenever possible, and exposing intra/inter matrix block parallelism to scale up.

Implementations of the processing elements 701-N process different combinations of sparse matrices, dense matrices, sparse vectors, and dense vectors. As used herein, a "sparse" matrix or vector is a matrix or vector in which most of the elements are zero. By contrast, a "dense" matrix or vector is a matrix or vector in which most of the elements are non-zero. The "sparsity" of a matrix/vector may be defined based on the number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix).
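The sparsity definition above can be sketched directly. The 0.5 threshold used here is purely illustrative; the text leaves the actual threshold unspecified.

```python
# A minimal sketch of the sparsity definition: the number of zero-valued
# elements divided by the total element count (m * n for an m x n matrix).
# The 0.5 classification threshold is an assumption for illustration only.

def sparsity(matrix):
    total = sum(len(row) for row in matrix)
    zeros = sum(row.count(0) for row in matrix)
    return zeros / total

def is_sparse(matrix, threshold=0.5):
    return sparsity(matrix) > threshold

A = [[0, 0, 3],
     [0, 5, 0],
     [0, 0, 0]]        # 7 of 9 elements are zero
print(sparsity(A))      # ~0.78, so A classifies as sparse here
```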
In one implementation, a matrix/vector is considered "sparse" if its sparsity is above a specified threshold.

An exemplary set of operations performed by the processing elements 701-N is illustrated in the table in Figure 9. In particular, the operation types include a first multiply 900 using a sparse matrix, a second multiply 901 using a dense matrix, a scale and update operation 902, and a dot product operation 903. Columns are provided for a first input operand 910 and a second input operand 911 (each of which may include a sparse or dense matrix/vector); an output format 913 (e.g., dense vector or scalar); a matrix data format (e.g., compressed sparse row, compressed sparse column, row-oriented, etc.); and an operation identifier 914.

The runtime-dominating compute patterns found in some current workloads include variations of matrix multiplication against a vector in row-oriented and column-oriented fashion. They work on well-known matrix formats: compressed sparse row (CSR) and compressed sparse column (CSC). Figure 10a depicts an example of a multiplication between a sparse matrix A against a vector x to produce a vector y. Figure 10b illustrates the CSR representation of matrix A in which each value is stored as a (value, row index) pair. For example, the (3,2) for row 0 indicates that a value of 3 is stored in element position 2 for row 0. Figure 10c illustrates a CSC representation of matrix A which uses a (value, column index) pair. Figures 11a, 11b, and 11c illustrate pseudo code of each compute pattern, which is described below in detail. In particular, Figure 11a illustrates a row-oriented sparse matrix dense vector multiply (spMdV_csr); Figure 11b illustrates a column-oriented sparse matrix sparse vector multiply (spMspV_csc); and Figure 11c illustrates a scale and update operation (scale_update).

A. Row-Oriented Sparse Matrix Dense Vector Multiplication (spMdV_csr)

This is a well-known compute pattern that is important in many application domains such as high-performance computing. Here, for each row of matrix A, a dot product of that row against vector x is performed, and the result is stored in the y vector element pointed to by the row index. This computation is used in a machine-learning (ML) algorithm that performs analysis across a set of samples (i.e., rows of the matrix). It may be used in techniques such as "mini-batch." There are also cases where ML algorithms perform only a dot product of a sparse vector against a dense vector (i.e., an iteration of the spMdV_csr loop), such as in the stochastic variants of learning algorithms.

A known factor that can affect performance on this computation is the need to randomly access sparse x vector elements in the dot product computation. For a conventional server system, when the x vector is large, this would result in irregular accesses (gather) to memory or last level cache.

To address this, one implementation of a processing element divides matrix A into column blocks and the x vector into multiple subsets (each corresponding to an A matrix column block). The block size can be chosen so that the x vector subset can fit on chip. Hence, random accesses to it can be localized on-chip.

B. Column-Oriented Sparse Matrix Sparse Vector Multiplication (spMspV_csc)

This pattern, which multiplies a sparse matrix against a sparse vector, is not as well known as spMdV_csr, but it is important in some ML algorithms. It is used when an algorithm works on a set of features, which are represented as matrix columns in the dataset (hence, the need for column-oriented matrix accesses).

In this compute pattern, each column of the matrix A is read and multiplied against the corresponding non-zero element of vector x. The result is used to update partial dot products that are kept at the y vector.
After all the columns associated with non-zero x vector elements have been processed, the y vector will contain the final dot products.

While accesses to matrix A are regular (i.e., streaming in columns of A), the accesses to the y vector to update the partial dot products are irregular. The y element to access depends on the row index of the matrix A element being processed. To address this, the matrix A can be divided into row blocks. Consequently, the vector y can be divided into subsets corresponding to these blocks. This way, when processing a matrix row block, it only needs to irregularly access (gather/scatter) its y vector subset. By choosing the block size properly, the y vector subset can be kept on-chip.

C. Scale and Update (scale_update)

This pattern is typically used by ML algorithms to apply scaling factors to each sample in the matrix and reduce them into a set of weights, each corresponding to a feature (i.e., a column in A). Here, the x vector contains the scaling factors. For each row of matrix A (in CSR format), the scaling factors for that row are read from the x vector, and then applied to each element of A in that row. The result is used to update the element of the y vector. After all rows have been processed, the y vector contains the reduced weights.

Similar to the prior compute patterns, the irregular accesses to the y vector could affect performance when y is large. Dividing matrix A into column blocks and the y vector into multiple subsets corresponding to these blocks can help localize the irregular accesses within each y subset.

One implementation includes a hardware accelerator 700 that can efficiently perform the compute patterns discussed above. The accelerator 700 is a hardware IP block that can be integrated with general purpose processors, similar to those found in existing accelerator-based solutions (e.g., IBM® PowerEN, Oracle® M7).
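Assuming the (value, index) pair encodings described above for CSR and CSC, the three compute patterns A-C might be sketched as follows. This is a functional model for illustration, not the accelerator's datapath; the matrix values are made up.

```python
# CSR: one list per row of (value, column_index) pairs.
# CSC: one list per column of (value, row_index) pairs.

def spMdV_csr(A_csr, x):
    """A. Row-oriented sparse matrix * dense vector: y[row] = dot(A[row], x)."""
    y = [0.0] * len(A_csr)
    for row, entries in enumerate(A_csr):
        for val, col in entries:
            y[row] += val * x[col]      # random (gather) access to x
    return y

def spMspV_csc(A_csc, x_sparse, nrows):
    """B. Column-oriented sparse matrix * sparse vector; x_sparse: {col: value}.
    Only columns matching non-zero x elements are processed."""
    y = [0.0] * nrows
    for col, xval in x_sparse.items():
        for val, row in A_csc[col]:
            y[row] += val * xval        # irregular (gather/scatter) access to y
    return y

def scale_update(A_csr, x, y):
    """C. Scale each CSR row by its x factor; reduce into per-column weights y."""
    for row, entries in enumerate(A_csr):
        for val, col in entries:
            y[col] += val * x[row]
    return y

# Illustrative 3x3 matrix: 3 at (0,2), 5 at (1,1), 2 at (2,0), 1 at (2,2).
A_csr = [[(3, 2)], [(5, 1)], [(2, 0), (1, 2)]]
A_csc = [[(2, 2)], [(5, 1)], [(3, 0), (1, 2)]]
x = [1.0, 2.0, 3.0]
print(spMdV_csr(A_csr, x))              # [9.0, 10.0, 5.0]
```

Note how the gather into x (pattern A) and the scatter into y (patterns B and C) are exactly the irregular accesses that the column- and row-blocking schemes above aim to keep on-chip.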
In one implementation, the accelerator 700 independently accesses memory 750 through an interconnect shared with the processors to perform the compute patterns. It supports any arbitrarily large matrix datasets that reside in off-chip memory.

Figure 12 illustrates the processing flow for one implementation of the data management unit 705 and the processing elements 701-702. In this implementation, the data management unit 705 includes a processing element scheduler 1201, a read buffer 1202, a write buffer 1203 and a reduction unit 1204. Each PE 701-702 includes an input buffer 1205-1206, a multiplier 1207-1208, an adder 1209-1210, a local RAM 1221-1222, a sum register 1211-1212, and an output buffer 1213-1214.

The accelerator supports the matrix blocking schemes discussed above (i.e., row and column blocking) to support any arbitrarily large matrix data. The accelerator is designed to process a block of matrix data. Each block is further divided into sub-blocks which are processed in parallel by the PEs 701-702.

In operation, the data management unit 705 reads the matrix rows or columns from the memory subsystem into its read buffer 1202, which is then dynamically distributed by the PE scheduler 1201 across PEs 701-702 for processing. It also writes results to memory from its write buffer 1203.

Each PE 701-702 is responsible for processing a matrix sub-block. A PE contains an on-chip RAM 1221-1222 to store the vector that needs to be accessed randomly (i.e., a subset of the x or y vector, as described above).
It also contains a floating point multiply-accumulate (FMA) unit including multiplier 1207-1208 and adder 1209-1210, unpack logic within input buffers 1205-1206 to extract matrix elements from input data, and a sum register 1211-1212 to keep the accumulated FMA results.

One implementation of the accelerator achieves extreme efficiencies because (1) it places irregularly accessed (gather/scatter) data in the on-chip PE RAMs 1221-1222, (2) it utilizes a hardware PE scheduler 1201 to ensure PEs are well utilized, and (3) unlike general purpose processors, the accelerator consists of only the hardware resources that are essential for sparse matrix operations. Overall, the accelerator efficiently converts the available memory bandwidth provided to it into performance.

Scaling of performance can be done by employing more PEs in an accelerator block to process multiple matrix sub-blocks in parallel, and/or employing more accelerator blocks (each of which has a set of PEs) to process multiple matrix blocks in parallel. A combination of these options is considered below. The number of PEs and/or accelerator blocks should be tuned to match the memory bandwidth.

One implementation of the accelerator 700 can be programmed through a software library (similar to the Intel® Math Kernel Library). Such a library prepares the matrix data in memory, sets control registers in the accelerator 700 with information about the computation (e.g., computation type, memory pointer to matrix data), and starts the accelerator. Then, the accelerator independently accesses matrix data in memory, performs the computation, and writes the results back to memory for the software to consume.

The accelerator handles the different compute patterns by setting its PEs to the proper datapath configuration, as depicted in Figures 13a-13b. In particular, Figure 13a highlights paths (using dotted lines) for spMspV_csc and scale_update operations and Figure 13b illustrates paths for a spMdV_csr operation.
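The library-driven flow described above (prepare matrix data in memory, set control registers describing the computation, start the accelerator, consume results) might look like the following sketch. The register names, computation-type codes, and the `AcceleratorModel` class are all hypothetical; the source does not specify a register map.

```python
# Hypothetical sketch of the software-library programming flow. A real device
# would DMA the matrix from memory; this functional stand-in just computes
# spMdV_csr when that computation type is selected.

SPMDV_CSR = 0        # illustrative computation-type codes
SPMSPV_CSC = 1
SCALE_UPDATE = 2

class AcceleratorModel:
    """Functional stand-in for the accelerator's control-register interface."""
    def __init__(self):
        self.regs = {}
        self.result = None
    def write_reg(self, name, value):
        self.regs[name] = value
    def start(self):
        if self.regs["COMP_TYPE"] == SPMDV_CSR:
            A, x = self.regs["MATRIX_PTR"], self.regs["VECTOR_PTR"]
            # Row-oriented sparse matrix * dense vector over (value, col) pairs.
            self.result = [sum(v * x[c] for v, c in row) for row in A]

acc = AcceleratorModel()
acc.write_reg("COMP_TYPE", SPMDV_CSR)
acc.write_reg("MATRIX_PTR", [[(3, 2)], [(5, 1)]])   # CSR (value, col) pairs
acc.write_reg("VECTOR_PTR", [1.0, 2.0, 3.0])
acc.start()
print(acc.result)    # [9.0, 10.0]
```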
The accelerator operation to perform each compute pattern is detailed below.

For spMspV_csc, the initial y vector subset is loaded into the PE's RAM 1221 by the DMU 705. It then reads x vector elements from memory. For each x element, the DMU 705 streams the elements of the corresponding matrix column from memory and supplies them to the PE 701. Each matrix element contains a value (A.val) and an index (A.idx) which points to the y element to read from the PE's RAM 1221. The DMU 705 also provides the x vector element (x.val) that is multiplied against A.val by the multiply-accumulate (FMA) unit. The result is used to update the y element in the PE's RAM pointed to by A.idx. Note that even though not used by our workloads, the accelerator also supports column-wise multiplication against a dense x vector (spMdV_csc) by processing all matrix columns instead of only a subset (since x is dense).

The scale_update operation is similar to spMspV_csc, except that the DMU 705 reads the rows of an A matrix represented in a CSR format instead of a CSC format. For spMdV_csr, the x vector subset is loaded into the PE's RAM 1221. The DMU 705 streams in matrix row elements (i.e., {A.val,A.idx} pairs) from memory. A.idx is used to read the appropriate x vector element from RAM 1221, which is multiplied against A.val by the FMA. Results are accumulated into the sum register 1212. The sum register is written to the output buffer each time a PE sees a marker indicating an end of a row, which is supplied by the DMU 705. In this way, each PE produces a sum for the row sub-block it is responsible for. To produce the final sum for the row, the sub-block sums produced by all the PEs are added together by the reduction unit 1204 in the DMU (see Figure 12). The final sums are written to the output buffer 1213-1214, which the DMU 705 then writes to memory.

Graph Data Processing

In one implementation, the accelerator architectures described herein are configured to process graph data.
Graph analytics relies on graph algorithms to extract knowledge about the relationships among data represented as graphs. The proliferation of graph data (from sources such as social media) has led to strong demand for and wide use of graph analytics. As such, being able to perform graph analytics as efficiently as possible is of critical importance.

To address this need, one implementation automatically maps a user-defined graph algorithm to a hardware accelerator architecture "template" that is customized to the given input graph algorithm. The accelerator may comprise the architectures described above and may be implemented as an FPGA/ASIC, which can execute with extreme efficiency. In summary, one implementation includes:

(1) a hardware accelerator architecture template that is based on a generalized sparse matrix vector multiply (GSPMV) accelerator. It supports arbitrary graph algorithms because it has been shown that graph algorithms can be formulated as matrix operations.

(2) an automatic approach to map and tune a widely-used "vertex centric" graph programming abstraction to the architecture template.

There are existing sparse matrix multiply hardware accelerators, but they do not support customizability to allow mapping of graph algorithms.

One implementation of the design framework operates as follows.

(1) A user specifies a graph algorithm as "vertex programs" following the vertex-centric graph programming abstraction. This abstraction is chosen as an example here due to its popularity. A vertex program does not expose hardware details, so users without hardware expertise (e.g., data scientists) can create it.

(2) Along with the graph algorithm in (1), one implementation of the framework accepts the following inputs:

a. The parameters of the target hardware accelerator to be generated (e.g., max amount of on-chip RAMs).
These parameters may be provided by a user, or obtained from an existing library of known parameters when targeting an existing system (e.g., a particular FPGA board).

b. Design optimization objectives (e.g., max performance, min area).

c. The properties of the target graph data (e.g., type of graph) or the graph data itself. This is optional, and is used to aid in automatic tuning.

(3) Given the above inputs, one implementation of the framework performs auto-tuning to determine the set of customizations to apply to the hardware template to optimize for the input graph algorithm, maps these parameters onto the architecture template to produce an accelerator instance in synthesizable RTL, and conducts functional and performance validation of the generated RTL against the functional and performance software models derived from the input graph algorithm specification.

In one implementation, the accelerator architecture described above is extended to support execution of vertex programs by (1) making it a customizable hardware template and (2) supporting the functionalities needed by vertex programs. Based on this template, a design framework is described that maps a user-supplied vertex program to the hardware template to produce a synthesizable RTL (e.g., Verilog) implementation instance optimized for the vertex program. The framework also performs automatic validation and tuning to ensure the produced RTL is correct and optimized. There are multiple use cases for this framework. For example, the produced synthesizable RTL can be deployed in an FPGA platform (e.g., Xeon-FPGA) to efficiently execute the given vertex program. Or, it can be refined further to produce an ASIC implementation.

It has been shown that graphs can be represented as adjacency matrices, and graph processing can be formulated as sparse matrix operations. Figures 14a-14b show an example of representing a graph as an adjacency matrix. Each non-zero in the matrix represents an edge between two nodes in the graph.
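Building an adjacency matrix from an edge list can be sketched as follows; the node labels and edge set are illustrative, chosen so that the entry at row 0, column 2 corresponds to an edge from node A to node C.

```python
# Sketch of the graph-as-adjacency-matrix representation: element {i, j}
# holds the weight of the edge from vertex i to vertex j, and absent edges
# are zero. Nodes and edges below are made up for illustration.

nodes = ["A", "B", "C"]
edges = [("A", "C", 1), ("B", "A", 1), ("C", "B", 1)]   # (src, dst, weight)

idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)
adj = [[0] * n for _ in range(n)]
for src, dst, w in edges:
    adj[idx[src]][idx[dst]] = w

# The non-zero at row 0, column 2 encodes the edge A -> C; since each vertex
# connects to few others, most entries stay zero (the matrix is sparse).
print(adj[0][2])   # 1
```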
For example, a 1 in row 0, column 2 represents an edge from node A to C.

One of the most popular models for describing computations on graph data is the vertex programming model. One implementation supports the vertex programming model variant from the Graphmat software framework, which formulates vertex programs as generalized sparse matrix vector multiply (GSPMV). As shown in Figure 14c, a vertex program consists of the types of data associated with edges/vertices in the graph (edata/vdata), messages sent across vertices in the graph (mdata), and temporary data (tdata) (illustrated in the top portion of the program code); and stateless user-defined compute functions using pre-defined APIs that read and update the graph data (as illustrated in the bottom portion of the program code). Figure 14d illustrates exemplary program code for executing a vertex program. Edge data is represented as an adjacency matrix A (as in Figure 14b), vertex data as vector y, and messages as sparse vector x. Figure 14e shows the GSPMV formulation, where the multiply() and add() operations in SpMV are generalized by user-defined PROCESS_MSG() and REDUCE().

One observation here is that the GSPMV variant needed to execute vertex programs performs a column-oriented multiplication of sparse matrix A (i.e., the adjacency matrix) against a sparse vector x (i.e., the messages) to produce an output vector y (i.e., the vertex data). This operation is referred to as col_spMspV (previously described with respect to the above accelerator).

Design Framework

One implementation of the framework is shown in Figure 15, which includes a template mapping component 1511, a validation component 1512 and an automatic tuning component 1513. Its inputs are a user-specified vertex program 1501, design optimization goals 1503 (e.g., max performance, min area), and target hardware design constraints 1502 (e.g., maximum amount of on-chip RAMs, memory interface width).
As an optional input to aid automatic tuning, the framework also accepts graph data properties 1504 (e.g., type=natural graph) or sample graph data.

Given these inputs, the template mapping component 1511 of the framework maps the input vertex program to a hardware accelerator architecture template, and produces an RTL implementation 1505 of the accelerator instance optimized for executing the vertex program 1501. The automatic tuning component 1513 performs automatic tuning to optimize the generated RTL for the given design objectives, while meeting the hardware design constraints. Furthermore, the validation component 1512 automatically validates the generated RTL against functional and performance models derived from the inputs. Validation test benches 1506 and tuning reports 1507 are produced along with the RTL.

Generalized Sparse Matrix Vector Multiply (GSPMV) Hardware Architecture Template

One implementation of an architecture template for GSPMV is shown in Figure 16, which is based on the accelerator architecture described above (see, e.g., Figure 12 and associated text). Many of the components illustrated in Figure 16 are customizable (as highlighted with grey lines). In one implementation, the architecture to support execution of vertex programs has been extended as follows.

As illustrated in Figure 16, customizable logic blocks are provided inside each PE to support PROCESS_MSG() 1610, REDUCE() 1611, APPLY() 1612, and SEND_MSG() 1613 needed by the vertex program. In addition, one implementation provides customizable on-chip storage structures and pack/unpack logic 1605 to support user-defined graph data (i.e., vdata, edata, mdata, tdata). The data management unit 705 illustrated in Figure 16 includes a PE scheduler 1201 (for scheduling PEs as described above), aux buffers 1601 (for storing the active column and x data), a read buffer 1202, a memory controller 1603 for controlling access to system memory, and a write buffer 1203.
In addition, in the implementation shown in Figure 16, old and new vdata and tdata are stored within the local PE memory 1221. Various control state machines may be modified to support executing vertex programs, abiding by the functionalities specified by the algorithms in Figures 14d and 14e.

The operation of each accelerator tile is summarized in Figure 17. At 1701, the y vector (vdata) is loaded to the PE RAM 1221. At 1702, the x vector and column pointers are loaded to the aux buffer 1601. At 1703, for each x vector element, the A column is streamed in (edata) and the PEs execute PROCESS_MSG() 1610 and REDUCE() 1611. At 1704, the PEs execute APPLY() 1612. At 1705, the PEs execute SEND_MSG() 1613, producing messages, and the data management unit 705 writes them as x vectors in memory. At 1706, the data management unit 705 writes the updated y vectors (vdata) stored in the PE RAMs 1221 back to memory. The above techniques conform to the vertex program execution algorithm shown in Figures 14d and 14e.

To scale up performance, the architecture allows increasing the number of PEs in a tile and/or the number of tiles in the design. This way, the architecture can take advantage of multiple levels of parallelism in the graph (i.e., across subgraphs (across blocks of the adjacency matrix) or within each subgraph). The table in Figure 18a summarizes the customizable parameters of one implementation of the template. It is also possible to assign asymmetric parameters across tiles for optimization (e.g., one tile with more PEs than another tile).

Automatic Mapping, Validation, and Tuning

Tuning. Based on the inputs, one implementation of the framework performs automatic tuning to determine the best design parameters to use to customize the hardware architecture template in order to optimize it for the input vertex program and (optionally) graph data. There are many tuning considerations, which are summarized in the table in Figure 18b.
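The GSPMV formulation underlying the tile operation (steps 1701-1706) can be sketched as a column-oriented SpMV in which multiply() and add() are replaced by user-defined PROCESS_MSG() and REDUCE(), followed by APPLY(). This is an illustrative functional model; SEND_MSG() is omitted for brevity, and the callbacks shown simply reproduce plain SpMV.

```python
# Sketch of generalized sparse matrix vector multiply (GSPMV): the callbacks
# process_msg/reduce_fn/apply_fn stand in for the user-defined PROCESS_MSG(),
# REDUCE(), and APPLY() compute blocks of the vertex program.

def gspmv(A_csc, x_sparse, y, process_msg, reduce_fn, apply_fn):
    """A_csc: per-column list of (edata, row) pairs; x_sparse: {col: mdata};
    y: vertex data (vdata), updated in place."""
    tmp = {}
    # Step 1703: stream each active column; PROCESS_MSG then REDUCE.
    for col, msg in x_sparse.items():
        for edata, row in A_csc[col]:
            t = process_msg(msg, edata)
            tmp[row] = reduce_fn(tmp[row], t) if row in tmp else t
    # Step 1704: APPLY the reduced value to each vertex's state.
    for row, t in tmp.items():
        y[row] = apply_fn(y[row], t)
    return y

# With multiply/add callbacks this degenerates to ordinary col_spMspV.
A_csc = [[(2, 1)], [(5, 0)], [(3, 2)]]
y = gspmv(A_csc, {1: 4}, [0, 0, 0],
          process_msg=lambda m, e: m * e,
          reduce_fn=lambda a, b: a + b,
          apply_fn=lambda v, t: v + t)
print(y)   # [20, 0, 0]
```

Swapping in different callbacks (e.g., min/plus for shortest paths) is what lets the same template support arbitrary vertex programs.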
As illustrated in Figure 18b, these include the locality of data, graph data sizes, graph compute functions, graph data structure, graph data access attributes, graph data types, and graph data patterns.

Template Mapping. In this phase, the framework takes the template parameters determined by the tuning phase, and produces an accelerator instance by "filling" in the customizable portions of the template. The user-defined compute functions (e.g., Figure 14c) may be mapped from the input specification to the appropriate PE compute blocks using existing High-Level Synthesis (HLS) tools. The storage structures (e.g., RAMs, buffers, cache) and memory interfaces are instantiated using their corresponding design parameters. The pack/unpack logic may automatically be generated from the data type specifications (e.g., Figure 14c). Parts of the control finite state machines (FSMs) are also generated based on the provided design parameters (e.g., PE scheduling schemes).

Validation. In one implementation, the accelerator architecture instance (synthesizable RTL) produced by the template mapping is then automatically validated. To do this, one implementation of the framework derives a functional model of the vertex program to be used as the "golden" reference. Test benches are generated to compare the execution of this golden reference against simulations of the RTL implementation of the architecture instance. The framework also performs performance validation by comparing RTL simulations against an analytical performance model and a cycle-accurate software simulator. It reports the runtime breakdown and pinpoints the bottlenecks of the design that affect performance.

Accelerator Architecture for Processing Sparse Data

INTRODUCTION

Computations on sparse datasets - vectors or matrices most of whose values are zero - are critical to an increasing number of commercially-important applications, but typically achieve only a few percent of peak performance when run on today's CPUs.
In the scientific computing arena, sparse-matrix computations have been key kernels of linear solvers for decades. More recently, the explosive growth of machine learning and graph analytics has moved sparse computations into the mainstream. Sparse-matrix computations are central to many machine-learning applications and form the core of many graph algorithms.

Sparse-matrix computations tend to be memory bandwidth-limited rather than compute-limited, making it difficult for changes in CPU design to improve their performance. They execute few operations per matrix data element and often iterate over an entire matrix before re-using any data, making caches ineffective. In addition, many sparse-matrix algorithms contain significant numbers of data-dependent gathers and scatters, such as the result[row] += matrix[row][i].value * vector[matrix[row][i].index] operation found in sparse matrix-vector multiplication, which are hard to predict and reduce the effectiveness of prefetchers.

To deliver better sparse-matrix performance than conventional microprocessors, a system must provide significantly higher memory bandwidth than current CPUs and a very energy-efficient computing architecture. Increasing memory bandwidth makes it possible to improve performance, but the high energy/bit cost of DRAM accesses limits the amount of power available to process that bandwidth. Without an energy-efficient compute architecture, a system might find itself in the position of being unable to process the data from a high-bandwidth memory system without exceeding its power budget.

One implementation comprises an accelerator for sparse-matrix computations which uses stacked DRAM to provide the bandwidth that sparse-matrix algorithms require, combined with a custom compute architecture to process that bandwidth in an energy-efficient manner.

SPARSE-MATRIX OVERVIEW

Many applications create data sets where the vast majority of the values are zero.
Finite-element methods model objects as a mesh of points where the state of each point is a function of the state of the points near it in the mesh. Mathematically, this becomes a system of equations that is represented as a matrix where each row describes the state of one point and the values in the row are zero for all of the points that do not directly affect the state of the point the row describes. Graphs can be represented as an adjacency matrix, where each element {i,j} in the matrix gives the weight of the edge between vertices i and j in the graph. Since most vertices connect to only a small fraction of the other vertices in the graph, the vast majority of the elements in the adjacency matrix are zeroes. In machine learning, models are typically trained using datasets that consist of many samples, each of which contains a set of features (observations of the state of a system or object) and the desired output of the model for that set of features. It is very common for most of the samples to only contain a small subset of the possible features, for example when the features represent different words that might be present in a document, again creating a dataset where most of the values are zero.

Datasets where most of the values are zero are described as "sparse," and it is very common for sparse datasets to be extremely sparse, having non-zero values in less than 1% of their elements. These datasets are often represented as matrices, using data structures that only specify the values of the non-zero elements in the matrix. While this increases the amount of space required to represent each non-zero element, since it is necessary to specify both the element's location and its value, the overall space (memory) savings can be substantial if the matrix is sparse enough.
For example, one of the most straightforward representations of a sparse matrix is the coordinate list (COO) representation, in which each non-zero is specified by a {row index, column index, value} tuple. While this triples the amount of storage required for each non-zero value, if only 1% of the elements in a matrix have non-zero values, the COO representation will take up only 3% of the space that a dense representation (one that represents the value of each element in the matrix) would take. Figure 19 illustrates one of the most common sparse-matrix formats, the compressed row storage (CRS, sometimes abbreviated CSR) format. In CRS format, the matrix 1900 is described by three arrays: a values array 1901, which contains the values of the non-zero elements; an indices array 1902, which specifies the position of each non-zero element within its row of the matrix; and a row starts array 1903, which specifies where each row of the matrix starts in the lists of indices and values. Thus, the first non-zero element of the second row of the example matrix can be found at position 2 in the indices and values arrays, and is described by the tuple {0, 7}, indicating that the element occurs at position 0 within the row and has value 7. Other commonly-used sparse-matrix formats include compressed sparse column (CSC), which is the column-major dual to CRS, and ELLPACK, which represents each row of the matrix as a fixed-width list of non-zero values and their indices, padding with explicit zeroes when a row has fewer non-zero elements than the longest row in the matrix.

Computations on sparse matrices have the same structure as their dense-matrix counterparts, but the nature of sparse data tends to make them much more bandwidth-intensive than their dense-matrix counterparts. For example, both the sparse and dense variants of matrix-matrix multiplication find C = A · B by computing Ci,j = Ai,· · B·,j (the dot-product of row i of A with column j of B) for all i, j.
In a dense matrix-matrix computation, this leads to substantial data re-use, because each element of A participates in N multiply-add operations (assuming N × N matrices), as does each element of B. As long as the matrix-matrix multiplication is blocked for cache locality, this re-use causes the computation to have a low bytes/op ratio and to be compute-limited. However, in the sparse variant, each element of A only participates in as many multiply-add operations as there are non-zero values in the corresponding row of B, while each element of B only participates in as many multiply-adds as there are non-zero elements in the corresponding column of A. As the sparseness of the matrices increases, so does the bytes/op ratio, making the performance of many sparse matrix-matrix computations limited by memory bandwidth in spite of the fact that dense matrix-matrix multiplication is one of the canonical compute-bound computations.

Four operations make up the bulk of the sparse-matrix computations seen in today's applications: sparse matrix-dense vector multiplication (SpMV), sparse matrix-sparse vector multiplication, sparse matrix-sparse matrix multiplication, and relaxation/smoother operations, such as the Gauss-Seidel smoother used in Intel's implementation of the High-Performance Conjugate Gradient benchmark. These operations share two characteristics that make a sparse-matrix accelerator practical. First, they are dominated by vector dot-products, which makes it possible to implement simple hardware that can implement all four important computations. For example, a matrix-vector multiplication is performed by taking the dot-product of each row in the matrix with the vector, while a matrix-matrix multiplication takes the dot-product of each row of one matrix with each column of the other.
Second, applications generally perform multiple computations on the same matrix, such as the thousands of multiplications of the same matrix by different vectors that a support vector machine algorithm performs when training a model. This repeated use of the same matrix makes it practical to transfer matrices to/from an accelerator during program execution and/or to re-format the matrix in a way that simplifies the hardware's task, since the cost of data transfers/transformations can be amortized across many operations on each matrix.

Sparse-matrix computations typically achieve only a few percent of the peak performance of the system they run on. To demonstrate why this occurs, Figure 20 shows the steps 2001-2004 involved in an implementation of sparse matrix-dense vector multiplication using the CRS data format. First, at 2001, the data structure that represents a row of the matrix is read out of memory, which usually involves a set of sequential reads that are easy to predict and prefetch. Second, at 2002, the indices of the non-zero elements in the matrix row are used to gather the corresponding elements of the vector, which requires a number of data-dependent, hard-to-predict memory accesses (a gather operation). Moreover, these memory accesses often touch only one or two words in each referenced cache line, resulting in significant wasted bandwidth when the vector does not fit in the cache. Third, at 2003, the processor computes the dot-product of the non-zero elements of the matrix row and the corresponding elements of the vector. Finally, at 2004, the result of the dot-product is written into the result vector, which is also accessed sequentially, and the program proceeds to the next row of the matrix.
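The four steps above can be modeled in a short software sketch. This is an illustrative Python model of the algorithm, not the processor's actual instruction sequence, and the example matrix is an assumption rather than the matrix of Figure 19:

```python
# Software model of SpMV steps 2001-2004 over the CRS format described
# earlier (values, within-row column indices, and row-start offsets).

def spmv_crs(values, indices, row_starts, x):
    result = []
    for r in range(len(row_starts) - 1):
        # 2001: read the row's slice of the index/value arrays (sequential,
        # easy to prefetch).
        lo, hi = row_starts[r], row_starts[r + 1]
        acc = 0.0
        for k in range(lo, hi):
            # 2002: data-dependent gather of x[indices[k]];
            # 2003: multiply-add into the row's running dot-product.
            acc += values[k] * x[indices[k]]
        # 2004: sequential write into the result vector.
        result.append(acc)
    return result

# An illustrative 3x4 matrix in CRS form (not the Figure 19 example).
values     = [1.0, 2.0, 7.0, 5.0]
indices    = [1, 3, 0, 2]
row_starts = [0, 2, 3, 4]
```

Note how the only hard-to-predict accesses are the gathers `x[indices[k]]`; everything else streams sequentially, which is exactly the behavior the text attributes to steps 2001 and 2004.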
Note that this is a conceptual/algorithmic view of the computation, and the exact sequence of operations the program executes will depend on the processor's ISA and vector width.

This example illustrates a number of important characteristics of sparse-matrix computations. Assuming 32-bit data types and that neither the matrix nor the vector fit in the cache, computing the first element of the output row requires reading 36 bytes from DRAM but executing only five compute instructions (three multiplies and two adds), for a bytes/op ratio of 7.2:1.

Memory bandwidth is not the only challenge to high-performance sparse-matrix computations, however. As Figure 20 shows, the accesses to the vector in SpMV are data-dependent and hard to predict, exposing the latency of vector accesses to the application. If the vector does not fit in the cache, SpMV performance becomes sensitive to DRAM latency as well as bandwidth unless the processor provides enough parallelism to saturate the DRAM bandwidth even when many threads are stalled waiting for data.

Thus, an architecture for sparse-matrix computations must provide several things to be effective. It must deliver high memory bandwidth to meet the bytes/op needs of sparse computations. It must also support high-bandwidth gathers out of large vectors that may not fit in the cache. Finally, while performing enough arithmetic operations/second to keep up with DRAM bandwidth is not a challenge in and of itself, the architecture must perform those operations and all of the memory accesses they require in an energy-efficient manner in order to remain within system power budgets.

IMPLEMENTATIONS

One implementation comprises an accelerator designed to provide the three features necessary for high sparse-matrix performance: high memory bandwidth, high-bandwidth gathers out of large vectors, and energy-efficient computation.
As illustrated in Figure 21, one implementation of the accelerator includes an accelerator logic die 2105 and one or more stacks 2101-2104 of DRAM die. Stacked DRAM, which is described in more detail below, provides high memory bandwidth at low energy/bit. For example, stacked DRAMs are expected to deliver 256-512 GB/sec at 2.5 pJ/bit, while LPDDR4 DIMMs are only expected to deliver 68 GB/sec and will have an energy cost of 12 pJ/bit.

The accelerator logic chip 2105 at the bottom of the accelerator stack is customized to the needs of sparse-matrix computations, and is able to consume the bandwidth offered by a DRAM stack 2101-2104 while only expending 2-4 Watts of power, with energy consumption proportional to the bandwidth of the stack. To be conservative, a stack bandwidth of 273 GB/sec is assumed (the expected bandwidth of WIO3 stacks) for the remainder of this application. Designs based on higher-bandwidth stacks would incorporate more parallelism in order to consume the memory bandwidth.

Figure 22 illustrates one implementation of the accelerator logic chip 2105, oriented from a top perspective through the stack of DRAM die 2101-2104. The stack DRAM channel blocks 2205 towards the center of the diagram represent the through-silicon vias that connect the logic chip 2105 to the DRAMs 2101-2104, while the memory controller blocks 1210 contain the logic that generates the control signals for the DRAM channels. While eight DRAM channels 2205 are shown in the figure, the actual number of channels implemented on an accelerator chip will vary depending on the stacked DRAMs used. Most of the stack DRAM technologies being developed provide either four or eight channels.

The dot-product engines (DPEs) 2220 are the computing elements of the architecture. In the particular implementation shown in Figure 22, each set of eight DPEs is associated with a vector cache 2215.
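As a back-of-the-envelope check of these figures, memory interface power can be approximated as bandwidth times energy per bit (a simplifying assumption; real interfaces also have fixed power components):

```python
# Rough check: interface power ~= bandwidth (bits/sec) x energy per bit.
def interface_watts(gb_per_sec, pj_per_bit):
    return gb_per_sec * 1e9 * 8 * pj_per_bit * 1e-12

stacked_dram = interface_watts(273, 2.5)  # assumed WIO3-class stack: ~5.5 W
lpddr4 = interface_watts(68, 12)          # LPDDR4: ~6.5 W for 1/4 the bandwidth
```

Under this approximation, the stack moves roughly four times the data of the LPDDR4 DIMMs for comparable interface power, which is the energy/bit advantage the text describes.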
Figure 23 provides a high-level overview of a DPE which contains two buffers 2305-2306, two 64-bit multiply-add ALUs 2310, and control logic 2300. During computations, the chip control unit streams chunks of the data being processed into the buffer memories 2305-2306. Once each buffer is full, the DPE's control logic sequences through the buffers, computing the dot-products of the vectors they contain and writing the results out to the DPE's result latch, which is connected in a daisy-chain with the result latches of the other DPEs to write the result of a computation back to the stack DRAM 2101-2104.

In one implementation, the accelerator logic chip 2105 operates at approximately 1 GHz and 0.65V to minimize power consumption (although the particular operating frequency and voltage may be modified for different applications). Analysis based on 14nm design studies shows that 32-64 KB buffers meet this frequency spec at that voltage, although strong ECC may be required to prevent soft errors. The multiply-add unit may be operated at half of the base clock rate in order to meet timing with a 0.65V supply voltage and shallow pipeline. Having two ALUs provides a throughput of one double-precision multiply-add/cycle per DPE.

At 273 GB/second and a clock rate of 1.066 GHz, the DRAM stack 2101-2104 delivers 256 bytes of data per logic chip clock cycle. Assuming that array indices and values are at least 32-bit quantities, this translates to 32 sparse-matrix elements per cycle (4 bytes of index + 4 bytes of value = 8 bytes/element), requiring that the chip perform 32 multiply-adds per cycle to keep up.
(This is for matrix-vector multiplication and assumes a high hit rate in the vector cache so that 100% of the stack DRAM bandwidth is used to fetch the matrix.) The 64 DPEs shown in Figure 22 provide 2-4x the required compute throughput, allowing the chip to process data at the peak stack DRAM bandwidth even if the ALUs 2310 are not used 100% of the time.

In one implementation, the vector caches 2215 cache elements of the vector in a matrix-vector multiplication. This significantly increases the efficiency of the matrix-blocking scheme described below. In one implementation, each vector cache block contains 32-64KB of cache, for a total capacity of 256-512KB in an eight-channel architecture.

The chip control unit 2201 manages the flow of a computation and handles communication with the other stacks in an accelerator and with other sockets in the system. To reduce complexity and power consumption, the dot-product engines never request data from memory. Instead, the chip control unit 2201 manages the memory system, initiating transfers that push the appropriate blocks of data to each of the DPEs.

In one implementation, the stacks in a multi-stack accelerator communicate with each other via a network of KTI links 2230 that is implemented using the neighbor connections 2231 shown in the figure. The chip also provides three additional KTI links that are used to communicate with the other socket(s) in a multi-socket system. In a multi-stack accelerator, only one of the stacks' off-package KTI links 2230 will be active. KTI transactions that target memory on the other stacks will be routed to the appropriate stack over the on-package KTI network.

Implementing Sparse-matrix Operations

In this section, we describe the techniques and hardware required to implement sparse matrix-dense vector and sparse matrix-sparse vector multiplication on one implementation of the accelerator.
This design is also extended to support matrix-matrix multiplication, relaxation operations, and other important functions to create an accelerator that supports all of the key sparse-matrix operations.

While sparse-sparse and sparse-dense matrix-vector multiplications execute the same basic algorithm (taking the dot-product of each row in the matrix and the vector), there are significant differences in how this algorithm is implemented when the vector is sparse as compared to when it is dense, which are summarized in Table 1 below.

TABLE 1

                                            Sparse Vector      Dense Vector
  Size of vector                            Typically small    Often large (5-10% of matrix size)
  Location of vector elements               Unpredictable      Determined by index
  Number of operations per matrix element   Unpredictable      Fixed

In a sparse matrix-dense vector multiplication, the size of the vector is fixed and equal to the number of columns in the matrix. Since many of the matrices found in scientific computations average approximately 10 non-zero elements per row, it is not uncommon for the vector in a sparse matrix-dense vector multiplication to take up 5-10% as much space as the matrix itself. Sparse vectors, on the other hand, are often fairly short, containing similar numbers of non-zero values to the rows of the matrix, which makes them much easier to cache in on-chip memory.

In a sparse matrix-dense vector multiplication the location of each element in the vector is determined by its index, making it feasible to gather the vector elements that correspond to the non-zero values in a region of the matrix and to pre-compute the set of vector elements that need to be gathered for any dense vector that the matrix will be multiplied by. The location of each element in a sparse vector, however, is unpredictable and depends on the distribution of non-zero elements in the vector.
This makes it necessary to examine the non-zero elements of the sparse vector and of the matrix to determine which non-zeroes in the matrix correspond to non-zero values in the vector. Because it is necessary to compare the indices of the non-zero elements in the matrix and the vector, the number of instructions/operations required to compute a sparse matrix-sparse vector dot-product is unpredictable and depends on the structure of the matrix and vector. For example, consider taking the dot-product of a matrix row with a single non-zero element and a vector with many non-zero elements. If the row's non-zero has a lower index than any of the non-zeroes in the vector, the dot-product only requires one index comparison. If the row's non-zero has a higher index than any of the non-zeroes in the vector, computing the dot-product requires comparing the index of the row's non-zero with every index in the vector. This assumes a linear search through the vector, which is common practice. Other searches, such as binary search, would be faster in the worst case, but would add significant overhead in the common case where the non-zeroes in the row and the vector overlap. In contrast, the number of operations required to perform a sparse matrix-dense vector multiplication is fixed and determined by the number of non-zero values in the matrix, making it easy to predict the amount of time required for the computation.

Because of these differences, one implementation of the accelerator uses the same high-level algorithm to implement sparse matrix-dense vector and sparse matrix-sparse vector multiplication, with differences in how the vector is distributed across the dot-product engines and how the dot-product is computed. Because the accelerator is intended for large sparse-matrix computations, it cannot be assumed that either the matrix or the vector will fit in on-chip memory.
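The index matching just described, using the common-practice linear search (a merge over two index lists kept in sorted order), can be sketched as follows; the {index, value} pair-list layout is an illustrative assumption:

```python
# Illustrative model of a sparse-sparse dot-product: both operands are lists
# of (index, value) pairs sorted by index, and a linear merge finds matches.

def sparse_dot(a, b):
    """a, b: sorted lists of (index, value) pairs."""
    i = j = 0
    acc = 0.0
    while i < len(a) and j < len(b):
        ia, ib = a[i][0], b[j][0]
        if ia == ib:
            # Matching indices: multiply-add and advance both sides.
            acc += a[i][1] * b[j][1]
            i += 1
            j += 1
        elif ia < ib:
            i += 1   # advance whichever side has the smaller index
        else:
            j += 1
    return acc
```

The number of loop iterations, and hence the running time, depends on how the non-zero indices of the two operands interleave, which is exactly the unpredictability discussed above.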
Instead, one implementation uses the blocking scheme outlined in Figure 24. In particular, in this implementation, the accelerator divides matrices into fixed-size blocks of data 2401-2402, sized to fit in the on-chip memory, and multiplies the rows in the block by the vector to generate a chunk of the output vector before proceeding to the next block. This approach poses two challenges. First, the number of non-zeroes in each row of a sparse matrix varies widely between datasets, from as low as one to as high as 46,000 in the datasets studied. This makes it impractical to assign one or even a fixed number of rows to each dot-product engine. Therefore, one implementation assigns fixed-size chunks of matrix data to each dot-product engine and handles the case where a chunk contains multiple matrix rows and the case where a single row is split across multiple chunks.

The second challenge is that fetching the entire vector from stack DRAM for each block of the matrix has the potential to waste significant amounts of bandwidth (i.e., fetching vector elements for which there is no corresponding non-zero in the block). This is particularly an issue for sparse matrix-dense vector multiplication, where the vector can be a significant fraction of the size of the sparse matrix. To address this, one implementation constructs a fetch list 2411-2412 for each block 2401-2402 in the matrix, which lists the set of vector 2410 elements that correspond to non-zero values in the block, and only fetches those elements when processing the block. While the fetch lists must also be fetched from stack DRAM, it has been determined that the fetch list for most blocks will be a small fraction of the size of the block. Techniques such as run-length encoding may also be used to reduce the size of the fetch list.

Thus, a matrix-vector multiplication on the accelerator will involve the following sequence of operations:
1. Fetch a block of matrix data from the DRAM stack and distribute it across the dot-product engines;
2. Generate a fetch list based on the non-zero elements in the matrix data;
3. Fetch each vector element in the fetch list from stack DRAM and distribute it to the dot-product engines;
4. Compute the dot-product of the rows in the block with the vector and write the results out to stack DRAM; and
5. In parallel with the computation, fetch the next block of matrix data and repeat until the entire matrix has been processed.

When an accelerator contains multiple stacks, "partitions" of the matrix may be statically assigned to the different stacks and then the blocking algorithm may be executed in parallel on each partition. This blocking and broadcast scheme has the advantage that all of the memory references originate from a central control unit, which greatly simplifies the design of the on-chip network, since the network does not have to route unpredictable requests and replies between the dot-product engines and the memory controllers. It also saves energy by only issuing one memory request for each vector element that a given block needs, as opposed to having individual dot-product engines issue memory requests for the vector elements that they require to perform their portion of the computation. Finally, fetching vector elements out of an organized list of indices makes it easy to schedule the memory requests that those fetches require in a way that maximizes page hits in the stacked DRAM and thus bandwidth usage.

Implementing Sparse Matrix-Dense Vector Multiplication

One challenge in implementing sparse matrix-dense vector multiplication on the accelerator implementations described herein is matching the vector elements being streamed from memory to the indices of the matrix elements in each dot-product engine's buffers.
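Stepping back, the five-step blocking-and-fetch-list sequence described above can be modeled in software. This is an illustrative sketch: the per-block CRS layout, the names, and the use of a dict for the fetched elements are assumptions, not the hardware's data paths:

```python
# Illustrative model of the blocking scheme: for each block, build the fetch
# list (column indices with a non-zero in the block), "fetch" only those
# vector elements, then compute the block's row dot-products.

def blocked_spmv(blocks, vector):
    """blocks: list of (row_ids, values, indices, row_starts) CRS chunks."""
    result = {}
    for row_ids, values, indices, row_starts in blocks:
        # Step 2: the fetch list for this block.
        fetch_list = sorted(set(indices))
        # Step 3: fetch only the listed vector elements from (simulated) DRAM.
        fetched = {c: vector[c] for c in fetch_list}
        # Step 4: dot-products of the block's rows with the fetched elements.
        for r, row in enumerate(row_ids):
            lo, hi = row_starts[r], row_starts[r + 1]
            result[row] = sum(values[k] * fetched[indices[k]]
                              for k in range(lo, hi))
    return result
```

Only the elements named in `fetch_list` are ever read from `vector`, mirroring the bandwidth saving the text attributes to the fetch lists 2411-2412.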
In one implementation, 256 bytes (32-64 elements) of the vector arrive at the dot-product engine per cycle, and each vector element could correspond to any of the non-zeroes in the dot-product engine's matrix buffer, since fixed-size blocks of matrix data were fetched into each dot-product engine's matrix buffer. Performing that many comparisons each cycle would be prohibitively expensive in area and power. Instead, one implementation takes advantage of the fact that many sparse-matrix applications repeatedly multiply the same matrix by either the same or different vectors and pre-computes the elements of the fetch list that each dot-product engine will need to process its chunk of the matrix, using the format shown in Figure 25. In the baseline CRS format, a matrix is described by an array of indices 2502 that define the position of each non-zero value within its row, an array containing the values of each non-zero 2503, and an array 2501 that indicates where each row starts in the index and values arrays. To that, one implementation adds an array of block descriptors 2505 that identify which bursts of vector data each dot-product engine needs to capture in order to perform its fraction of the overall computation.

As shown in Figure 25, each block descriptor consists of eight 16-bit values and a list of burst descriptors. The first 16-bit value tells the hardware how many burst descriptors are in the block descriptor, while the remaining seven identify the start points within the burst descriptor list for all of the stack DRAM data channels except the first. The number of these values will change depending on the number of data channels the stacked DRAM provides.
Each burst descriptor contains a 24-bit burst count that tells the hardware which burst of data it needs to pay attention to and a "Words Needed" bit-vector that identifies the words within the burst that contain values the dot-product engine needs.

The other data structure included in one implementation is an array of matrix buffer indices (MBIs) 2504, one MBI per non-zero in the matrix. Each MBI gives the position at which the dense vector element that corresponds to the non-zero will be stored in the relevant dot-product engine's vector value buffer (see, e.g., Figure 27). When performing a sparse matrix-dense vector multiplication, the matrix buffer indices, rather than the original matrix indices, are loaded into the dot-product engine's matrix index buffer 3003, and serve as the address used to look up the corresponding vector value when computing the dot-product.

Figure 26 illustrates how this works for a two-row matrix that fits within the buffers of a single dot-product engine, on a system with only one stacked DRAM data channel and four-word data bursts. The original CRS representation, including row start values 2601, matrix indices 2602, and matrix values 2603, is shown on the left of the figure. Since the two rows have non-zero elements in columns {2, 5, 6} and {2, 4, 5}, elements 2, 4, 5, and 6 of the vector are required to compute the dot-products. The block descriptors reflect this, indicating that word 2 of the first four-word burst (element 2 of the vector) and words 0, 1, and 2 of the second four-word burst (elements 4-6 of the vector) are required. Since element 2 of the vector is the first word of the vector that the dot-product engine needs, it will go in location 0 in the vector value buffer. Element 4 of the vector will go in location 1, and so on.

The matrix buffer index array data 2604 holds the location within the vector value buffer where the hardware will find the value that corresponds to the non-zero in the matrix.
Since the first entry in the matrix indices array has value "2", the first entry in the matrix buffer indices array gets the value "0", corresponding to the location where element 2 of the vector will be stored in the vector value buffer. Similarly, wherever a "4" appears in the matrix indices array, a "1" will appear in the matrix buffer indices, each "5" in the matrix indices array will have a corresponding "2" in the matrix buffer indices, and each "6" in the matrix indices array will correspond to a "3" in the matrix buffer indices.

One implementation of the invention performs the pre-computations required to support fast gathers out of dense vectors when a matrix is loaded onto the accelerator, taking advantage of the fact that the total bandwidth of a multi-stack accelerator is much greater than the bandwidth of the KTI links used to transfer data from the CPU to the accelerator. This pre-computed information increases the amount of memory required to hold a matrix by up to 75%, depending on how often multiple copies of the same matrix index occur within the chunk of the matrix mapped onto a dot-product engine. However, because the 16-bit matrix buffer indices array is fetched instead of the matrix indices array when a matrix-vector multiplication is performed, the amount of data fetched out of the stack DRAMs will often be less than in the original CRS representation, particularly for matrices that use 64-bit indices.

Figure 27 illustrates one implementation of the hardware in a dot-product engine that uses this format. To perform a matrix-vector multiplication, the chunks of the matrix that make up a block are copied into the matrix index buffer 3003 and matrix value buffer 3005 (copying the matrix buffer indices instead of the original matrix indices), and the relevant block descriptor is copied into the block descriptor buffer 3002.
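The index-to-buffer-slot precomputation illustrated by this example can be sketched as follows. It relies on the fact, noted above, that vector elements arrive in ascending index order, so a column index's buffer slot is its rank among the distinct indices in the chunk (an illustrative software model):

```python
# Illustrative model of MBI precomputation: replace each matrix column index
# with the vector-value-buffer slot where that vector element will land.

def precompute_mbis(matrix_indices):
    # Vector elements arrive in ascending index order, so a column index's
    # slot is its rank among the distinct indices in this chunk.
    order = sorted(set(matrix_indices))
    slot = {idx: pos for pos, idx in enumerate(order)}
    return [slot[idx] for idx in matrix_indices]

# The two-row example above, with non-zeroes in columns {2, 5, 6} and {2, 4, 5}:
mbis = precompute_mbis([2, 5, 6, 2, 4, 5])
# -> [0, 2, 3, 0, 1, 2]: each "2" maps to slot 0, "4" to 1, "5" to 2, "6" to 3.
```

The resulting MBIs are what the hardware loads into the matrix index buffer in place of the original column indices.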
Then, the fetch list is used to load the required elements from the dense vector and broadcast them to the dot-product engines. Each dot-product engine counts the number of bursts of vector data that go by on each data channel. When the count on a given data channel matches the value specified in a burst descriptor, the match logic 3020 captures the specified words and stores them in its vector value buffer 3004.

Figure 28 shows the contents of the match logic 3020 unit that does this capturing. A latch 3105 captures the value on the data channel's wires when the counter matches the value in the burst descriptor. A shifter 3106 extracts the required words 3102 out of the burst 3101 and routes them to the right location in a line buffer 3107 whose size matches the rows in the vector value buffer. A load signal is generated when the burst count 3101 is equal to an internal counter 3104. When the line buffer fills up, it is stored in the vector value buffer 3004 (through mux 3108). Assembling the words from multiple bursts into lines in this way reduces the number of writes/cycle that the vector value buffer needs to support, reducing its size.

Once all of the required elements of the vector have been captured in the vector value buffer, the dot-product engine computes the required dot-product(s) using the ALUs 3010. The control logic 3001 steps through the matrix index buffer 3003 and matrix value buffer 3005 in sequence, one element per cycle. The output of the matrix index buffer 3003 is used as the read address for the vector value buffer 3004 on the next cycle, while the output of the matrix value buffer 3005 is latched so that it reaches the ALUs 3010 at the same time as the corresponding value from the vector value buffer 3004.
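The capture path of the match logic can be modeled in software as follows. This is an illustrative sketch in which the burst descriptors are simplified to a map from burst count to a "Words Needed" bitmask, and the line-buffer staging is omitted:

```python
# Illustrative model of the match logic: count bursts on a data channel and,
# when the count matches a burst descriptor, capture the words whose bits
# are set in that descriptor's "Words Needed" bitmask.

def capture_bursts(bursts, burst_descriptors):
    """bursts: list of word-lists; burst_descriptors: {burst count: bitmask}."""
    vector_value_buffer = []
    for count, burst in enumerate(bursts):
        needed = burst_descriptors.get(count)
        if needed is None:
            continue  # counter matches no descriptor; the burst is ignored
        for word_pos, word in enumerate(burst):
            if needed & (1 << word_pos):
                vector_value_buffer.append(word)
    return vector_value_buffer

# The Figure 26 example: word 2 of burst 0, words 0-2 of burst 1. The word
# values here are just the vector element numbers, for readability.
captured = capture_bursts([[0, 1, 2, 3], [4, 5, 6, 7]], {0: 0b0100, 1: 0b0111})
# -> [2, 4, 5, 6]: vector elements 2, 4, 5, and 6 in arrival order.
```

Elements land in the vector value buffer in arrival order, which is what makes the rank-based MBI addressing described earlier work.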
For example, using the matrix from Figure 26, on the first cycle of the dot-product computation, the hardware would read the matrix buffer index "0" out of the matrix index buffer 3003 along with the value "13" from the matrix value buffer 3005. On the second cycle, the value "0" from the matrix index buffer 3003 acts as the address for the vector value buffer 3004, fetching the value of vector element "2", which is then multiplied by "13" on cycle 3.

The values in the row starts bit-vector 2901 tell the hardware when a row of the matrix ends and a new one begins. When the hardware reaches the end of the row, it places the accumulated dot-product for the row in its output latch 3011 and begins accumulating the dot-product for the next row. The dot-product latches of each dot-product engine are connected in a daisy chain that assembles the output vector for writeback.

Implementing Sparse Matrix-Sparse Vector Multiplication

In sparse matrix-sparse vector multiplication, the vector tends to take up much less memory than in sparse matrix-dense vector multiplication, but, because it is sparse, it is not possible to directly fetch the vector element that corresponds to a given index. Instead, the vector must be searched, making it impractical to route only the elements that each dot-product engine needs to the dot-product engine and making the amount of time required to compute the dot-products of the matrix data assigned to each dot-product engine unpredictable. Because of this, the fetch list for a sparse matrix-sparse vector multiplication merely specifies the index of the lowest and highest non-zero elements in the matrix block, and all of the non-zero elements of the vector between those points must be broadcast to the dot-product engines. Figure 29 shows the details of a dot-product engine design to support sparse matrix-sparse vector multiplication.
To process a block of matrix data, the indices (not the matrix buffer indices used in a sparse-dense multiplication) and values of the dot-product engine's chunk of the matrix are written into the matrix index and value buffers, as are the indices and values of the region of the vector required to process the block. The dot-product engine control logic 2940 then sequences through the index buffers 2902-2903, which output blocks of four indices to the 4x4 comparator 2920. The 4x4 comparator 2920 compares each of the indices from the vector 2902 to each of the indices from the matrix 2903, and outputs the buffer addresses of any matches into the matched index queue 2930. The outputs of the matched index queue 2930 drive the read address inputs of the matrix value buffer 2905 and vector value buffer 2904, which output the values corresponding to the matches into the multiply-add ALU 2910. This hardware allows the dot-product engine to consume at least four and as many as eight indices per cycle as long as the matched index queue 2930 has empty space, reducing the amount of time required to process a block of data when index matches are rare. As with the sparse matrix-dense vector dot-product engine, a bit-vector of row starts 2901 identifies entries in the matrix buffers 2903 and 2905 that start a new row of the matrix. When such an entry is encountered, the control logic 2940 resets to the beginning of the vector index buffer 2902 and starts examining vector indices from their lowest value, comparing them to the outputs of the matrix index buffer 2903. Similarly, if the end of the vector is reached, the control logic 2940 advances to the beginning of the next row in the matrix index buffer 2903 and resets to the beginning of the vector index buffer 2902. A "done" output informs the chip control unit when the dot-product engine has finished processing a block of data or a region of the vector and is ready to proceed to the next one.
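Functionally, the comparator and matched index queue implement an index join: only positions where a matrix index equals a vector index reach the multiply-add ALU. A dictionary lookup models the same matching in software; the four-at-a-time comparison is a throughput detail, not a functional one, and the function name here is illustrative:

```python
def sparse_sparse_dot(m_idx, m_val, v_idx, v_val):
    """Software model of the comparator / matched-index-queue path:
    accumulate products only where a matrix index matches a vector
    index (an index join between two sorted sparse representations)."""
    vpos = {i: p for p, i in enumerate(v_idx)}  # vector index -> buffer address
    acc = 0.0
    for p, i in enumerate(m_idx):
        if i in vpos:                           # a "match" in hardware terms
            acc += m_val[p] * v_val[vpos[i]]
    return acc
```

The unpredictability the text mentions shows up here as the data-dependent number of matches per block, which is why the matched index queue decouples comparison rate from ALU rate.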
To simplify one implementation of the accelerator, the control logic 2940 will not proceed to the next block/region until all of the dot-product engines have finished processing. In many cases, the vector buffers will be large enough to hold all of the sparse vector that is required to process the block. In one implementation, buffer space for 1,024 or 2,048 vector elements is provided, depending on whether 32- or 64-bit values are used. When the required elements of the vector do not fit in the vector buffers, a multipass approach may be used. The control logic 2940 will broadcast a full buffer of the vector into each dot-product engine, which will begin iterating through the rows in its matrix buffers. When the dot-product engine reaches the end of the vector buffer before reaching the end of the row, it will set a bit in the current row position bit-vector 2911 to indicate where it should resume processing the row when the next region of the vector arrives, will save the partial dot-product it has accumulated in the location of the matrix value buffer 2905 corresponding to the start of the row unless the start of the row has a higher index value than any of the vector indices that have been processed so far, and will advance to the next row. After all of the rows in the matrix buffer have been processed, the dot-product engine will assert its done signal to request the next region of the vector, and will repeat the process until the entire vector has been read. Figure 30 illustrates an example using specific values. At the start of the computation 3001, a four-element chunk of the matrix has been written into the matrix buffers 2903, 2905, and a four-element region of the vector has been written into the vector buffers 2902, 2904.
The row starts 2901 and current row position bit-vectors 2911 both have the value "1010," indicating that the dot-product engine's chunk of the matrix contains two rows, one of which starts at the first element in the matrix buffer, and one of which starts at the third. When the first region is processed, the first row in the chunk sees an index match at index 3, computes the product of the corresponding elements of the matrix and vector buffers (4 × 1 = 4) and writes that value into the location of the matrix value buffer 2905 that corresponds to the start of the row. The second row sees one index match at index 1, computes the product of the corresponding elements of the vector and matrix, and writes the result (6) into the matrix value buffer 2905 at the position corresponding to its start. The state of the current row position bit-vector changes to "0101," indicating that the first element of each row has been processed and the computation should resume with the second elements. The dot-product engine then asserts its done line to signal that it is ready for another region of the vector. When the dot-product engine processes the second region of the vector, it sees that row 1 has an index match at index 4, computes the product of the corresponding values of the matrix and vector (5 × 2 = 10), adds that value to the partial dot-product that was saved after the first vector region was processed, and outputs the result (14). The second row finds a match at index 7, and outputs the result 38, as shown in the figure.
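The multipass scheme can be modeled end-to-end in software: each row keeps a partial dot-product across vector regions instead of re-reading the vector. The text does not give every element value in Figure 30, so the values below are hypothetical, chosen only so that the stated products (4, 6, 10) and outputs (14 and 38) come out as described:

```python
def multipass_sparse_dot(rows, vector_regions):
    """Model of the multipass approach: rows is a list of
    (index, value) pairs per matrix row; vector_regions is the
    sequence of sparse-vector region buffers broadcast in
    ascending-index order. Each row accumulates a partial
    dot-product across regions (the hardware keeps these partials
    in the matrix value buffer slots at each row's start)."""
    partials = [0.0] * len(rows)
    for region in vector_regions:
        vpos = dict(region)                 # index -> value for this region
        for r, row in enumerate(rows):
            for idx, val in row:
                if idx in vpos:
                    partials[r] += val * vpos[idx]
    return partials
```

With two rows [(3, 4), (4, 5)] and [(1, 6), (7, 8)] and regions covering indices 1-3 and then 4-7 (vector values 1, 1, 2, 4 at indices 1, 3, 4, 7), the model reproduces the outputs 14 and 38 from the figure.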
Saving the partial dot-products and state of the computation in this way avoids redundant work processing elements of the matrix that cannot possibly match indices in later regions of the vector (because the vector is sorted with indices in ascending order), without requiring significant amounts of extra storage for partial products. Unified Dot-Product Engine Design Figure 31 shows how the sparse-dense and sparse-sparse dot-product engines described above are combined to yield a dot-product engine that can handle both types of computations. Given the similarity between the two designs, the only required changes are to instantiate both the sparse-dense dot-product engine's match logic 3111 and the sparse-sparse dot-product engine's comparator 3120 and matched index queue 3130, along with a set of multiplexors 3150 that determine which modules drive the read address and write data inputs of the buffers 2904-2905 and a multiplexor 3151 that selects whether the output of the matrix value buffer or the latched output of the matrix value buffer is sent to the multiply-add ALUs 2910. In one implementation, these multiplexors are controlled by a configuration bit in the control unit 2940 that is set at the beginning of a matrix-vector multiplication and remains in the same configuration throughout the operation. Instruction Sets An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats).
For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014). Exemplary Register Architecture Figure 32 is a block diagram of a register architecture 3200 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 3210 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. Write mask registers 3215 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size.
In an alternate embodiment, the write mask registers 3215 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction. General-purpose registers 3225 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15. Scalar floating point stack register file (x87 stack) 3245, on which is aliased the MMX packed integer flat register file 3250 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers. Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers. Exemplary Core Architectures, Processors, and Computer Architectures Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing.
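The write-mask semantics described above (per-lane predication, with the k0 encoding selecting a hardwired all-ones mask that disables masking) can be modeled per lane. This is a behavioral sketch for illustration, not the architectural definition:

```python
def apply_write_mask(dest, result, mask_bits, k_encoding):
    """Behavioral model of write masking: when the k0 encoding is
    used, a hardwired all-ones mask is selected and every lane is
    written; otherwise only lanes whose mask bit is set receive the
    new result, and the remaining lanes keep the destination value
    (merging-masking behavior)."""
    if k_encoding == 0:               # k0 encoding: hardwired 0xFFFF mask
        return list(result)
    return [r if (mask_bits >> lane) & 1 else d
            for lane, (d, r) in enumerate(zip(dest, result))]
```

Real instructions can also zero rather than merge the unwritten lanes; only the merging form is modeled here.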
Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Exemplary Core Architectures In-order and out-of-order core block diagram Figure 33A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 33B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 33A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described. In Figure 33A , a processor pipeline 3300 includes a fetch stage 3302, a length decode stage 3304, a decode stage 3306, an allocation stage 3308, a renaming stage 3310, a scheduling (also known as a dispatch or issue) stage 3312, a register read/memory read stage 3314, an execute stage 3316, a write back/memory write stage 3318, an exception handling stage 3322, and a commit stage 3324. Figure 33B shows processor core 3390 including a front end unit 3330 coupled to an execution engine unit 3350, and both are coupled to a memory unit 3370. The core 3390 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 3390 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like. The front end unit 3330 includes a branch prediction unit 3332 coupled to an instruction cache unit 3334, which is coupled to an instruction translation lookaside buffer (TLB) 3336, which is coupled to an instruction fetch unit 3338, which is coupled to a decode unit 3340. The decode unit 3340 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 3340 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.
In one embodiment, the core 3390 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 3340 or otherwise within the front end unit 3330). The decode unit 3340 is coupled to a rename/allocator unit 3352 in the execution engine unit 3350. The execution engine unit 3350 includes the rename/allocator unit 3352 coupled to a retirement unit 3354 and a set of one or more scheduler unit(s) 3356. The scheduler unit(s) 3356 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 3356 is coupled to the physical register file(s) unit(s) 3358. Each of the physical register file(s) units 3358 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 3358 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 3358 is overlapped by the retirement unit 3354 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 3354 and the physical register file(s) unit(s) 3358 are coupled to the execution cluster(s) 3360. The execution cluster(s) 3360 includes a set of one or more execution units 3362 and a set of one or more memory access units 3364.
The execution units 3362 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 3356, physical register file(s) unit(s) 3358, and execution cluster(s) 3360 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 3364). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. The set of memory access units 3364 is coupled to the memory unit 3370, which includes a data TLB unit 3372 coupled to a data cache unit 3374 coupled to a level 2 (L2) cache unit 3376. In one exemplary embodiment, the memory access units 3364 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 3372 in the memory unit 3370. The instruction cache unit 3334 is further coupled to a level 2 (L2) cache unit 3376 in the memory unit 3370.
The L2 cache unit 3376 is coupled to one or more other levels of cache and eventually to a main memory. By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 3300 as follows: 1) the instruction fetch 3338 performs the fetch and length decoding stages 3302 and 3304; 2) the decode unit 3340 performs the decode stage 3306; 3) the rename/allocator unit 3352 performs the allocation stage 3308 and renaming stage 3310; 4) the scheduler unit(s) 3356 performs the schedule stage 3312; 5) the physical register file(s) unit(s) 3358 and the memory unit 3370 perform the register read/memory read stage 3314; 6) the execution cluster 3360 performs the execute stage 3316; 7) the memory unit 3370 and the physical register file(s) unit(s) 3358 perform the write back/memory write stage 3318; 8) various units may be involved in the exception handling stage 3322; and 9) the retirement unit 3354 and the physical register file(s) unit(s) 3358 perform the commit stage 3324. The core 3390 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 3390 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology). While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 3334/3374 and a shared L2 cache unit 3376, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. Specific Exemplary In-Order Core Architecture Figures 34A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.
Figure 34A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 3402 and with its local subset of the Level 2 (L2) cache 3404, according to embodiments of the invention. In one embodiment, an instruction decoder 3400 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 3406 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 3408 and a vector unit 3410 use separate register sets (respectively, scalar registers 3412 and vector registers 3414) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 3406, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back). The local subset of the L2 cache 3404 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 3404. Data read by a processor core is stored in its L2 cache subset 3404 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 3404 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring datapath is 1012 bits wide per direction. Figure 34B is an expanded view of part of the processor core in Figure 34A according to embodiments of the invention.
Figure 34B includes an L1 data cache 3406A, part of the L1 cache 3406, as well as more detail regarding the vector unit 3410 and the vector registers 3414. Specifically, the vector unit 3410 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 3428), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 3420, numeric conversion with numeric convert units 3422A-B, and replication with replication unit 3424 on the memory input. Write mask registers 3426 allow predicating resulting vector writes. Figure 35 is a block diagram of a processor 3500 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 35 illustrate a processor 3500 with a single core 3502A, a system agent 3510, a set of one or more bus controller units 3516, while the optional addition of the dashed lined boxes illustrates an alternative processor 3500 with multiple cores 3502A-N, a set of one or more integrated memory controller unit(s) 3514 in the system agent unit 3510, and special purpose logic 3508. Thus, different implementations of the processor 3500 may include: 1) a CPU with the special purpose logic 3508 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 3502A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 3502A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 3502A-N being a large number of general purpose in-order cores.
Thus, the processor 3500 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 3500 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 3506, and external memory (not shown) coupled to the set of integrated memory controller units 3514. The set of shared cache units 3506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 3512 interconnects the special purpose logic 3508 (e.g., integrated graphics logic), the set of shared cache units 3506, and the system agent unit 3510/integrated memory controller unit(s) 3514, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 3506 and cores 3502A-N. In some embodiments, one or more of the cores 3502A-N are capable of multithreading. The system agent 3510 includes those components coordinating and operating cores 3502A-N. The system agent unit 3510 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 3502A-N and the integrated graphics logic 3508.
The display unit is for driving one or more externally connected displays. The cores 3502A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 3502A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Exemplary Computer Architectures Figures 36-39 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable. Referring now to Figure 36 , shown is a block diagram of a system 3600 in accordance with one embodiment of the present invention. The system 3600 may include one or more processors 3610, 3615, which are coupled to a controller hub 3620. In one embodiment the controller hub 3620 includes a graphics memory controller hub (GMCH) 3690 and an Input/Output Hub (IOH) 3650 (which may be on separate chips); the GMCH 3690 includes memory and graphics controllers to which are coupled memory 3640 and a coprocessor 3645; the IOH 3650 couples input/output (I/O) devices 3660 to the GMCH 3690.
Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 3640 and the coprocessor 3645 are coupled directly to the processor 3610, and the controller hub 3620 is in a single chip with the IOH 3650. The optional nature of additional processors 3615 is denoted in Figure 36 with broken lines. Each processor 3610, 3615 may include one or more of the processing cores described herein and may be some version of the processor 3500. The memory 3640 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 3620 communicates with the processor(s) 3610, 3615 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 3695. In one embodiment, the coprocessor 3645 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 3620 may include an integrated graphics accelerator. There can be a variety of differences between the physical resources 3610, 3615 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. In one embodiment, the processor 3610 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 3610 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 3645. Accordingly, the processor 3610 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 3645.
Coprocessor(s) 3645 accept and execute the received coprocessor instructions. Referring now to Figure 37 , shown is a block diagram of a first more specific exemplary system 3700 in accordance with an embodiment of the present invention. As shown in Figure 37 , multiprocessor system 3700 is a point-to-point interconnect system, and includes a first processor 3770 and a second processor 3780 coupled via a point-to-point interconnect 3750. Each of processors 3770 and 3780 may be some version of the processor 3500. In one embodiment of the invention, processors 3770 and 3780 are respectively processors 3610 and 3615, while coprocessor 3738 is coprocessor 3645. In another embodiment, processors 3770 and 3780 are respectively processor 3610 and coprocessor 3645. Processors 3770 and 3780 are shown including integrated memory controller (IMC) units 3772 and 3782, respectively. Processor 3770 also includes as part of its bus controller units point-to-point (P-P) interfaces 3776 and 3778; similarly, second processor 3780 includes P-P interfaces 3786 and 3788. Processors 3770, 3780 may exchange information via a point-to-point (P-P) interface 3750 using P-P interface circuits 3778, 3788. As shown in Figure 37 , IMCs 3772 and 3782 couple the processors to respective memories, namely a memory 3732 and a memory 3734, which may be portions of main memory locally attached to the respective processors. Processors 3770, 3780 may each exchange information with a chipset 3790 via individual P-P interfaces 3752, 3754 using point to point interface circuits 3776, 3794, 3786, 3798. Chipset 3790 may optionally exchange information with the coprocessor 3738 via a high-performance interface 3792.
In one embodiment, the coprocessor 3738 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 3790 may be coupled to a first bus 3716 via an interface 3796. In one embodiment, first bus 3716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 37, various I/O devices 3714 may be coupled to first bus 3716, along with a bus bridge 3718 which couples first bus 3716 to a second bus 3720. In one embodiment, one or more additional processor(s) 3715, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 3716. In one embodiment, second bus 3720 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 3720 including, for example, a keyboard and/or mouse 3722, communication devices 3727, and a storage unit 3728 such as a disk drive or other mass storage device which may include instructions/code and data 3730, in one embodiment. Further, an audio I/O 3724 may be coupled to the second bus 3720. Note that other architectures are possible.
For example, instead of the point-to-point architecture of Figure 37, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 38, shown is a block diagram of a second more specific exemplary system 3800 in accordance with an embodiment of the present invention. Like elements in Figures 37 and 38 bear like reference numerals, and certain aspects of Figure 37 have been omitted from Figure 38 in order to avoid obscuring other aspects of Figure 38. Figure 38 illustrates that the processors 3770, 3780 may include integrated memory and I/O control logic ("CL") 3772 and 3782, respectively. Thus, the CL 3772, 3782 include integrated memory controller units and include I/O control logic. Figure 38 illustrates that not only are the memories 3732, 3734 coupled to the CL 3772, 3782, but also that I/O devices 3814 are coupled to the control logic 3772, 3782. Legacy I/O devices 3815 are coupled to the chipset 3790.

Referring now to Figure 39, shown is a block diagram of a SoC 3900 in accordance with an embodiment of the present invention. Similar elements in Figure 35 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 39, an interconnect unit(s) 3902 is coupled to: an application processor 3910 which includes a set of one or more cores 3502A-N, which include cache units 3504A-N, and shared cache unit(s) 3506; a system agent unit 3510; a bus controller unit(s) 3516; an integrated memory controller unit(s) 3514; a set of one or more coprocessors 3920 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 3930; a direct memory access (DMA) unit 3932; and a display unit 3940 for coupling to one or more external displays.
In one embodiment, the coprocessor(s) 3920 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 3730 illustrated in Figure 37, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 40 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 40 shows that a program in a high level language 4002 may be compiled using an x86 compiler 4004 to generate x86 binary code 4006 that may be natively executed by a processor with at least one x86 instruction set core 4016. The processor with at least one x86 instruction set core 4016 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 4004 represents a compiler that is operable to generate x86 binary code 4006 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 4016.
Similarly, Figure 40 shows that the program in the high level language 4002 may be compiled using an alternative instruction set compiler 4008 to generate alternative instruction set binary code 4010 that may be natively executed by a processor without at least one x86 instruction set core 4014 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 4012 is used to convert the x86 binary code 4006 into code that may be natively executed by the processor without an x86 instruction set core 4014. This converted code is not likely to be the same as the alternative instruction set binary code 4010 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 4012 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 4006.

Though the flow diagrams in the figures show a particular order of operations performed by certain embodiments, it should be understood that such order is exemplary. Thus, alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.

Additionally, although the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
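The one-to-many character of the binary translation described above (a single source-ISA instruction may expand into several target-ISA instructions, while the converted program still "will accomplish the general operation") can be sketched in a few lines. This is a toy model of the idea behind converter 4012, not an actual implementation; the opcode names and the mapping table are hypothetical illustrations, not any real x86 or ARM encoding.

```python
# Hypothetical source-ISA -> target-ISA translation table. A single source
# instruction may expand into several target instructions.
SOURCE_TO_TARGET = {
    "ADD_MEM": ["LOAD", "ADD", "STORE"],  # memory-operand add becomes load/op/store
    "MOV":     ["MOV"],                   # one-to-one translation
    "PUSH":    ["SUB_SP", "STORE"],       # push expands to stack adjust + store
}

def convert(source_program):
    """Translate a list of source-ISA opcodes into target-ISA opcodes."""
    target = []
    for op in source_program:
        try:
            target.extend(SOURCE_TO_TARGET[op])
        except KeyError:
            raise ValueError(f"no translation for source instruction {op!r}")
    return target

print(convert(["MOV", "ADD_MEM", "PUSH"]))
```

A real converter additionally rewrites register names, fixes up branch targets in the expanded code, and (in the dynamic case) caches translated blocks, but the core shape is the same table-driven expansion.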
Terminal connection routing on top of a substrate surface connects component terminals to and from PMIC devices and provides a novel structure to connect surface mount technology (SMT) passive device terminals on an SMT layer (such as a Cu bar mesh). The routing uses the 3D space available near the components to provide a lower-resistance, lower-inductance, and shorter path; reduces the SIP form factor; increases component placement density; creates an additional PDN layer for connectivity; and, if the routing is encapsulated in a mold, protects the metal in the connection from oxidation.
CLAIMS

WHAT IS CLAIMED IS:

1. An apparatus comprising: a substrate; a first device attached to a first surface of the substrate near a center of the substrate; a second device attached to the first surface of the substrate near an edge of the substrate; and a connection located on the first surface of the substrate, the connection coupled between the first device and the second device.

2. The apparatus of claim 1, wherein the first device is an active device.

3. The apparatus of claim 1, wherein the first device is an active device and the second device is a passive device.

4. The apparatus of claim 1, wherein the first device is one of an active device or a passive device and the second device is one of an active device or a passive device.

5. The apparatus of claim 1, wherein the connection has a length parallel to the first surface of the substrate, a width parallel to the first surface of the substrate and perpendicular to the length, and a height perpendicular to the length and the width, the height greater than approximately 50 μm.

6. The apparatus of claim 1, wherein the apparatus is incorporated into a device selected from a group including a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle.

7. An apparatus comprising: means for supporting; a first device attached to a first surface of the means for supporting near a center of the means for supporting; a second device attached to the first surface of the means for supporting near an edge of the means for supporting; and means for connection located on the first surface of the means for supporting, the means for connection coupled between the first device and the second device.

8.
The apparatus of claim 7, wherein the first device is an active device.

9. The apparatus of claim 7, wherein the first device is an active device and the second device is a passive device.

10. The apparatus of claim 7, wherein the first device is one of an active device or a passive device and the second device is one of an active device or a passive device.

11. The apparatus of claim 7, wherein the connection has a length parallel to the first surface of the means for supporting, a width parallel to the first surface of the means for supporting and perpendicular to the length, and a height perpendicular to the length and the width, the height greater than approximately 50 μm.

12. The apparatus of claim 7, wherein the apparatus is incorporated into a device selected from a group including a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle.

13. A method for manufacturing an apparatus, the method comprising: providing a substrate; attaching a first device to a first surface of the substrate near a center of the substrate; attaching a second device to the first surface of the substrate near an edge of the substrate; and connecting a connection located on the first surface of the substrate between the first device and the second device.

14. The method of claim 13, wherein the first device is an active device.

15. The method of claim 13, wherein the first device is an active device and the second device is a passive device.

16. The method of claim 13, wherein the first device is one of an active device or a passive device and the second device is one of an active device or a passive device.

17.
The method of claim 13, wherein the connection has a length parallel to the first surface of the substrate, a width parallel to the first surface of the substrate and perpendicular to the length, and a height perpendicular to the length and the width, the height greater than approximately 50 μm.

18. The method of claim 13, further comprising incorporating the apparatus into a device selected from a group including a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle.

19. A non-transitory computer-readable medium comprising instructions that when executed by a processor cause the processor to perform a method, the method comprising: providing a substrate; attaching a first device to a first surface of the substrate near a center of the substrate; attaching a second device to the first surface of the substrate near an edge of the substrate; and connecting a connection located on the first surface of the substrate between the first device and the second device.

20. The non-transitory computer-readable medium of claim 19, wherein the first device is an active device.

21. The non-transitory computer-readable medium of claim 19, wherein the first device is an active device and the second device is a passive device.

22. The non-transitory computer-readable medium of claim 19, wherein the first device is one of an active device or a passive device and the second device is one of an active device or a passive device.

23.
The non-transitory computer-readable medium of claim 19, wherein the connection has a length parallel to the first surface of the substrate, a width parallel to the first surface of the substrate and perpendicular to the length, and a height perpendicular to the length and the width, the height greater than approximately 50 μm.

24. The non-transitory computer-readable medium of claim 19, wherein the method further comprises incorporating the substrate into a device selected from a group including a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle.
SUBSTRATE WITH ACTIVE AND/OR PASSIVE DEVICES ATTACHED THERETO

CROSS REFERENCE TO RELATED APPLICATION

[0001] The present Application for Patent claims the benefit of U.S. Non-Provisional Application No. 17/038,173 entitled “TERMINAL CONNECTION ROUTING”, filed September 30, 2020, which is assigned to the assignee hereof, and is expressly incorporated herein by reference in its entirety.

FIELD OF DISCLOSURE

[0002] This disclosure relates generally to system in package (SIP) applications, and more specifically, but not exclusively, to terminal connection routing for a SIP.

BACKGROUND

[0003] Power management integrated circuits (PMICs) are integrated circuits for power management. Although PMIC refers to a wide range of chips (or modules in SIP devices), most include several voltage converters or their control part. A PMIC is often included in battery-operated devices such as mobile phones and portable media players to decrease the amount of space required. However, in the substrate routing for PMIC devices, all inductors must be near the PMIC device for lower direct current resistance (DCR). This puts a limitation on the efficient utilization of space on the PMIC substrate and leads to substrate floor plan area wastage. Thus, conventional PMIC systems need to place some inductors in far corner regions to make compact modules since the corners generally have some unused space. Unfortunately, placing inductors in the corner regions increases the DCR.

[0004] Accordingly, there is a need for systems, apparatus, and methods that overcome the deficiencies of conventional approaches including the methods, system and apparatus provided hereby.

SUMMARY

[0005] The following presents a simplified summary relating to one or more aspects associated with the apparatus and methods disclosed herein.
As such, the following summary should not be considered an extensive overview relating to all contemplated aspects, nor should the following summary be regarded to identify key or critical elements relating to all contemplated aspects or to delineate the scope associated with any particular aspect. Accordingly, the following summary has the sole purpose to present certain concepts relating to one or more aspects relating to the apparatus and methods disclosed herein in a simplified form to precede the detailed description presented below.

[0006] In one aspect, an apparatus includes: a substrate; a first device attached to a first surface of the substrate near a center of the substrate; a second device attached to the first surface of the substrate near an edge of the substrate; and a connection located on the first surface of the substrate, the connection coupled between the first device and the second device.

[0007] In another aspect, an apparatus includes: means for supporting; a first device attached to a first surface of the means for supporting near a center of the means for supporting; a second device attached to the first surface of the means for supporting near an edge of the means for supporting; and means for connection located on the first surface of the means for supporting, the means for connection coupled between the first device and the second device.

[0008] In still another aspect, a method for manufacturing an apparatus includes: providing a substrate; attaching a first device to a first surface of the substrate near a center of the substrate; attaching a second device to the first surface of the substrate near an edge of the substrate; and connecting a connection located on the first surface of the substrate between the first device and the second device.

[0009] In still another aspect, a non-transitory computer-readable medium comprising instructions that when executed by a processor cause the processor to perform a method, the method includes:
providing a substrate; attaching a first device to a first surface of the substrate near a center of the substrate; attaching a second device to the first surface of the substrate near an edge of the substrate; and connecting a connection located on the first surface of the substrate between the first device and the second device.

[0010] Other features and advantages associated with the apparatus and methods disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] A more complete appreciation of aspects of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the disclosure, and in which:

[0012] Figure 1 illustrates a block diagram of connections between devices in accordance with some aspects of the disclosure;

[0013] Figure 2 illustrates a three dimensional (3D) view of connections between devices in accordance with some aspects of the disclosure;

[0014] Figures 3A-G illustrate a method for manufacturing connections between devices in accordance with some aspects of the disclosure;

[0015] Figure 4 illustrates a method for manufacturing an apparatus in accordance with some aspects of the disclosure;

[0016] Figure 5 illustrates a mobile device in accordance with some aspects of the disclosure; and

[0017] Figure 6 illustrates various electronic devices that may be integrated with any of the aforementioned methods, devices, semiconductor devices, integrated circuits, die, interposers, packages, or package-on-packages (PoPs) in accordance with some aspects of the disclosure.

[0018] In accordance with common practice, the features depicted by the drawings may not be drawn to scale.
Accordingly, the dimensions of the depicted features may be arbitrarily expanded or reduced for clarity. In accordance with common practice, some of the drawings are simplified for clarity. Thus, the drawings may not depict all components of a particular apparatus or method. Further, like reference numerals denote like features throughout the specification and figures.

DETAILED DESCRIPTION

[0019] The methods, apparatus, and systems disclosed herein mitigate shortcomings of the conventional methods, apparatus, and systems, as well as other previously unidentified needs. Among the technical advantages provided by the disclosed aspects, in at least some aspects, connection routing on top of a substrate surface, with the routing passing through empty space available between components and connecting to component terminals to and from PMIC devices, provides a novel structure to connect surface mount technology (SMT) passive device terminals on an SMT layer (such as a Cu bar mesh). This structure uses the 3D space available near the components to provide a lower-resistance, lower-inductance, and shorter path; reduces the SIP form factor; increases component placement density; creates an additional power delivery network (PDN) layer for connectivity; and, if the routing is encapsulated in a mold, protects the metal in the connection from oxidation. One example is an apparatus that includes: a substrate; a first device attached to a first surface of the substrate near a center of the substrate; a second device attached to the first surface of the substrate near an edge of the substrate; and a connection located on the first surface of the substrate, the connection coupled between the first device and the second device.

[0020] Figure 1 illustrates a block diagram of connections between devices in accordance with some aspects of the disclosure.
As shown in Figure 1, an apparatus 100 may include a substrate 110; a first device 120 (such as a memory, logic, passive, or an active device including a PMIC) attached to a first surface 130 of the substrate near a center 140 of the substrate 110; a second device 150 (such as a memory, logic, active, or passive device including an inductor) attached to the first surface 130 of the substrate 110 near an edge 160 of the substrate 110; and a connection 170 located on the first surface 130 of the substrate 110, the connection 170 coupled between the first device 120 and the second device 150. As can be seen, Figure 1 illustrates that more or fewer than a single second device 150 and connection 170 may be used.

[0021] The connection 170 may have a length parallel to the first surface 130 of the substrate 110, a width parallel to the first surface 130 of the substrate 110 and perpendicular to the length, and a height perpendicular to the length and the width, the height greater than the width. It should be understood that the connection 170 may be configured to transfer signals between the first device 120 and the second device 150. In addition, it should be understood that connection 170 may be composed of copper or a similar conductive material, have a width of approximately 5 μm to 50 μm, a height of approximately 50 μm to 200 μm, and be one of a plate, wire, strip, bar, or mesh shape.

[0022] Figure 2 illustrates a three dimensional (3D) view of connections between devices in accordance with some aspects of the disclosure. As shown in Figure 2, the connection 170 may have a length 172 parallel to the first surface 130 of the substrate 110, a width 174 parallel to the first surface 130 of the substrate 110 and perpendicular to the length 172, and a height 176 perpendicular to the length 172 and the width 174, the height 176 greater than the width 174 to increase the capacity and reduce the resistance.
While Figure 2 illustrates that the height 176 is the same as or less than the height of the devices on the substrate 110, it should be understood that the height 176 may be greater than the height of any of the other elements on the substrate and, although shown as a plate, the connection may be one of a plate, wire, strip, bar, mesh shape, or similar.

[0023] Figures 3A-G illustrate a method for manufacturing connections between devices in accordance with some aspects of the disclosure. As shown in Figure 3A, the method may begin with providing a substrate 310, applying a dielectric 312 (such as a photo-imageable dielectric or similar) to a first surface 330 of the substrate 310, and forming openings 314 in the dielectric 312 for placement of devices. The method may continue in Figure 3B with applying a film 316 (such as a dry film) to encapsulate the substrate 310. The method may continue in Figure 3C with forming openings 318 in the film 316.

[0024] The method may continue in Figure 3D with forming connections 370 in the openings 318 (such as by plating copper or a similar material). The method may continue in Figure 3E with removing the film 316 and applying a solder resist film 322. The method may continue in Figure 3F with attaching a first device 320 (such as first device 120), a second device 350 (such as second device 150), and a plurality of additional devices 324. The devices may be attached using surface mount technology (SMT) or similar techniques, as well as conventional techniques to couple the connections 370 to a respective device. In addition, a plurality of vias 326 may be formed in the connections 370 extending through the connections 370 to the substrate 310. The method may conclude in Figure 3G with encapsulating the substrate 310 with a mold compound 328 to protect the components on the substrate 310 from damage such as oxidation.
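A back-of-the-envelope calculation shows why making the height 176 greater than the width 174 lowers DCR: for a rectangular conductor, R = ρL / (W × H). The width and height values below come from the approximate dimension ranges given above for connection 170; the 2 mm route length and the room-temperature copper resistivity are assumed example figures, not values from the disclosure.

```python
# DC resistance of a rectangular copper connection, R = rho * L / (W * H).
RHO_CU = 1.68e-8  # ohm*m, approximate resistivity of copper at room temperature

def bar_resistance(length_m, width_m, height_m):
    """DC resistance of a rectangular conductor of uniform cross-section."""
    return RHO_CU * length_m / (width_m * height_m)

L = 2e-3  # assumed 2 mm route length between the two devices
# Same width, low vs. tall cross-section: the taller bar has lower DCR.
r_low  = bar_resistance(L, 50e-6, 50e-6)   # 50 um wide, 50 um tall
r_tall = bar_resistance(L, 50e-6, 200e-6)  # 50 um wide, 200 um tall
print(f"50um x 50um:  {r_low * 1e3:.2f} mOhm")
print(f"50um x 200um: {r_tall * 1e3:.2f} mOhm")
```

Quadrupling the height quadruples the cross-sectional area and cuts the resistance by a factor of four for the same footprint width, which is the trade the disclosure exploits by building the connection tall in the 3D space above the substrate.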
It should be understood that additional steps may be included such as attaching external connections (such as a ball grid array) to the substrate, package singulation, and similar package processes.

[0025] Figure 4 illustrates a method in accordance with some aspects of the disclosure. As shown in Figure 4, the method 400 may begin in block 402 with providing a substrate. The method 400 may continue in block 404 with attaching a first device to a first surface of the substrate near a center of the substrate. The method 400 may continue in block 406 with attaching a second device to the first surface of the substrate near an edge of the substrate. The method 400 may conclude in block 408 with connecting a connection located on the first surface of the substrate between the first device and the second device. The method 400 may alternatively include incorporating the apparatus into a device selected from a group including a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle.
[0026] Alternatively, the method 400 may include depositing a silicon nitride film before forming the shallow trench isolation region; masking an N-type metal-oxide-semiconductor (NMOS) region before etching the P-type metal-oxide-semiconductor (PMOS) spacer; wherein the source region and the drain region for the PMOS region and the NMOS region are epitaxially grown; and incorporating the transistor circuit into a device selected from the group including a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle. In addition, the method 400 may also include the following: the first device is a power management integrated circuit; the first device is a power management integrated circuit and the second device is a passive device; the first device is a power management integrated circuit and the second device is an inductor; or the connection has a length parallel to the first surface of the substrate, a width parallel to the first surface of the substrate and perpendicular to the length, and a height perpendicular to the length and the width, the height greater than the width or approximately 50 μm.

[0027] Figure 5 illustrates a mobile device in accordance with some aspects of the disclosure. Referring now to Figure 5, a block diagram of a mobile device that is configured according to some aspects is depicted and generally designated 500. In some aspects, mobile device 500 may be configured as a wireless communication device. As shown, mobile device 500 includes processor 501, which may be configured to implement the methods described herein in some aspects. Processor 501 is shown to include instruction pipeline 512, buffer processing unit (BPU) 508, branch instruction queue (BIQ) 511, and throttler 510.
Other well-known details (e.g., counters, entries, confidence fields, weighted sum, comparator, etc.) of these blocks have been omitted from this view of processor 501 for the sake of clarity.[0028] Processor 501 may be communicatively coupled to memory 532 over a link, which may be a die-to-die or chip-to-chip link. Mobile device 500 also includes display 528 and display controller 526, with display controller 526 coupled to processor 501 and to display 528.[0029] In some aspects, Figure 5 may include coder/decoder (CODEC) 534 (e.g., an audio and/or voice CODEC) coupled to processor 501; speaker 536 and microphone 538 coupled to CODEC 534; and wireless controller 540 (which may include a modem) coupled to wireless antenna 542 and to processor 501.[0030] In a particular aspect, where one or more of the above-mentioned blocks are present, processor 501, display controller 526, memory 532, CODEC 534, and wireless controller 540 can be included in a system-in-package or system-on-chip device 522. 
Input device 530 (e.g., a physical or virtual keyboard), power supply 544 (e.g., a battery), display 528, speaker 536, microphone 538, and wireless antenna 542 may be external to system-on-chip device 522 and may be coupled to a component of system-on-chip device 522, such as an interface or a controller.[0031] It should be noted that although Figure 5 depicts a mobile device, processor 501 and memory 532 may also be integrated into a set top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a computer, a laptop, a tablet, a communications device, a mobile phone, or other similar devices.[0032] Figure 6 illustrates various electronic devices that may be integrated with any of the aforementioned integrated devices, semiconductor devices, integrated circuits, dies, interposers, packages, or package-on-package (PoP) devices in accordance with some aspects of the disclosure. In one aspect, a mobile phone device 602, a laptop computer device 604, and a fixed location terminal device 606 may include an integrated device 600 as described herein. The integrated device 600 may be any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, or package-on-package devices described herein. The devices 602, 604, 606 illustrated in Figure 6 are only a few of the devices that feature the integrated device 600. 
Other electronic devices may also feature the integrated device 600 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), or any other device that stores or retrieves data or computer instructions, or any combination thereof. [0033] It will be appreciated that various aspects disclosed herein can be described as functional equivalents to the structures, materials and/or devices described and/or recognized by those skilled in the art. It should furthermore be noted that methods, systems, and apparatus disclosed in the description or in the claims can be implemented by a device comprising means for performing the respective actions of this method. In one aspect, an apparatus may include a means for supporting (such as a substrate); a first device attached to a first surface of the means for supporting near a center of the means for supporting; a second device attached to the first surface of the means for supporting near an edge of the means for supporting; and means for connection (such as a connection) located on the first surface of the means for supporting, the means for connection coupled between the first device and the second device. It will be appreciated that the aforementioned aspects are merely provided to illustrate the features herein and the various aspects claimed are not limited to the specific references and/or illustrations cited. 
[0034] One or more of the components, processes, features, and/or functions illustrated in Figures 1-6 may be rearranged and/or combined into a single component, process, feature, or function, or incorporated in several components, processes, or functions. Additional elements, components, processes, and/or functions may also be added without departing from the disclosure. It should also be noted that Figures 1-6 and their corresponding descriptions in the present disclosure are not limited to dies and/or ICs. In some implementations, Figures 1-6 and their corresponding descriptions may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, a device may include a die, an integrated device, a die package, an integrated circuit (IC), a device package, an integrated circuit (IC) package, a wafer, a semiconductor device, a package-on-package (PoP) device, and/or an interposer. An active side of a device, such as a die, is the part of the device that contains the active components of the device (e.g., transistors, resistors, capacitors, inductors, etc.), which perform the operation or function of the device. The backside of a device is the side of the device opposite the active side. As used herein, a metallization structure may include metal layers, vias, pads, or traces with dielectric in between, such as a redistribution layer (RDL).[0035] As used herein, the terms “user equipment” (UE), “user device,” “user terminal,” “client device,” “communication device,” “wireless device,” “wireless communications device,” “handheld device,” “mobile device,” “mobile terminal,” “mobile station,” “handset,” “access terminal,” “subscriber device,” “subscriber terminal,” “subscriber station,” “terminal,” and variants thereof may interchangeably refer to any suitable mobile or stationary device that can receive wireless communication and/or navigation signals. 
These terms include, but are not limited to, a music player, a video player, an entertainment unit, a navigation device, a communications device, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, an automotive device in an automotive vehicle, and/or other types of portable electronic devices typically carried by a person and/or having communication capabilities (e.g., wireless, cellular, infrared, short-range radio, etc.). These terms are also intended to include devices which communicate with another device that can receive wireless communication and/or navigation signals such as by short-range wireless, infrared, wire line connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the other device. In addition, these terms are intended to include all devices, including wireless and wire line communication devices, that are able to communicate with a core network via a radio access network (RAN), and through the core network the UEs can be connected with external networks such as the Internet and with other UEs. Of course, other mechanisms of connecting to the core network and/or the Internet are also possible for the UEs, such as over a wired access network, a wireless local area network (WLAN) (e.g., based on Institute of Electrical and Electronics Engineers (IEEE) 802.11, etc.) and so on. UEs can be embodied by any of a number of types of devices including but not limited to printed circuit (PC) cards, compact flash devices, external or internal modems, wireless or wire line phones, smartphones, tablets, tracking devices, asset tags, and so on. A communication link through which UEs can send signals to a RAN is called an uplink channel (e.g., a reverse traffic channel, a reverse control channel, an access channel, etc.). 
A communication link through which the RAN can send signals to UEs is called a downlink or forward link channel (e.g., a paging channel, a control channel, a broadcast channel, a forward traffic channel, etc.). As used herein the term traffic channel (TCH) can refer to an uplink / reverse or downlink / forward traffic channel.[0036] The terminology used herein is for the purpose of describing particular aspects and is not intended to be limiting of aspects of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, actions, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, actions, operations, elements, components, and/or groups thereof.[0037] It should be noted that the terms "connected," "coupled," or any variant thereof, mean any connection or coupling, either direct or indirect, between elements, and can encompass a presence of an intermediate element between two elements that are "connected" or "coupled" together via the intermediate element.[0038] Any reference herein to an element using a designation such as "first," "second," and so forth does not limit the quantity and/or order of those elements. Rather, these designations are used as a convenient method of distinguishing between two or more elements and/or instances of an element. 
Also, unless stated otherwise, a set of elements can include one or more elements.[0039] Nothing stated or illustrated in this application is intended to dedicate any component, action, feature, benefit, advantage, or equivalent to the public, regardless of whether the component, action, feature, benefit, advantage, or the equivalent is recited in the claims.[0040] The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art, including non-transitory types of memory or storage mediums. A storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.[0041] Although some aspects have been described in connection with a device, it goes without saying that these aspects also constitute a description of the corresponding method, and so a block or a component of a device should also be understood as a corresponding method action or as a feature of a method action. Analogously thereto, aspects described in connection with or as a method action also constitute a description of a corresponding block or detail or feature of a corresponding device. Some or all of the method actions can be performed by a hardware apparatus (or using a hardware apparatus), such as a microprocessor, a programmable computer or an electronic circuit. 
In some aspects, one or more of the most important method actions can be performed by such an apparatus.[0042] In the detailed description above it can be seen that different features are grouped together. This manner of disclosure should not be understood as an intention that the claimed aspects have more features than are explicitly mentioned in the respective claim. Rather, the disclosure may include fewer than all features of an individual aspect disclosed. Therefore, the following claims should hereby be deemed to be incorporated in the description, where each claim by itself can stand as a separate aspect. Although each claim by itself can stand as a separate aspect, it should be noted that, although a dependent claim can refer in the claims to a specific combination with one or a plurality of claims, other aspects can also encompass or include a combination of said dependent claim with the subject matter of any other dependent claim or a combination of any feature with other dependent and independent claims. Such combinations are proposed herein, unless it is explicitly expressed that a specific combination is not intended. Furthermore, it is also intended that features of a claim can be included in any other independent claim, even if said claim is not directly dependent on the independent claim.[0043] Furthermore, in some aspects, an individual action can be subdivided into a plurality of sub-actions or contain a plurality of sub-actions. Such sub-actions can be contained in the disclosure of the individual action and be part of the disclosure of the individual action.[0044] While the foregoing disclosure shows illustrative aspects of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions and/or actions of the method claims in accordance with the aspects of the disclosure described herein need not be performed in any particular order. 
Additionally, well-known elements will not be described in detail or may be omitted so as to not obscure the relevant details of the aspects disclosed herein. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Technologies for identification of a potential root cause of a use-after-free memory corruption bug of a program include a computing device to replay the execution of the program based on an execution log of the program. The execution log comprises an ordered set of executed instructions of the program that resulted in the use-after-free memory corruption bug. The computing device compares a use-after-free memory address access of the program to a memory address associated with an occurrence of the use-after-free memory corruption bug in response to detecting the use-after-free memory address access, and records the use-after-free memory address access of the program as a candidate for a root cause of the use-after-free memory corruption bug to a candidate list in response to detecting a match between the use-after-free memory address access of the program and the memory address associated with the occurrence of the use-after-free memory corruption bug.
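The comparison-and-record pass described above can be sketched in a few lines. The following is a minimal illustrative sketch only (not the patent's implementation, which operates on recorded binary execution): the tuple-based log format, the operation names, and the function name are all assumptions introduced for this example.

```python
# Illustrative sketch of the candidate-identification pass: walk an ordered
# execution log, track which addresses are currently freed, and record every
# use-after-free access that matches the memory address logged when the
# corruption occurred. Log format and names are assumptions for this sketch.

def identify_candidates(execution_log, corruption_address):
    """Return instruction indices that are candidate root causes.

    execution_log: ordered list of (index, op, address) tuples, where op is
    one of "alloc", "free", "load", or "store".
    corruption_address: address associated with the recorded occurrence of
    the use-after-free memory corruption bug.
    """
    candidates = []
    freed = set()  # addresses that have been freed and not re-allocated
    for index, op, address in execution_log:
        if op == "free":
            freed.add(address)
        elif op == "alloc":
            freed.discard(address)
        elif op in ("load", "store") and address in freed:
            # A use-after-free memory address access: compare it to the
            # address associated with the corruption occurrence, and record
            # it to the candidate list on a match.
            if address == corruption_address:
                candidates.append(index)
    return candidates


log = [
    (0, "alloc", 0x100),
    (1, "store", 0x100),  # valid access, before the free
    (2, "free", 0x100),
    (3, "load", 0x100),   # use-after-free access at the corrupted address
]
print(identify_candidates(log, 0x100))  # → [3]
```

Note that the valid pre-free access at index 1 is not recorded; only accesses to memory that is freed at the time of the access, and that match the logged corruption address, become candidates.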
1. A computing device to identify a potential root cause of a use-after-free memory corruption bug of a program, the computing device comprising: a memory allocation module to compile the program using a least recently used (LRU) memory allocator; a replay module to replay execution of the program in response to compilation of the program and based on an execution log of the program, wherein the execution log comprises an ordered set of executed instructions of the program that resulted in the use-after-free memory corruption bug; and a corruption candidate identification module to (i) compare a use-after-free memory address access of the program to a memory address associated with an occurrence of the use-after-free memory corruption bug in response to detection of the use-after-free memory address access, and (ii) record the use-after-free memory address access of the program to a candidate list as a candidate for a root cause of the use-after-free memory corruption bug in response to detection of a match between the use-after-free memory address access of the program and the memory address associated with the occurrence of the use-after-free memory corruption bug.
2. The computing device of claim 1, further comprising a candidate filtering module to filter the candidate list to reduce a number of candidates in the candidate list.
3. The computing device of claim 2, wherein to filter the candidate list comprises to: select a candidate from the candidate list; remove one or more instructions associated with the selected candidate from the execution of the program to generate a modified program; replay the modified program; and remove the selected candidate from the candidate list in response to a determination that replay of the modified program results in the use-after-free memory corruption bug.
4. The computing device of claim 3, wherein the one or more instructions consist of a single instruction of the program.
5. The computing device of claim 3, wherein the one or more instructions consist of a most recent branch instruction of the program.
6. The computing device of claim 3, wherein the one or more instructions consist of an instruction corresponding to a memory access of the selected candidate.
7. The computing device of claim 3, wherein the one or more instructions comprise an instruction set of the program.
8. The computing device of claim 7, wherein the instruction set comprises a set of instructions defined between a most recent branch instruction and an instruction corresponding to a memory access of the selected candidate.
9. The computing device of claim 1, wherein to compile the program comprises to overload a memory allocator of the program.
10. The computing device of claim 1, wherein to replay the execution of the program comprises to replay the execution of the program using a binary instrumentation technique.
11. A computing device to identify a potential root cause of a use-after-free memory corruption bug of a program, the computing device comprising: means for compiling the program using a least recently used (LRU) memory allocator; means for replaying execution of the program in response to compilation of the program and based on an execution log of the program, wherein the execution log comprises an ordered set of executed instructions of the program that resulted in the use-after-free memory corruption bug; means for comparing a use-after-free memory address access of the program to a memory address associated with an occurrence of the use-after-free memory corruption bug in response to detection of the use-after-free memory address access; and means for recording the use-after-free memory address access of the program to a candidate list as a candidate for a root cause of the use-after-free memory corruption bug in response to detection of a match between the use-after-free memory address access of the program and the memory address associated with the occurrence of the use-after-free memory corruption bug.
12. The computing device of claim 11, further comprising: means for selecting a candidate from the candidate list; means for removing one or more instructions associated with the selected candidate from the execution of the program to generate a modified program; means for replaying the modified program; and means for removing the selected candidate from the candidate list in response to a determination that replay of the modified program results in the use-after-free memory corruption bug.
13. The computing device of claim 12, wherein the one or more instructions consist of a most recent branch instruction of the program.
14. The computing device of claim 12, wherein the one or more instructions consist of an instruction corresponding to a memory access of the selected candidate.
15. The computing device of claim 12, wherein the one or more instructions comprise an instruction set of the program, the instruction set comprising a set of instructions defined between a most recent branch instruction and an instruction corresponding to a memory access of the selected candidate.
16. A method for identifying a potential root cause of a use-after-free memory corruption bug of a program, the method comprising: compiling, by a computing device, the program using a least recently used (LRU) memory allocator; replaying, by the computing device, execution of the program in response to compilation of the program and based on an execution log of the program, wherein the execution log comprises an ordered set of executed instructions of the program that resulted in the use-after-free memory corruption bug; comparing, by the computing device, a use-after-free memory address access of the program to a memory address associated with an occurrence of the use-after-free memory corruption bug in response to detecting the use-after-free memory address access; and recording, by the computing device, the use-after-free memory address access of the program to a candidate list as a candidate for a root cause of the use-after-free memory corruption bug in response to detecting a match between the use-after-free memory address access of the program and the memory address associated with the occurrence of the use-after-free memory corruption bug.
17. The method of claim 16, further comprising filtering the candidate list to reduce a number of candidates in the candidate list.
18. The method of claim 17, wherein filtering the candidate list comprises: selecting a candidate from the candidate list; removing one or more instructions associated with the selected candidate from the execution of the program to generate a modified program; replaying the modified program; and removing the selected candidate from the candidate list in response to determining that replaying the modified program results in the use-after-free memory corruption bug.
19. The method of claim 16, further comprising compiling, by the computing device, the program using a least recently used (LRU) memory allocator, wherein replaying the execution of the program comprises replaying the execution of the program in response to compiling the program with the LRU memory allocator.
20. A computing device for recording execution of a program, the computing device comprising: a memory allocation module to compile the program using a least recently used (LRU) memory allocator; a memory recording module to record the execution of the program to an execution log in response to compilation of the program; and a memory corruption detection module to (i) monitor the execution of the program for an occurrence of a use-after-free memory corruption bug, and (ii) record a memory address associated with the occurrence of the use-after-free memory corruption bug to a memory log.
21. The computing device of claim 20, wherein to monitor the execution of the program comprises to monitor the execution of the program for an occurrence of a memory corruption detection interrupt triggered in response to the occurrence of the use-after-free memory corruption bug.
22. The computing device of claim 20, wherein to record the memory address comprises to identify the memory address based on a number of instructions of the program executed prior to a memory corruption instruction associated with the occurrence of the use-after-free memory corruption bug.
23. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to execution by a computing device, cause the computing device to perform the method of any one of claims 16-19.
24. A computing device comprising a memory storing instructions and a processor, wherein the instructions, when executed by the processor, cause the computing device to perform the method of any one of claims 16-19.
Equipment, methods and methods to identify the root cause of the memory corruption vulnerability after release mediumCross-references to related applicationsThis application claims the priority of the U.S. utility model patent application named "TECHNOLOGY FOR ROOT CAUSEIDENTIFICATION OF USE-AFTER-FREE MEMORY CORRUPTION BUGS" filed on March 27, 2015. The serial number is 14/670,863.Background techniqueIdentifying memory corruption vulnerabilities is very challenging, and identifying the root causes of these vulnerabilities is even more challenging. For example, the use of memory corruption vulnerability after release is caused by the use of a pointer to the memory after the memory is released (ie, deleted). Although use-after-free vulnerabilities may sometimes cause the computing device to crash, the computing device usually continues to execute. Therefore, it is very difficult or even impossible for programmers to use common software techniques (for example, unit or integration testing) to detect the existence of some post-release vulnerabilities.Some debugging systems allow programmers to capture the use of deleted memory, leading to observable post-release vulnerabilities. However, the determination of the root cause of the vulnerability after release in such a system is obviously restricted or completely non-existent. Therefore, the root cause usually still exists, which may introduce inconsistent behavior and/or security weaknesses to the computing device.Description of the drawingsThe concepts described here are illustrated in the drawings by way of example rather than limitation. For simplicity and clarity of description, the elements shown in the drawings are not necessarily drawn to scale. 
Where appropriate, reference labels have been repeated in the figures to indicate corresponding or similar components.Figure 1 is a simplified block diagram of at least one embodiment of a system for identifying the root cause of a memory corruption vulnerability used after release;Figure 2 is a simplified block diagram of at least one embodiment of the environment of the deployed software computing device of Figure 1;3 is a simplified block diagram of at least one embodiment of the environment of the playback computing device of FIG. 1;4 is a simplified flowchart of at least one embodiment of a method for recording the execution of a program that may be executed by the deployed software computing device of FIG. 1;5 is a simplified flowchart of at least one embodiment of a method for identifying a potential root cause of a memory corruption vulnerability after the release of a program that can be executed by the playback computing device of FIG. 1; and6 is a simplified flowchart of at least one embodiment of a method for filtering a candidate list of root cause candidates that may be performed by the playback computing device of FIG. 1.Detailed waysAlthough the concept of the present disclosure is susceptible to various modifications and alternative forms, its specific embodiments have been shown by way of example in the drawings, and will be described in detail herein. However, it should be appreciated that the concept of the present disclosure is not intended to be limited to the specific form disclosed, on the contrary, the present invention is intended to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.References in the specification to "one embodiment", "an embodiment", "an illustrative embodiment", etc. 
indicate that the described embodiment may include specific features, structures, or characteristics, but each embodiment may or may not necessarily include A specific feature, structure, or characteristic. Furthermore, such phrases do not necessarily refer to the same embodiment. In addition, when a specific feature, structure, or characteristic is described in conjunction with an embodiment, it is considered that implementing such a feature, structure, or characteristic in combination with other embodiments is within the scope of knowledge of those skilled in the art, regardless of whether it is explicitly described. In addition, it should be appreciated that items included in the list in the form of "at least one of A, B and C" can mean (A); (B); (C); (A and B); (A and C) ; (B and C); or (A, B and C). Similarly, items listed in the form of "at least one of A, B or C" can mean (A); (B); (C); (A and B); (A and C); (B And C); or (A, B and C).In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments can also be implemented as instructions carried or stored by a transient or non-transitory machine-readable (eg, computer-readable) storage medium that can be read and executed by one or more processors. A machine-readable storage medium can be implemented as any storage device, mechanism, or other physical device for storing or transmitting information in a machine-readable form (for example, volatile or non-volatile memory, media disk, or other media device). structure).In the drawings, some structural or method features may be shown in a specific arrangement and/or order. However, it should be appreciated that this specific arrangement and/or ordering may not be required. Conversely, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative drawings. 
In addition, including structural or method features in a particular figure does not mean that such features are required in all embodiments, and in some embodiments, these features may not be included or may be combined with other features.Referring now to FIG. 1, a system 100 for identifying the root cause of a memory corruption vulnerability used after release includes a deployed software computing device 102, a network 104 and a playback computing device 106. Although there is only one deployed software computing device 102, one network 104, and one playback computing device 106 is schematically shown in FIG. 1, in other embodiments, the system 100 may include any number of deployed software computing devices. The device 102, the network 104, and/or the playback computing device 106. Furthermore, in some embodiments, the deployed software computing device 102 and the playback computing device 106 may be implemented as the same computing device.As described in detail below, the system 100 identifies and filters the potential root causes of memory corruption vulnerabilities after release. In doing so, in the illustrative embodiment, the deployed software computing device 102 executes the software program and records the execution of the program to the execution log. The deployed software computing device 102 monitors the execution of the program for the occurrence of the memory corruption vulnerability after the release, and records the memory address associated with the occurrence of the memory corruption vulnerability after the release to the memory log. The replay computing device 106 receives execution logs and memory logs from the deployed software computing device 102 for replay and root cause identification. The replay computing device 106 replays the execution of the program based on the received execution log. 
In addition, the replay computing device 106 compares the memory address access of the program with the memory address (from the memory log) associated with the occurrence of the use memory corruption vulnerability after release, and if there is a match, records the memory address access (for example, to Candidate list) as a candidate for the root cause of the vulnerability after release. The replay computing device 106 may further filter the candidate list to reduce the number of potential root causes of the use vulnerability after release (for example, automatically eliminate false alarms). It should be realized that doing so can significantly reduce the work required by the programmer to identify the root cause of the use of the vulnerability after release.The illustrative deployed software computing device 102 may be implemented as any type of computing device capable of performing the functions described herein. For example, the deployed software computing device 102 can be implemented as a desktop computer, server, router, switch, laptop computer, tablet computer, notebook computer, netbook, UltrabookTM, cellular phone, smart phone, wearable computing device, personal digital assistant , Mobile internet devices, hybrid devices and/or any other computing/communication devices. As shown in FIG. 1, an illustrative deployed software computing device 102 includes a processor 110, an input/output ("I/O") subsystem 112, a memory 114, a data storage 116, a communication circuit 118, and one or more peripherals Equipment 120. Of course, in other embodiments, the deployed software computing device 102 may include other or additional components, such as components commonly found in typical computing devices (eg, various input/output devices and/or other components). Additionally, in some embodiments, one or more illustrative components may be incorporated into another component or otherwise form a part of another component. 
For example, in some embodiments, the memory 114 or a portion thereof may be incorporated into the processor 110.

The processor 110 may be implemented as any type of processor capable of performing the functions described herein. For example, the processor 110 may be implemented as a single-core or multi-core processor, a digital signal processor, a microcontroller, or another processor or processing/control circuit. Similarly, the memory 114 may be implemented as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 114 may store various data and software used during the operation of the deployed software computing device 102, such as operating systems, applications, programs, libraries, and drivers. The memory 114 is communicatively coupled to the processor 110 via the I/O subsystem 112, which may be implemented as circuitry and/or components to facilitate input/output operations with the processor 110, the memory 114, and other components of the deployed software computing device 102. For example, the I/O subsystem 112 may be implemented as, or otherwise include, a memory controller hub, an input/output control hub, a firmware device, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 112 may form part of a system-on-chip (SoC) and be incorporated on a single integrated circuit chip along with the processor 110, the memory 114, and other components of the deployed software computing device 102.

The data storage 116 may be implemented as any type of device or devices configured for short-term or long-term storage of data, such as memory devices and circuits, memory cards, hard drives, solid state drives, or other data storage devices.
During the operation of the deployed software computing device 102, the data storage 116 and/or the memory 114 may store various data useful for performing the functions described herein. For example, the deployed software computing device 102 may log data to the execution log 210 and the memory log 212 as described herein.

The communication circuit 118 may be implemented as any communication circuit, device, or collection thereof that enables communication between the deployed software computing device 102 and other remote devices (for example, the replay computing device 106) through the network 104. The communication circuit 118 may be configured to use any one or more communication technologies (for example, wireless or wired communication) and associated protocols (for example, Ethernet, WiMAX, etc.) to effect such communication.

The peripheral devices 120 may include any number of additional peripheral or interface devices, such as speakers, microphones, additional storage devices, and so on. The particular devices included in the peripheral devices 120 may depend on, for example, the type and/or intended use of the deployed software computing device 102.

The network 104 may be implemented as any type of communication network capable of facilitating communication between the deployed software computing device 102 and the replay computing device 106. Accordingly, the network 104 may include one or more networks, routers, switches, computers, and/or other intermediate devices. For example, the network 104 may be embodied as, or otherwise include, one or more cellular networks, telephone networks, local or wide area networks, publicly available global networks (e.g., the Internet), ad hoc networks, or any combination thereof.

The replay computing device 106 may be implemented as any computing device capable of performing the functions described herein.
For example, the replay computing device 106 can be implemented as a cellular phone, smart phone, wearable computing device, personal digital assistant, mobile Internet device, laptop computer, tablet computer, notebook computer, netbook, Ultrabook™, desktop computer, server, router, switch, hybrid device, and/or any other computing/communication device.

As shown in FIG. 1, the illustrative replay computing device 106 includes a processor 150, an input/output ("I/O") subsystem 152, a memory 154, a data storage 156, a communication circuit 158, and one or more peripheral devices 160. Of course, in other embodiments, the replay computing device 106 may include other or additional components, such as components commonly found in typical computing devices (e.g., various input/output devices and/or other components). Additionally, in some embodiments, one or more of the illustrative components may be incorporated into, or otherwise form a part of, another component. In some embodiments, the components of the replay computing device 106 are similar to the corresponding components of the deployed software computing device 102 described above. Accordingly, for clarity of the description, the description of those components is not repeated here.

Referring now to FIG. 2, in use, the deployed software computing device 102 establishes an environment 200 for recording the execution of the deployed software program. The illustrative environment 200 of the deployed software computing device 102 includes a memory allocation module 202, a memory recording module 204, a memory corruption detection module 206, and a communication module 208. The various modules of the environment 200 may be implemented as hardware, software, firmware, or a combination thereof.
For example, the various modules, logic, and other components of the environment 200 may form part of, or otherwise be established by, the processor 110 or other hardware components of the deployed software computing device 102. As such, in some embodiments, one or more modules of the environment 200 may be implemented as a circuit or collection of electrical devices (for example, a memory allocation circuit, a memory recording circuit, a memory corruption detection circuit, and/or a communication circuit). Additionally, in some embodiments, one or more of the illustrative modules may form part of another module, and/or one or more of the illustrative modules may be implemented as separate or independent modules.

The memory allocation module 202 is configured to allocate and deallocate memory based on the memory allocation and deallocation requests of the executing program. Additionally, in some embodiments, the memory allocation module 202 can compile the program to ensure that a particular memory allocator or allocation algorithm is utilized. For example, in the illustrative embodiment, the memory allocation module 202 may compile the program to ensure that a "least recently used" (LRU) memory allocator is used for such allocation and deallocation requests. To do so, the memory allocation module 202 may, for example, overload the program's existing memory allocator. In other embodiments, however, the memory allocation module 202 may allocate memory based on a "most recently used" (MRU) memory allocation algorithm.

It should be appreciated that most memory allocators (e.g., free lists) use MRU memory allocation algorithms to allocate memory. Memory allocation algorithms usually involve the following general steps: a memory request (for example, a call to "malloc()"), use of the memory (for example, the return of "malloc()"), and a memory return (for example, a call to "free()").
Memory allocation algorithms also often use a vector of linked lists, where each linked list in the vector is used to store memory blocks of a different size. Each linked list can contain, for example, the remaining available blocks of a certain memory size. When memory is deleted/released (for example, by calling "free()"), the memory block is returned to the corresponding linked list (for example, to the "free store"). With MRU memory allocation, the memory is returned to the "top" or "front" of the linked list or free store so that the memory can be immediately reallocated in response to the next memory request for that size of memory. It should be appreciated that MRU memory allocation is often used to improve efficiency, and more specifically, cache efficiency. For example, if the computing device is instructed to allocate memory of the same size as recently released memory, it usually makes sense to reuse the newly released memory, because there is a significant possibility that the memory is still valid in the L1 cache, in which case a cache miss penalty can be avoided. However, MRU typically involves a large amount of memory reallocation at a specific memory address, which can lead to a large number of false positives when trying to identify the root cause of a use-after-free memory corruption vulnerability associated with a specific memory address. On the other hand, an LRU memory allocation algorithm returns the memory to the "bottom" or "tail" of the linked list or free store (instead of the "top") to ensure that memory reallocation occurs at a lower frequency than with traditional MRU memory allocation (for example, the lowest frequency possible). It should be appreciated that using LRU memory allocation instead of MRU memory allocation reduces the number of false-positive root causes caused by memory reallocation.
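The free-list behavior described above can be sketched as follows. This is a minimal illustration, not part of any described embodiment; the `FreeList` class and its `policy` parameter are hypothetical names chosen to show how MRU reuse hands back the most recently freed block first, while LRU reuse hands it back last:

```python
class FreeList:
    """Simplified free list for a single block size, with MRU or LRU reuse."""

    def __init__(self, policy):
        self.policy = policy  # "MRU" or "LRU"
        self.blocks = []      # addresses of freed (available) blocks

    def free(self, addr):
        if self.policy == "MRU":
            self.blocks.insert(0, addr)  # return block to the "front"
        else:
            self.blocks.append(addr)     # LRU: return block to the "tail"

    def malloc(self):
        return self.blocks.pop(0)        # always hand out the front block

# MRU immediately reuses the block freed last; LRU reuses it last.
mru, lru = FreeList("MRU"), FreeList("LRU")
for fl in (mru, lru):
    fl.free(0x1000)
    fl.free(0x2000)
assert mru.malloc() == 0x2000  # most recently freed block reused first
assert lru.malloc() == 0x1000  # least recently freed block reused first
```

Under the MRU policy the address 0x2000 is recycled on the very next request, which is exactly the reallocation behavior that inflates false positives when matching faulting addresses.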
For example, in some embodiments, the LRU memory allocation algorithm can reduce the amount of memory reallocation by an amount proportional to the size of the corresponding linked list for a particular memory size.

The memory recording module 204 records the execution of the program to the execution log 210. In some embodiments, the execution log 210 may be implemented as an ordered set of the instructions executed by the program that cause the use-after-free memory corruption vulnerability. In the illustrative embodiment, the memory recording module 204 utilizes a memory race recorder (e.g., MRR) to deterministically record, at runtime, the execution of a program that may be single-threaded or multi-threaded. In other embodiments, however, the memory recording module 204 may record the execution of the program to the execution log 210 in any manner suitable for performing the functions described herein. In some embodiments, the memory recording module 204 may use a block-based MRR system to deterministically record logical groupings of multiple sequential instructions. It should be appreciated that, in recording the execution of the program, the memory recording module 204 can handle various sources of non-determinism. For example, in some embodiments, the memory recording module 204 can handle input non-determinism (e.g., associated with read and replay clocks) and shared memory non-determinism (e.g., associated with multiple threads sharing the same memory). In the illustrative embodiment, it should be appreciated that the execution log 210 may be sent to the replay computing device 106 (e.g., via the communication module 208) to allow the replay computing device 106 to deterministically replay the execution of the program.

The memory corruption detection module 206 monitors the execution of the program for the occurrence of a use-after-free memory corruption vulnerability and records the memory address associated with the memory corruption to the memory log 212.
For example, the memory corruption detection module 206 can use memory corruption detection (MCD) technology to "trap" on the effects of the use-after-free memory corruption vulnerability and record the corresponding memory address. In some embodiments, the occurrence of a use-after-free vulnerability may cause an interrupt to the memory corruption detection module 206 (for example, via MCD technology) that indicates the memory address associated with the use-after-free vulnerability. In particular, the memory corruption detection module 206 may process the execution log 210 to determine the instruction count across threads (e.g., the total instruction count at the memory corruption instruction) and/or otherwise determine the corresponding memory access that caused the use-after-free vulnerability. It should be appreciated that each of the execution log 210 and the memory log 212 may be implemented as any data structure suitable for performing the functions described herein. Furthermore, in some embodiments, the execution log 210 and/or the memory log 212 may be stored remotely (e.g., for subsequent access by the replay computing device 106).

The communication module 208 handles communication between the deployed software computing device 102 and remote devices (for example, the replay computing device 106) through the network 104. For example, as described herein, the communication module 208 sends the execution log 210 and the memory log 212 to the replay computing device 106. Additionally, in some embodiments, the deployed software computing device 102 may also send the deployed software program (e.g., in a compiled or uncompiled format) to the replay computing device 106 for replay. In other embodiments, the deployed software computing device 102 can identify the program (or the location of the program) so that the replay computing device 106 can instead retrieve the program itself for analysis.

Referring now to FIG.
3, in use, the replay computing device 106 establishes an environment 300 for identifying the potential root causes of a use-after-free memory corruption vulnerability of a program. The illustrative environment 300 of the replay computing device 106 includes a root cause detection module 302 and a communication module 304. Additionally, the illustrative root cause detection module 302 includes a memory allocation module 306, a replay module 308, a corruption candidate identification module 310, and a candidate filtering module 312. The various modules of the environment 300 may be implemented as hardware, software, firmware, or a combination thereof. For example, the various modules, logic, and other components of the environment 300 may form part of, or otherwise be established by, the processor 150 or other hardware components of the replay computing device 106. As such, in some embodiments, one or more modules of the environment 300 may be implemented as a circuit or collection of electrical devices (e.g., a root cause detection circuit, a communication circuit, a memory allocation circuit, a replay circuit, a corruption candidate identification circuit, and/or a candidate filtering circuit). Additionally, in some embodiments, one or more of the illustrative modules may form part of another module, and/or one or more of the illustrative modules may be implemented as separate or independent modules.

The root cause detection module 302 is configured to replay the program of the deployed software computing device 102 and identify potential root cause candidates (i.e., instructions) of the use-after-free memory corruption vulnerability.
As described above, in the illustrative embodiment, the root cause detection module 302 includes a memory allocation module 306, a replay module 308, a corruption candidate identification module 310, and a candidate filtering module 312.

The memory allocation module 306 may be similar to the memory allocation module 202 of the deployed software computing device 102. Accordingly, in the illustrative embodiment, the memory allocation module 306 is configured to allocate and deallocate memory based on the memory allocation and deallocation requests of the executing program. Furthermore, in some embodiments, the memory allocation module 306 may compile the program to ensure that a particular memory allocator or allocation algorithm (e.g., LRU) is utilized, for example, by overloading the program's existing memory allocator. It should be appreciated that, in some embodiments, the deployed software program may be received from the deployed software computing device 102 in a compiled format having the desired characteristics (e.g., an LRU memory allocator), in which case recompiling the program may not be necessary.

The replay module 308 is configured to replay the execution of the program based on the execution log 210 received from the deployed software computing device 102. As described above, the execution log 210 allows the replay module 308 to execute the program in the same manner (e.g., deterministically) as the deployed software computing device 102 executed it. In some embodiments, the replay module 308 may use binary instrumentation technology (for example, PIN technology) to replay the execution of the program.

The corruption candidate identification module 310 monitors memory operations (i.e., during replay) and compares the memory addresses accessed by the executing program with the memory address associated with the occurrence of the use-after-free memory corruption vulnerability (for example, from the memory log 212).
In the illustrative embodiment, if there is a match, the corruption candidate identification module 310 records the memory access as a root cause candidate in the candidate list 314. For example, if the corruption candidate identification module 310 determines that a deallocated memory location overlaps the memory location associated in the memory log 212 with the use-after-free vulnerability, the corruption candidate identification module 310 records the corresponding memory access (e.g., a specific instruction) as a potential candidate for the root cause of the use-after-free vulnerability (for example, a potential "free" that caused the vulnerability). It should be appreciated that, depending on the embodiment, the corruption candidate identification module 310 may record various information about the memory access. For example, the corruption candidate identification module 310 may record the corresponding call stack, instruction pointer, instruction type, memory address, symbol information, and/or other useful information. It should also be appreciated that the candidate list 314 may be implemented as any data structure suitable for performing the functions described herein.

It should be appreciated that the candidate list 314 may be provided to the programmer to reduce the work required to identify the root cause of the use-after-free memory corruption vulnerability. For example, a programmer may only need to analyze a few lines of code instead of the hundreds, thousands, or even millions of lines of code often otherwise required. Furthermore, in the illustrative embodiment, the candidate filtering module 312 may filter the candidate list 314 in order to reduce the number of root cause candidates (e.g., automatically). For example, the candidate filtering module 312 may iteratively replay the program and remove those candidates that are determined not to be the root cause of the vulnerability.
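As a rough illustration of the matching step described above, the sketch below flags every memory operation whose address range overlaps an address recorded in the memory log. All names here are hypothetical; the actual module operates on a replayed instruction stream rather than Python tuples:

```python
def find_candidates(trace, fault_addrs):
    """Flag replayed memory operations that touch a faulting address.

    trace: iterable of (instruction_pointer, operation, address, size) tuples.
    fault_addrs: addresses logged at the use-after-free occurrence.
    """
    candidates = []
    for ip, op, addr, size in trace:
        # An access or deallocation overlapping a logged address is a
        # potential root cause of the use-after-free vulnerability.
        if any(addr <= fault < addr + size for fault in fault_addrs):
            candidates.append({"ip": ip, "op": op, "addr": addr})
    return candidates

trace = [
    (0x401000, "store", 0x7F00, 8),   # unrelated write
    (0x401020, "free",  0x8000, 64),  # frees the region later misused
    (0x401040, "load",  0x8010, 4),   # dangling read inside freed region
]
candidates = find_candidates(trace, fault_addrs={0x8010})
assert [c["ip"] for c in candidates] == [0x401020, 0x401040]
```

Note that both the deallocation and the dangling access match the logged address, which is why a further filtering pass is useful to thin the list.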
Specifically, in the illustrative embodiment, the candidate filtering module 312 may remove one or more instructions associated with a particular root cause candidate (for example, the memory access itself or a nearby branch instruction), replay the modified program (i.e., without the removed instructions), and determine whether the use-after-free vulnerability still exists in the modified program, as described herein. If the use-after-free vulnerability still occurs, the selected candidate can be removed from the candidate list 314 (or otherwise annotated), because those instructions are unrelated to the use-after-free vulnerability.

The communication module 304 handles the communication between the replay computing device 106 and remote devices (e.g., the deployed software computing device 102) through the network 104. For example, as described herein, the communication module 304 may receive the execution log 210, the memory log 212, and/or the deployed software program from the deployed software computing device 102.

Referring now to FIG. 4, in use, the deployed software computing device 102 may execute a method 400 for recording the execution of a program. The illustrative method 400 begins at block 402, in which the computing device 102 determines whether to begin recording the execution of the program. If so, the computing device 102 may compile the program with the LRU memory allocator in block 406. To do so, in some embodiments, the computing device 102 may overload the program's memory allocator. As described above, in some embodiments, the program may already have been compiled to utilize the LRU memory allocator or another desired memory allocator, in which case the computing device 102 may determine not to recompile the program.

In block 408, the computing device 102 records the execution of the program to the execution log 210. In doing so, in block 410, the computing device 102 may record the execution with a memory race recorder (e.g., MRR).
As mentioned above, it should be appreciated that the execution log 210 may allow the replay computing device 106 to replay the execution of the program in the same manner as the deployed software computing device 102 executed it. In block 412, the computing device 102 monitors the executing program for the occurrence of a use-after-free vulnerability. For example, in block 414, the computing device 102 may monitor the execution of the program (e.g., using MCD) to determine whether an interrupt associated with a use-after-free vulnerability has been triggered.

In block 416, the computing device 102 determines whether a use-after-free memory corruption vulnerability has been identified (e.g., based on a triggered interrupt). If so, the computing device 102 records the memory address associated with the use-after-free memory corruption vulnerability to the memory log 212 in block 418. It should be appreciated that the computing device 102 may use any suitable technology and/or algorithm, according to the particular embodiment, to identify the occurrence of a use-after-free vulnerability and record the corresponding memory address (i.e., the memory address affected by the vulnerability). For example, in block 420, the computing device 102 may identify the memory address of the use-after-free vulnerability based on the instruction count at the memory corruption instruction (e.g., the total instruction count across all threads). Depending on the embodiment, the computing device 102 may continue execution to try to identify other use-after-free vulnerabilities, or terminate the method 400 (e.g., to address the identified vulnerability before proceeding with debugging).

In block 422, the computing device 102 may send the execution log 210, the memory log 212, and/or the program to the replay computing device 106. As mentioned above, the program can be sent in a compiled or uncompiled format, depending on the particular implementation.
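The record-side flow just described can be summarized in a simplified form. This sketch is an assumption-laden simplification: the `mcd_check` callback standing in for an MCD-style interrupt is hypothetical. It logs each executed instruction and, on a detected use-after-free, records the faulting address keyed by the total instruction count:

```python
def record_execution(program, mcd_check):
    """Build an execution log and, when a use-after-free is detected,
    a memory log entry holding the faulting address together with the
    total instruction count at the fault."""
    execution_log, memory_log = [], []
    for count, instruction in enumerate(program):
        execution_log.append(instruction)    # deterministic record
        fault_addr = mcd_check(instruction)  # None unless a fault fires
        if fault_addr is not None:
            memory_log.append({"instr_count": count, "addr": fault_addr})
    return execution_log, memory_log

# Toy program: the third "instruction" touches freed memory.
program = ["alloc A", "free A", "load A"]
exec_log, mem_log = record_execution(
    program, mcd_check=lambda ins: 0x8010 if ins == "load A" else None)
assert exec_log == program
assert mem_log == [{"instr_count": 2, "addr": 0x8010}]
```

The instruction count stored alongside the address is what later lets the replay side locate the corresponding access deterministically.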
Additionally, in some embodiments, the execution log 210, the memory log 212, and/or the program may be sent to a different remote computing device (for example, in a cloud computing environment) for subsequent access by the replay computing device 106.

Referring now to FIG. 5, in use, the replay computing device 106 can execute a method 500 to identify the potential root causes of a use-after-free memory corruption vulnerability of a program. The illustrative method 500 begins at block 502, in which the computing device 106 receives the execution log 210, the memory log 212, and the deployed software program. As described above, in some embodiments, the computing device 106 may receive one or more of the logs 210, 212 or the program directly from the deployed software computing device 102, while in other embodiments the computing device 106 may receive one or more of the logs 210, 212 or the program from another remote computing device. Furthermore, in embodiments in which the computing devices 102, 106 are embodied as the same device, the logs 210, 212 and the program can be retrieved from the corresponding memory or data storage of the device.

In block 504, the computing device 106 determines whether to identify one or more potential root causes of the use-after-free vulnerability. If so, then in block 506, the computing device 106 may compile the program using the LRU memory allocator. As described above, in doing so, the computing device 106 may overload the program's current memory allocator with the desired LRU memory allocator in block 508. Of course, in some embodiments, the computing device 106 may wish to utilize a different memory allocator, in which case the program can be recompiled using that particular memory allocator.
Furthermore, in some embodiments, the program may be received in a compiled format that already utilizes the desired allocator, in which case the computing device 106 may determine not to recompile.

In block 510, the computing device 106 begins to replay the program. It should be appreciated that the computing device 106 may use any technology or mechanism suitable for performing the functions described herein to replay the execution of the program. For example, in block 512, the computing device 106 may use binary instrumentation techniques such as PIN technology (e.g., to allow controlled execution of the program) to replay the execution of the program. As described above, in the illustrative embodiment, the computing device 106 replays the execution of the program in the same manner as the deployed software computing device 102 (e.g., the same memory accesses, etc.).

In block 514, the computing device 106 determines whether a memory access (e.g., a load or store memory operation) has been reached/detected during program execution. If not, the method 500 advances to block 522 (described below), in which the computing device 106 determines whether to continue replaying the program. However, if a memory access has been reached or detected, the computing device 106 compares, in block 516, the accessed memory address with the memory address of the use-after-free memory corruption vulnerability indicated in the memory log 212. Of course, in some embodiments, the deployed software computing device 102 may have identified multiple use-after-free memory corruption vulnerabilities, in which case the replay computing device 106 can compare the accessed memory address with the memory addresses (for example, each of them) associated with the multiple vulnerabilities indicated in the memory log 212.

In block 518, the computing device 106 determines whether a match is detected between the accessed memory address and a memory address corresponding to the use-after-free memory corruption vulnerability. If so, in block 520, the computing device 106 records the memory access (e.g., a specific instruction) as a candidate for the root cause of the use-after-free memory corruption vulnerability. As described above, the computing device 106 may record the memory access to the candidate list 314. In doing so, the computing device 106 may record the corresponding call stack, instruction pointer, instruction type, memory address, symbol information, and/or other useful information associated with the matching memory access.

Regardless of whether a match is detected, in block 522, the computing device 106 determines whether to continue replaying the execution of the program. If so, the method 500 returns to block 510, in which the computing device 106 continues to replay the program. Accordingly, in some embodiments, the computing device 106 replays the execution of the program until all potential candidates (e.g., matching memory accesses) for the root cause of the use-after-free memory corruption vulnerability have been identified and recorded in the candidate list 314. In this way, the candidate list 314 can serve as a list of the potential root causes of the use-after-free memory corruption vulnerability. As mentioned above, in some embodiments, the candidate list 314 may be provided to the programmer, which may reduce the work necessary to identify the root cause of the use-after-free memory corruption vulnerability (for example, by reducing the number of lines of code the programmer needs to examine).
However, in the illustrative embodiment, in block 524, the computing device 106 filters the root cause candidates of the candidate list 314 to reduce the total number of root cause candidates. In other words, the computing device 106 can automatically reduce the number of root cause candidates to further reduce the work required of the programmer to debug the use-after-free vulnerability.

To do so, the computing device 106 may execute the method 600 described in FIG. 6. In the illustrative embodiment, the method 600 begins at block 602, in which the computing device 106 selects the next root cause candidate from the candidate list 314. That is, the computing device 106 selects one of the memory accesses determined to point to a memory address corresponding to the use-after-free memory corruption vulnerability. It should be appreciated that the computing device 106 can select any (e.g., the first) root cause candidate from the candidate list 314 in the first occurrence of block 602, and what constitutes the "next" candidate can vary according to the particular embodiment.

In block 604, the computing device 106 removes from program execution, or otherwise prevents the execution of, one or more instructions associated with the selected candidate. It should be appreciated that the computing device 106 may use any suitable technique or algorithm to remove instructions and/or prevent their execution. For clarity of the description, preventing the execution of selected instructions may be referred to herein as "removing" the instructions, regardless of the technique used to prevent their execution. In addition, the computing device 106 may similarly use any suitable technique or algorithm to determine which instructions to remove. For example, in block 606, the computing device 106 may remove only a single instruction.
Specifically, in block 608, the computing device 106 may remove the memory access instruction itself or the previous branch instruction (e.g., the most recent branch instruction executed before the memory access instruction of the selected candidate). Alternatively, in block 610, the computing device 106 may remove an instruction set (i.e., more than one instruction) associated with the selected candidate. For example, in block 612, the computing device 106 may remove the instructions defined between the previous branch instruction and the candidate's memory access instruction. As mentioned above, the specific instructions removed by the computing device 106 may vary according to the particular embodiment. In some embodiments, in determining the instruction set to be removed, the computing device 106 identifies "chunks" of operations related to one another that are associated with the memory access instruction (e.g., a group of code associated with a memory "free" or "delete" operation).

It should be appreciated that the program from which the selected instructions are removed may be referred to herein as a "modified" program or a modified version of the program. In block 614, the computing device 106 replays the execution of the program based on the modified version of the program. In the illustrative embodiment, the program is executed in the same manner as the unmodified program, except that the removed instructions are not executed. In block 616, the computing device 106 determines whether the replay of the modified program resulted in the same use-after-free memory corruption vulnerability. It should be appreciated that, depending on the specific instructions removed, the replay may result in the same use-after-free memory corruption vulnerability, a different vulnerability (for example, a different use-after-free memory corruption vulnerability), or no vulnerability.
If the computing device 106 determines that the replay of the modified program results in the same use-after-free memory corruption vulnerability, then in block 618 the computing device 106 removes the root cause candidate from the candidate list 314. In other words, if the same use-after-free vulnerability occurs regardless of whether the selected instructions are executed, it is assumed that those instructions have nothing to do with the occurrence of the vulnerability. However, if removing the instructions prevents the use-after-free vulnerability from occurring, the instructions may be related in some way to the root cause of the vulnerability, although they are not necessarily the root cause itself. Accordingly, the candidate is retained in the candidate list 314. Additionally, if the replay of the modified program results in a different vulnerability, it is unclear whether the instructions are related to the use-after-free vulnerability, and the candidate is therefore also retained in the candidate list 314. In block 620, the computing device 106 determines whether to additionally or alternatively remove different instructions from the replay of the program. If so, the method 600 returns to block 604, in which the computing device 106 selects one or more instructions associated with the selected candidate for removal and replays the modified program again to determine whether those instructions are related to the use-after-free vulnerability. However, if the computing device 106 determines not to remove different instructions, then in block 622 the computing device 106 determines whether any root cause candidates remain in the candidate list 314. If so, the method 600 returns to block 602, in which the computing device 106 selects the next root cause candidate from the candidate list 314. 
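The filtering loop of blocks 602-622 is essentially a delta-debugging pass over the candidate list. The following Python sketch illustrates the idea under simplifying assumptions: replay is a hypothetical primitive that re-executes the program with a set of suppressed instructions and reports which vulnerability (if any) occurred, and the variants_for helper, the outcome constants, and the data representation are all illustrative, not part of any real API.

```python
# Possible outcomes of replaying a modified program (block 616).
SAME, DIFFERENT, NONE = "same", "different", "none"

def filter_candidates(candidates, variants_for, replay):
    """Reduce a root-cause candidate list, in the spirit of blocks 602-622.

    candidates:   list of root-cause candidates (freed-memory accesses).
    variants_for: yields instruction sets to suppress for a candidate,
                  e.g. just the access, its previous branch, or the range
                  between them (blocks 606-612).
    replay:       replays with the given instructions suppressed and
                  returns SAME, DIFFERENT, or NONE (blocks 614-616).
    """
    kept = []
    for cand in candidates:                  # block 602: next candidate
        related = True
        for instrs in variants_for(cand):    # blocks 604/620: pick a removal
            if replay(instrs) == SAME:       # block 618: same use-after-free
                related = False              # -> instructions are unrelated,
                break                        #    drop the candidate
            # DIFFERENT or NONE: the instructions may matter; keep trying.
        if related:
            kept.append(cand)
    return kept
```

A candidate survives only if none of its removal variants leaves the original vulnerability intact, which mirrors the retention rule described above.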
In other words, the computing device 106 iteratively replays the program based on the removal of various instructions related to the root cause candidates identified in the candidate list 314 to automatically determine (i.e., without programmer intervention) which candidates may be unrelated to the occurrence of the use-after-free memory corruption vulnerability. In some embodiments, the computing device 106 may traverse each single-instruction omission before traversing instruction sets (e.g., sequentially removing the memory access and/or previous branch instruction for each candidate). Further, in some embodiments, the computing device 106 may simultaneously remove instructions associated with multiple candidates of the candidate list 314 and replay the execution of the modified program as described herein. It should be appreciated that the execution of the methods 500 and 600 often makes the number of root cause candidates significantly smaller than the number of memory accesses that a programmer would have to consider without using the techniques described herein. EXAMPLES Illustrative examples of the technology disclosed herein are provided below. 
An embodiment of the technology may include any one or more of the examples described below, and any combination thereof. Example 1 includes a computing device to identify a potential root cause of a use-after-free memory corruption vulnerability of a program, the computing device comprising: a replay module to replay an execution of the program based on an execution log of the program, wherein the execution log includes an ordered set of executed instructions of the program that resulted in the occurrence of the use-after-free memory corruption vulnerability; and a corruption candidate identification module to (i) compare, in response to a detection of a freed memory address access of the program, the freed memory address access to a memory address associated with the occurrence of the use-after-free memory corruption vulnerability, and (ii) record, in response to a detection of a match between the freed memory address access of the program and the memory address associated with the occurrence of the use-after-free memory corruption vulnerability, the freed memory address access of the program to a candidate list as a root cause candidate of the use-after-free memory corruption vulnerability. Example 2 includes the subject matter of Example 1, and further includes a candidate filtering module to filter the candidate list to reduce the number of candidates in the candidate list. Example 3 includes the subject matter of any of Examples 1 and 2, and wherein filtering the candidate list comprises selecting a candidate from the candidate list; removing one or more instructions associated with the selected candidate from the execution of the program to generate a modified program; replaying the modified program; and, in response to a determination that replaying the modified program results in the use-after-free memory corruption vulnerability, removing from the candidate list the selected 
candidate. Example 4 includes the subject matter of any of Examples 1-3, and wherein the one or more instructions consist of a single instruction of the program. Example 5 includes the subject matter of any of Examples 1-4, and wherein the one or more instructions consist of a most recent branch instruction of the program. Example 6 includes the subject matter of any of Examples 1-5, and wherein the one or more instructions consist of an instruction corresponding to the memory access of the selected candidate. Example 7 includes the subject matter of any of Examples 1-6, and wherein the one or more instructions comprise an instruction set of the program. Example 8 includes the subject matter of any of Examples 1-7, and wherein the instruction set comprises the instructions defined between the most recent branch instruction and the instruction corresponding to the memory access of the selected candidate. Example 9 includes the subject matter of any of Examples 1-8, and further includes a memory allocation module to compile the program with a least recently used (LRU) memory allocator, wherein replaying the execution of the program comprises replaying the execution of the program in response to the compilation of the program with the LRU memory allocator. Example 10 includes the subject matter of any of Examples 1-9, and wherein compiling the program comprises overloading a memory allocator of the program. Example 11 includes the subject matter of any of Examples 1-10, and wherein replaying the execution of the program comprises replaying the execution of the program using a binary instrumentation technique. Example 12 includes a method for identifying a potential root cause of a use-after-free memory corruption vulnerability of a program. 
The method includes replaying, by a computing device, the execution of the program based on an execution log of the program, wherein the execution log includes an ordered set of executed instructions of the program that resulted in the occurrence of the use-after-free memory corruption vulnerability; comparing, by the computing device and in response to detecting a freed memory address access of the program, the freed memory address access to a memory address associated with the occurrence of the use-after-free memory corruption vulnerability; and recording, by the computing device and in response to detecting a match between the freed memory address access of the program and the memory address associated with the occurrence of the use-after-free memory corruption vulnerability, the freed memory address access of the program to a candidate list as a root cause candidate of the use-after-free memory corruption vulnerability. Example 13 includes the subject matter of Example 12, and further includes filtering the candidate list to reduce the number of candidates in the candidate list. Example 14 includes the subject matter of any of Examples 12 and 13, and wherein filtering the candidate list includes selecting a candidate from the candidate list; removing one or more instructions associated with the selected candidate from the execution of the program to generate a modified program; replaying the modified program; and, in response to determining that replaying the modified program results in the use-after-free memory corruption vulnerability, removing the selected candidate from the candidate list. Example 15 includes the subject matter of any of Examples 12-14, and wherein removing the one or more instructions includes removing a single instruction from the program. Example 16 includes the subject matter of any of Examples 12-15, and wherein removing the one or more 
instructions includes removing a most recent branch instruction from the program. Example 17 includes the subject matter of any of Examples 12-16, and wherein removing the one or more instructions includes removing an instruction corresponding to the memory access of the selected candidate. Example 18 includes the subject matter of any of Examples 12-17, and wherein removing the one or more instructions includes removing a set of instructions from the program. Example 19 includes the subject matter of any of Examples 12-18, and wherein removing the set of instructions includes removing the instructions defined between the most recent branch instruction and the instruction corresponding to the memory access of the selected candidate. Example 20 includes the subject matter of any of Examples 12-19, and further includes compiling, by the computing device, the program with a least recently used (LRU) memory allocator, wherein replaying the execution of the program includes replaying the execution of the program in response to compiling the program with the LRU memory allocator. Example 21 includes the subject matter of any of Examples 12-20, and wherein compiling the program includes overloading a memory allocator of the program. Example 22 includes the subject matter of any of Examples 12-21, and wherein replaying the execution of the program includes replaying the execution of the program using a binary instrumentation technique. Example 23 includes a computing device comprising a processor and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 12-22. Example 24 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any of Examples 12-22. Example 25 includes a 
computing device for identifying a potential root cause of a use-after-free memory corruption vulnerability of a program, the computing device comprising: means for replaying the execution of the program based on an execution log of the program, wherein the execution log includes an ordered set of executed instructions of the program that resulted in the occurrence of the use-after-free memory corruption vulnerability; means for comparing, in response to detecting a freed memory address access of the program, the freed memory address access to a memory address associated with the occurrence of the use-after-free memory corruption vulnerability; and means for recording, in response to detecting a match between the freed memory address access of the program and the memory address associated with the occurrence of the use-after-free memory corruption vulnerability, the freed memory address access of the program to a candidate list as a root cause candidate of the use-after-free memory corruption vulnerability. Example 26 includes the subject matter of Example 25, and further includes means for filtering the candidate list to reduce the number of candidates in the candidate list. Example 27 includes the subject matter of any of Examples 25 and 26, and wherein the means for filtering the candidate list includes: means for selecting a candidate from the candidate list; means for removing one or more instructions associated with the selected candidate from the execution of the program to generate a modified program; means for replaying the modified program; and means for removing, in response to a determination that replaying the modified program results in the use-after-free memory corruption vulnerability, the selected candidate from the candidate list. Example 28 includes the subject matter of any of Examples 25-27, and wherein the one or more instructions consist of a single 
instruction from the program. Example 29 includes the subject matter of any of Examples 25-28, and wherein the one or more instructions consist of a most recent branch instruction of the program. Example 30 includes the subject matter of any of Examples 25-29, and wherein the one or more instructions consist of an instruction corresponding to the memory access of the selected candidate. Example 31 includes the subject matter of any of Examples 25-30, and wherein the one or more instructions comprise a set of instructions of the program. Example 32 includes the subject matter of any of Examples 25-31, and wherein the set of instructions comprises the instructions defined between the most recent branch instruction and the instruction corresponding to the memory access of the selected candidate. Example 33 includes the subject matter of any of Examples 25-32, and further includes means for compiling the program with a least recently used (LRU) memory allocator, wherein the means for replaying the execution of the program includes means for replaying the execution of the program in response to the program being compiled with the LRU memory allocator. Example 34 includes the subject matter of any of Examples 25-33, and wherein the means for compiling the program includes means for overloading a memory allocator of the program. Example 35 includes the subject matter of any of Examples 25-34, and wherein the means for replaying the execution of the program includes means for replaying the execution of the program using a binary instrumentation technique. Example 36 includes a computing device for recording the execution of a program, the computing device comprising: a memory recording module to record the execution of the program to an execution log; and a memory corruption detection module to (i) monitor the execution of the program for an occurrence of a use-after-free memory corruption vulnerability, and (ii) record, to a memory log, a memory address associated with the occurrence of the use-after-free memory corruption vulnerability. Example 37 includes the subject matter of Example 36, and wherein recording the execution of the program includes recording the execution of the program using a memory race recorder. Example 38 includes the subject matter of any of Examples 36 and 37, and wherein monitoring the execution of the program includes monitoring the execution of the program for an occurrence of a memory corruption detection interrupt triggered as a result of the occurrence of the use-after-free memory corruption vulnerability. Example 39 includes the subject matter of any of Examples 36-38, and wherein recording the memory address includes determining the memory address based on a number of instructions of the program executed before the memory corruption instruction associated with the occurrence of the use-after-free memory corruption vulnerability. Example 40 includes the subject matter of any of Examples 36-39, and further includes a memory allocation module to compile the program with a least recently used (LRU) memory allocator, wherein recording the execution of the program includes recording the execution of the program in response to the compilation of the program with the LRU memory allocator. Example 41 includes the subject matter of any of Examples 36-40, and wherein compiling the program includes overloading a memory allocator of the program. Example 42 includes a method for recording the execution of a program of a computing device, the method including: recording, by the computing device, the execution of the program to an execution log; monitoring, by the computing device, the execution of the program for an occurrence of a use-after-free memory corruption vulnerability; and recording, by the computing device, a memory address associated with the occurrence of the use-after-free memory corruption vulnerability to a memory log. Example 43 includes the subject matter of Example 42, 
and wherein recording the execution of the program includes recording the execution of the program using a memory race recorder. Example 44 includes the subject matter of any of Examples 42 and 43, and wherein monitoring the execution of the program includes monitoring the execution of the program for an occurrence of a memory corruption detection interrupt triggered as a result of the occurrence of the use-after-free memory corruption vulnerability. Example 45 includes the subject matter of any of Examples 42-44, and wherein recording the memory address includes determining the memory address based on a number of instructions of the program executed before the memory corruption instruction associated with the occurrence of the use-after-free memory corruption vulnerability. Example 46 includes the subject matter of any of Examples 42-45, and further includes compiling, by the computing device, the program with a least recently used (LRU) memory allocator, wherein recording the execution of the program includes recording the execution of the program in response to compiling the program with the LRU memory allocator. Example 47 includes the subject matter of any of Examples 42-46, and wherein compiling the program includes overloading a memory allocator of the program. Example 48 includes a computing device comprising a processor and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 42-47. Example 49 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any of Examples 42-47. Example 50 includes a computing device for recording the execution of a program, the computing device comprising: means for recording the execution of the program to an execution log; and a 
means for monitoring the execution of the program for an occurrence of a use-after-free memory corruption vulnerability; and means for recording, to a memory log, a memory address associated with the occurrence of the use-after-free memory corruption vulnerability. Example 51 includes the subject matter of Example 50, and wherein the means for recording the execution of the program includes means for recording the execution of the program using a memory race recorder. Example 52 includes the subject matter of any of Examples 50 and 51, and wherein the means for monitoring the execution of the program includes means for monitoring the execution of the program for an occurrence of a memory corruption detection interrupt triggered as a result of the occurrence of the use-after-free memory corruption vulnerability. Example 53 includes the subject matter of any of Examples 50-52, and wherein the means for recording the memory address includes means for determining the memory address based on a number of instructions of the program executed before the memory corruption instruction associated with the occurrence of the use-after-free memory corruption vulnerability. Example 54 includes the subject matter of any of Examples 50-53, and further includes means for compiling the program with a least recently used (LRU) memory allocator, wherein the means for recording the execution of the program includes means for recording the execution of the program in response to the program being compiled with the LRU memory allocator. Example 55 includes the subject matter of any of Examples 50-54, and wherein the means for compiling the program includes means for overloading a memory allocator of the program.
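The recording workflow of Examples 36-55 pairs an execution log with a corruption monitor. The following Python sketch is a behavioral illustration only; the Recorder class, its method names, and the string trace representation are hypothetical, and a real implementation would rely on hardware tracing and a memory race recorder rather than a Python list.

```python
from dataclasses import dataclass, field

@dataclass
class Recorder:
    """Toy model of the recording workflow of Examples 36-55."""
    execution_log: list = field(default_factory=list)   # ordered instructions
    corruption_log: list = field(default_factory=list)  # (address, instr count)

    def record(self, instr):
        # Example 36(i): record each executed instruction to the execution log.
        self.execution_log.append(instr)

    def on_corruption_interrupt(self, faulting_addr):
        # Examples 38-39: when the memory corruption detection interrupt fires,
        # identify the corrupting access by the number of instructions executed
        # before it, and record the associated memory address.
        self.corruption_log.append((faulting_addr, len(self.execution_log)))

rec = Recorder()
for instr in ["alloc p", "free p", "load p"]:  # hypothetical toy trace
    rec.record(instr)
rec.on_corruption_interrupt(0xDEAD)            # use-after-free detected
```

The recorded (address, instruction count) pair is exactly what the replay and candidate-identification stages described earlier consume.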
A microprocessor or microcontroller device may have a central processing unit (CPU) and a data memory coupled with the CPU, wherein the data memory is divided into a plurality of memory banks and a bank select register determines which memory bank is currently coupled with the CPU. Furthermore, a first and a second set of special function registers are provided, wherein upon occurrence of a context switch either the first or the second set of special function registers is selected as active context registers for the CPU and the respective other set of special function registers is selected as inactive context registers, wherein at least some of the registers of the active context registers are memory mapped to more than two memory banks of the data memory and wherein all registers of the inactive context registers are memory mapped to at least one memory location within the data memory.
CLAIMS WHAT IS CLAIMED IS: 1. A microprocessor or microcontroller device comprising: a central processing unit (CPU); a data memory coupled with the CPU, wherein the data memory is divided into a plurality of memory banks, wherein a bank select register determines which memory bank is currently coupled with the CPU; and a first set of special function registers and a second set of special function registers, wherein upon occurrence of a context switch either the first or the second set of special function registers is selected as active context registers for the CPU and the respective other set of special function registers is selected as inactive context registers, wherein at least some of the registers of the active context registers are memory mapped to more than two memory banks of said data memory and wherein all registers of the inactive context registers are memory mapped to at least one memory location within the data memory. 2. The device according to claim 1, wherein all registers of the inactive context registers are memory mapped to only one memory bank of said plurality of memory banks. 3. The device according to claim 1, wherein at least some of the registers of the active context registers are memory mapped to all memory banks of said data memory. 4. The device according to claim 1, wherein the context registers comprise a working register, a status register, a file select register for defining an indirect address, and a bank select register. 5. The device according to claim 4, wherein only the status register and the file select register of the active context registers are memory mapped to all memory banks of said data memory and the working register and the bank select register are non-memory mapped registers. 6. The device according to claim 1, wherein the inactive context registers are memory mapped to the last memory bank of said data memory. 7. 
The device according to claim 1, further comprising an interrupt unit coupled with the CPU, wherein said context switch is induced by an interrupt. 8. The device according to claim 1, wherein said context switch is software induced. 9. The device according to claim 1, wherein the device comprises four memory banks. 10. The device according to claim 9, wherein the inactive context registers are memory mapped only into the fourth bank. 11. A method of operating a microprocessor or microcontroller device comprising a central processing unit (CPU); a data memory coupled with the CPU, wherein the data memory is divided into a plurality of memory banks; and a first and a second set of special function registers, wherein either said first or said second set of special function registers forms an active context and the respective other set an inactive context, the method comprising the steps of: selecting either said first or said second set of registers as an active context and said respective other set of registers as an inactive context, wherein at least some of the registers of the active context registers are memory mapped to more than two memory banks of said data memory and wherein all registers of the inactive context registers are memory mapped to at least one memory location within said data memory; and upon occurrence of a context switch, switching between said first and second set of registers as active and inactive context, respectively. 12. The method according to claim 11, wherein all registers of the inactive context registers are memory mapped to only one memory bank of said plurality of memory banks. 13. The method according to claim 11, further comprising inducing said context switch by an interrupt. 14. The method according to claim 11, wherein said context switch is software induced. 15. The method according to claim 11, wherein the device comprises four memory banks. 16. 
The method according to claim 15, wherein the inactive context registers are memory mapped only into the fourth bank. 17. The method according to claim 11, wherein at least some of the registers of the active context are memory mapped to all memory banks of said data memory. 18. The method according to claim 11, wherein the context registers comprise a working register, a status register, a file select register for defining an indirect address, and a bank select register. 19. The method according to claim 18, wherein only the status register and the file select register of the active context registers are memory mapped to all memory banks of said data memory and the working register and the bank select register are non-memory mapped registers. 20. The method according to claim 11, wherein the inactive context registers are memory mapped to the last memory bank of said data memory. 21. The method according to claim 11, further comprising accessing the registers of the inactive context by selecting the respective memory bank through the active context and accessing the inactive context registers. 22. The method according to claim 11, further comprising generating a plurality of interrupts, wherein upon occurrence of an interrupt a context switch takes place, wherein an interrupt routine is executed and wherein the interrupt routine uses the values stored in the selected register set during a previous execution of the interrupt routine.
MICROCONTROLLER WITH CONTEXT SWITCH CROSS-REFERENCE TO RELATED APPLICATIONS This application claims the benefit of U.S. Provisional Application No. 61/613,802 filed on March 21, 2012, which is incorporated herein in its entirety. TECHNICAL FIELD The present disclosure relates to a microcontroller, in particular a microcontroller with automatic context switching capabilities. BACKGROUND Microcontrollers generally are systems on a single chip and comprise a microcontroller core or central processing unit and a plurality of peripheral components. A wide variety of such microcontrollers exists, having 8-bit, 16-bit and 32-bit architectures. Existing microcontrollers, such as 8-bit microcontrollers manufactured by the Applicant Microchip Technology Inc., provide for a flexible architecture wherein a plurality of families is available, each family having a different complexity. Such microcontrollers may, for example, comprise a Harvard architecture in which program and data memories are separated. Microcontrollers of this type may further comprise a specific banking system that allows access to the data memory without complex decoding circuitry. Generally, the data memory is divided into a plurality of banks, and a bank select register defines which of the banks is currently selected and accessible. To access other banks, the bank select register has to be re-programmed. Even though such a banking scheme only allows access to a single defined memory bank, these controllers may include instructions that force a switch to a predefined bank. This provides for improved and powerful performance despite the general access limitations. As mentioned above, different families of microcontrollers can be provided within an 8-bit product palette. For example, a baseline family might provide only essential functionalities, which allows such devices to be manufactured at very low cost. 
For example, such a baseline product may not support interrupts, whereas more advanced families may have these functionalities. Interrupt functionality can add significant circuitry, which does not allow such devices to be manufactured at very low cost. As mentioned above, many microcontroller designs, in particular 8-bit microcontrollers, have a reduced functionality and therefore a simplified architecture to save valuable silicon real estate and allow for a reduced chip size and thus a higher number of chips per wafer. For example, according to Applicant Microchip Technology Inc.'s product line, code execution on many so-called baseline 8-bit microcontrollers is limited by the lack of interrupt functions. Fig. 1 shows a simplified block diagram of such a conventional microcontroller with a data memory that can be accessed with a banking mechanism. A program memory 110 stores a plurality of instructions forming an executable program. Program counter 115 may be designed to have, for example, 11 bits for addressing a 2k linear program memory. A stack 120 may be provided to store program counter values when subroutines are executed. The shown exemplary microcontroller is an 8-bit Harvard-type microcontroller that operates with 12-bit instruction words stored in program memory 110. Thus, a central 8-bit data bus 105 may be used to couple various functional elements within the microcontroller, such as, for example, timer unit 0 and external port B 130. The data memory 125 is coupled with this bus 105 and receives, for example, an 8-bit address from address multiplexer 140. For direct addressing, address multiplexer 140 combines an address from address data supplied by the instruction register 135 and address data supplied by special function register 145. In direct addressing mode, the instruction register 135 thus supplies the lower 5 bits and the special function register 145 the upper 3 bits. 
Thus, according to an embodiment, special function register 145 operates as a bank select register capable of selecting one of 8 different memory banks. In indirect addressing, special function register 145 provides a complete address with all bits 0-7. Indirect addressing is implemented by accessing special function register INDF, which is a virtual register and therefore not physically implemented. Any read or write access to this register INDF forces an indirect access to the data memory 125 via special function register 145. Thus, instead of reading or writing register INDF, an indirect data memory access is performed. According to this type of architecture, instruction register 135 receives an instruction directly from program memory 110 and is coupled with an instruction decode & control unit 180, for example, through another internal 8-bit bus. Instruction decode & control unit 180 is furthermore coupled with certain internal functions provided by unit 175. For example, this functional unit 175 may include a device reset timer, a power-on reset, a watchdog timer, an internal RC clock, etc. Other functions can be integrated and/or certain functions may be omitted. Timing generation unit 185 may provide internal timing signals and can also be coupled with unit 175. The conventional 8-bit microcontroller core shown in Figure 1 has an arithmetic logic unit (ALU) 160 coupled with a status register 150. The ALU 160 is further coupled with a working register 165 and receives data from the instruction register 135 and the 8-bit data bus through multiplexer 155 on one hand, and from working register 165 on the other hand. Figure 1 thus merely shows some essential structure of a so-called baseline microcontroller core. Figure 2 shows an example of another block diagram of a microcontroller core that provides more functionality. Generally, similar elements carry the same reference symbols. 
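The address formation of the Fig. 1 baseline core described above can be illustrated with a short sketch: the instruction supplies the lower 5 bits, FSR bits 7:5 select one of 8 banks of 32 bytes, and an access to the virtual INDF register instead uses all 8 FSR bits. The register names follow the text; the function itself is an illustrative model, not production firmware.

```python
# Behavioral sketch of the Fig. 1 data-memory address formation:
# 8 banks x 32 bytes, addressed by an 8-bit effective address.
def effective_address(instr_addr5, fsr, indirect=False):
    if indirect:
        # A read/write of the virtual INDF register forces an indirect
        # access using the full 8-bit FSR value (bits 0-7).
        return fsr & 0xFF
    # Direct addressing: FSR<7:5> acts as the bank select (upper 3 bits),
    # and the instruction supplies the lower 5 bits of the address.
    bank = (fsr >> 5) & 0x7
    return (bank << 5) | (instr_addr5 & 0x1F)
```

For example, with FSR = 0x60 (bank 3) a direct access to instruction offset 0x12 resolves to address 0x72, while an INDF access with FSR = 0xA5 reaches 0xA5 regardless of the selected bank.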
The data memory RAM 225 shown in Figure 2 can be identical to the memory shown in Figure 1. However, a different reference symbol is used to indicate that this RAM 225 is mapped differently, as will be explained below in more detail. This data memory now comprises a linear memory block consisting of a plurality of sequential memory banks to which no special function registers are mapped. An additional bank select register (BSR) 210 is provided, wherein this register is accessible through a dedicated instruction and therefore may not be memory mapped. The content of this register 210 provides the upper 3 bits of an address provided by address multiplexer 220, which receives the lower 5 bits from instruction register 135. The special function register FSR 145 may now be an 8-bit register which can be used for indirect addressing of the entire linear data memory independent of the currently selected memory bank. In other embodiments, this register can be limited to access the upper 4 banks that form the linear data memory by setting bit 7 permanently to "1". However, this register no longer provides the bank select function per se. Bank selection is effected only by writing a respective bank number into the non-memory mapped bank select register 210. Thus, even when a memory bank within the linear memory block is selected, the dedicated instruction allows a change to any other memory bank. Other internal structures of low cost microcontroller cores are possible and can be combined with the specific embodiments disclosed herein, as will be explained in more detail below. As mentioned above, many low cost microcontroller cores do not provide interrupt functionality due to the associated increase in core logic. A simple interrupt logic 250 can be added to the architectures mentioned above as shown in Fig. 
2; for example, a single interrupt input INT can be provided which may initiate an interrupt from various sources, wherein software has to handle identification and management of interrupt related tasks. If such a simple interrupt logic 250 is implemented, then interrupt service routine code must share common special function registers with main line code. Thus, certain registers, such as registers 245, 165 and 150, need to be manually saved when entering an interrupt routine. Certain microcontrollers, for example, Applicant's microcontroller series PIC16F1xxx, provide an automatic save and restore function of context registers using so-called shadow registers. The shadow registers are special function registers provided merely to save the current context. They are overwritten each time an interrupt is initiated, and their content is written back to the respective context registers upon return from the interrupt routine. However, while this is an improvement when adding interrupt capability, there exists a need for an even more improved automatic context switching that avoids the need to manually store and restore those registers and allows for further use of the saved context. 
SUMMARY According to an embodiment, a microprocessor or microcontroller device may comprise a central processing unit (CPU); a data memory coupled with the CPU, wherein the data memory is divided into a plurality of memory banks, wherein a bank select register determines which memory bank is currently coupled with the CPU; and a first set of special function registers and a second set of special function registers, wherein upon occurrence of a context switch either the first or the second set of special function registers is selected as active context registers for the CPU and the respective other set of special function registers is selected as inactive context registers, wherein at least some of the registers of the active context registers are memory mapped to more than two memory banks of the data memory and wherein all registers of the inactive context registers are memory mapped to at least one memory location within the data memory. According to a further embodiment, all registers of the inactive context registers can be memory mapped to only one memory bank of the plurality of memory banks. According to a further embodiment, at least some of the registers of the active context registers can be memory mapped to all memory banks of the data memory. According to a further embodiment, the context registers may comprise a working register, a status register, a file select register for defining an indirect address, and a bank select register. According to a further embodiment, only the status register and the file select register of the active context registers may be memory mapped to all memory banks of the data memory, and the working register and the bank select register are non-memory mapped registers. According to a further embodiment, the inactive context registers can be memory mapped to the last memory bank of the data memory. 
According to a further embodiment, the device may further comprise an interrupt unit coupled with the CPU, wherein the context switch is induced by an interrupt. According to a further embodiment, the context switch can be software induced. According to a further embodiment, the device may comprise four memory banks. According to a further embodiment, the inactive context registers can be memory mapped only into the fourth bank. According to another embodiment, a method of operating a microprocessor or microcontroller device comprising a central processing unit (CPU); a data memory coupled with the CPU, wherein the data memory is divided into a plurality of memory banks; a first and second set of special function registers wherein either the first or the second set of special function registers forms an active context and the respective other set an inactive context, may comprise the steps of: selecting either the first or the second set of registers as an active context and the respective other set of registers as an inactive context, wherein at least some of the registers of the active context registers are memory mapped to more than two memory banks of the data memory and wherein all registers of the inactive context registers are memory mapped to at least one memory location within the data memory; upon occurrence of a context switch, switching between the first and second set of registers as active and inactive context, respectively. According to a further embodiment of the method, all registers of the inactive context registers can be memory mapped to only one memory bank of the plurality of memory banks. According to a further embodiment of the method, the method may further comprise inducing the context switch by an interrupt. According to a further embodiment of the method, the context switch can be software induced. According to a further embodiment of the method, the device may comprise four memory banks. 
According to a further embodiment of the method, the inactive context registers can be memory mapped only into the fourth bank. According to a further embodiment of the method, at least some of the registers of the active context can be memory mapped to all memory banks of the data memory. According to a further embodiment of the method, the context registers may comprise a working register, a status register, a file select register for defining an indirect address, and a bank select register. According to a further embodiment of the method, only the status register and the file select register of the active context registers are memory mapped to all memory banks of the data memory, and the working register and the bank select register are non-memory mapped registers. According to a further embodiment of the method, the inactive context registers are memory mapped to the last memory bank of the data memory. According to a further embodiment of the method, the method may further comprise accessing the registers of the inactive context by selecting the respective memory bank through the active context and accessing the inactive context registers. According to a further embodiment of the method, the method may comprise generating a plurality of interrupts, wherein upon occurrence of an interrupt, a context switch takes place, wherein an interrupt routine is executed and wherein the interrupt routine uses the values stored in the selected register set during a previous execution of the interrupt routine. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 shows a block diagram of a conventional microcontroller; Fig. 2 shows a block diagram of another conventional microcontroller; Fig. 3 shows an embodiment of a swappable shadow register set; Fig. 4 shows another embodiment of a circular buffer for providing a dual register set; Fig. 5 shows a memory mapping according to various embodiments; Figs. 6 and 7 show a detailed special function register summary table; and Fig. 
8 shows interrupt priorities according to various embodiments. DETAILED DESCRIPTION Baseline CPUs in the above mentioned microcontrollers often do not have interrupt capabilities. Adding interrupts to the baseline CPU presents specific drawbacks and limitations. The context switching according to various embodiments overcomes many of those drawbacks. When a device with interrupt capability vectors to its Interrupt Service Routine (ISR), the values, or context, of various registers must be saved and restored to allow the program to resume from where it left off upon return to mainline code. Other registers must be reinitialized each time the device vectors to the ISR. Context switching according to various embodiments allows a copy of these critical registers to be maintained for each of the mainline and ISR execution code, and swapped for use. According to various embodiments, the addition of a circular buffer to the context registers is provided that allows the contained values to swap on entry and exit of the ISR. According to various embodiments, as shown for example in Fig. 3 or Fig. 4, a context switching mechanism can be added to a basic processor logic to provide interrupt capabilities and the associated context switch. Thus, an architecture as shown in Figs. 1 or 2 can be improved by providing an additional set of registers wherein a swapping function is added. Hence, instead of saving the current context into a shadow register set, an entire second set of registers is provided, and during execution of an interrupt this second set is used. Thus, an interrupt context is provided and the content of the interrupt context remains stored in the associated interrupt context registers, while a "normal" context is maintained in the same way by means of the regular context registers. 
This type of double register set can be implemented in particular by the use of a circular buffer which uses respective pointers that wrap around a block of data to provide the circular buffer function, as will be explained in more detail below. Fig. 3 shows an exemplary embodiment of a circular buffer which can be used to provide context switching functionality. The circular buffer 300 may be designed to provide storage capacity for two register sets used to store the active and the inactive context. For example, a context may comprise 4, 8 or 16 registers; thus, the circular buffer 300 would provide for 8, 16 or 32 memory locations or registers. The circular buffer 300 can be at least partially memory mapped into the main data memory 225. For example, the first half of circular buffer 300 may be used as the main context and the second half may be used as the interrupt context. Only one of the two contexts is "active" at any time, whereas the respective other context is "inactive." Thus, certain registers of the first half of the circular buffer can be memory mapped to all memory banks, wherein all registers of the second half may be memory mapped only to a single, preferably the last, memory bank. Thus, the memory mapping of the active register set to the banks does not need to be the same as the memory mapping of the inactive register set. While the active context can be partially mapped to all or a selected number of banks, all registers of the "inactive" context are memory mapped to a single memory bank, preferably the last memory bank. Certain registers of the active context may not be memory mapped at all but rather hard wired or mapped to a specific register to perform a specific function. Thus, during operation of the microcontroller, the memory mapped registers of the active context would be accessible in all memory banks, whereas other non-memory mapped registers are available only through dedicated instructions. 
For example, a bank select register may only be available through a specific instruction. A working register may or may not be memory mapped according to various embodiments. Certain context registers may be memory mapped to all memory banks, for example a status register STATUS and/or a file select register FSR. However, all registers of an inactive context are memory mapped to at least a single memory location. They do not need to be memory mapped to the same locations as the respective registers of the active context, which would make those active registers unavailable in that memory bank. Also, there is no requirement that these registers are all placed in the same bank or located in the last bank according to other embodiments. Memory mapping according to this embodiment can be provided by means of pointers 310, 330 as shown in Fig. 3. Thus, pointer 310 points to a first register of the currently active context, whereas pointer 330 points to the first register of the inactive context. Other registers may be memory mapped according to a predefined relationship. Thus, the memory mapping does not have to be continuous. A table may be defined to memory map each register. Such a table may also be used to define non-memory mapped registers of the active context. Similarly, all registers of the inactive context may be memory mapped by means of a table to a single memory bank. A context switch from A to B as shown in Fig. 3 causes pointer 310' to point to the bottom half, whereas pointer 330' now points to the top half of circular buffer 300. This functionality can be provided by simply adding a constant value to the respective address pointers. By wrapping around the maximum possible address for buffer 300, a circular buffer function is realized. Hence, during normal operation the context defined by the top half of buffer 300 will be selected as the active context as shown with reference symbol A on the left side of Fig. 
3, whereas the context of the bottom half of buffer 300 is only memory mapped to the last bank. Upon entering an interrupt routine, the context pointers are positioned according to reference symbol B as shown on the right side of Fig. 3. Now pointer 310' points to the bottom half of buffer 300. Thus, the bottom half registers of buffer 300 are now selected as the active context. Re-entry of an interrupt routine therefore now provides the same context as it was left by a previous execution of an interrupt routine, wherein the inactive context is fully available through the last memory bank. Fig. 4 shows another embodiment that provides a similar functionality. Here two buffers 410 and 420 are provided for a normal context and an interrupt context, respectively. Bidirectional multiplexers 430 and 440 are provided to memory map certain registers to the various memory banks 450₁..450ₙ of data memory 450. For example, the first I/O of multiplexer 430 may memory map certain registers of register set 410 to all memory banks 450₁..450ₙ, whereas the second I/O maps all registers only to memory bank 450ₙ. The second multiplexer 440 performs the reverse function as shown in Fig. 4. Thus, either register set 410 or register set 420 is selected as the main register set. Additional circuitry may be provided for connecting to or selecting the non-memory mapped registers. Fig. 5 shows an implementation of the memory mapping in a baseline microcontroller according to an embodiment. Here, for example, the data memory only provides memory space for four memory banks, wherein each memory bank comprises 32 registers. Thus, each memory bank can be fully addressed by only 5 bits. This allows for a reduced instruction size, for example using only 12 bits. As shown in Fig. 5, a context may consist of only a limited number of selected special function registers. According to Fig. 
5, a context has four registers: the working register W, the status register STATUS, the indirect address register FSR, and the bank select register BSR. As can be seen, according to this embodiment, two of the four active context registers are not memory mapped at all, namely the working register W and the bank select register BSR. The other two, the status register STATUS and the file select register FSR, are memory mapped to all memory banks at addresses 03h and 04h, respectively. The last memory bank "011" contains the inactive context. As shown, the inactive working register I_W is stored at address 01h, the inactive status register I_STATUS at address 06h, the inactive file select register I_FSR at address 07h, and the inactive bank select register I_BSR at address 08h. In the embodiment of Fig. 5, registers at addresses 0Ch to 0Fh are memory mapped to all memory banks, whereas each bank has separate general purpose registers at memory locations 10h to 1Fh. Moreover, the memory mapping of special function registers at addresses 00h to 0Bh is not the same for all banks. Only banks "000" and "010" have an identical memory mapping for those addresses. Other registers or more registers may be chosen for a context according to other embodiments. Figs. 6 and 7 show a more detailed list of only the first 12 memory mapped special function registers. Again, a context consists of four registers: the working register W, the bank select register BSR, the status register STATUS, and the file select register FSR. According to this embodiment, again only two registers, STATUS and FSR, of the active context are memory mapped to all memory banks at respective addresses 03h and 04h as shown in Figs. 6 and 7, whereas four non-memory mapped registers W, TRIS, OPTION and BSR are still provided to all banks as shown in the table of Figs. 6 and 7. The inactive context is only memory mapped to the last memory bank at linearized addresses 61h, 66h, 67h, and 68h. 
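The dual-context mapping described above can be modeled, purely as an illustrative sketch (class and method names are assumptions, not part of the disclosure), by two register sets where the active STATUS and FSR appear at 03h/04h in every bank while the inactive set is visible only in the last bank:

```python
# Hypothetical model of the Fig. 5 mapping: two context register sets
# (W, STATUS, FSR, BSR); STATUS/FSR of the active set appear at 03h/04h in
# every bank, while the inactive set is visible only in bank "011" at
# addresses 01h, 06h, 07h and 08h. Structure is illustrative only.

class DualContext:
    REGS = ("W", "STATUS", "FSR", "BSR")

    def __init__(self):
        self.sets = [dict.fromkeys(self.REGS, 0) for _ in range(2)]
        self.active = 0  # index of the currently active context

    def swap(self):  # e.g. on interrupt entry or RETFIE
        self.active ^= 1

    def read(self, bank: int, offset: int) -> int:
        act = self.sets[self.active]
        inact = self.sets[self.active ^ 1]
        if offset == 0x03:            # STATUS, mapped to all banks
            return act["STATUS"]
        if offset == 0x04:            # FSR, mapped to all banks
            return act["FSR"]
        if bank == 0b011:             # inactive context, last bank only
            inactive_map = {0x01: "W", 0x06: "STATUS", 0x07: "FSR", 0x08: "BSR"}
            if offset in inactive_map:
                return inact[inactive_map[offset]]
        raise KeyError("not a context register at this address")

ctx = DualContext()
ctx.sets[0]["STATUS"] = 0x55      # main-line context
ctx.swap()                        # enter ISR: second set becomes active
print(hex(ctx.read(0b011, 0x06))) # main-line STATUS visible as I_STATUS
```

As in the text, the swap makes the previous context's registers readable through the last bank without any save/restore copying.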
According to various embodiments, additional instructions can be provided for a baseline microcontroller with the enhanced interrupt functionality as explained above. For example, a Return, a Return from Interrupt, and a Move Literal to BSR instruction can be added to such a microcontroller core to further support interrupts and other context switching functionalities. According to various embodiments, context switching of important special function registers can thus be added not only for Interrupt Service Routine entrance and exit but also for other events controlled by software. According to various embodiments, three new instructions may be added to a baseline microcontroller: MOVLB - Move Literal to BSR Register: this instruction directly controls the bank select register by writing a constant value into it, thereby forcing a bank switch. A 12-bit opcode may use 12'h010 - 12'h017. RETURN - Return from CALL: this instruction returns from a subroutine call, wherein the baseline only provided RETLW, which returns a byte from the last program location into the working register. A 12-bit opcode may use 12'h01E. RETFIE - Return from Interrupt: this instruction returns from an interrupt, wherein, as mentioned above, conventional baseline devices did not have interrupts. A 12-bit opcode may use 12'h01F. Interrupt context switching is implemented according to various embodiments as follows: a second copy (context) of selected SFRs is used when executing from the Interrupt Service Routine. For example, the FSR, STATUS, BSR, and W registers can be swapped on the improved microcontroller device according to various embodiments. While it is known from prior art devices such as the PIC16F1xxx line to use so-called shadow registers to save a current context and restore it upon entry and exit of a service routine, the various embodiments allow swapping to a second register set that can be implemented in one of the various memory banks. 
Hence, a true context switch takes place in which the content of the second context register set is used instead of the main context register set upon a respective trigger. Thus, an interrupt routine may use an entirely different set of values for these registers without the need to first initialize them. The values for the main program are handled similarly through the swapping mechanism. There can be two register swap trigger sources: vectoring on interrupt, and the return from interrupt instruction. Each context can be triggered by its respective source. This embodiment uses two contexts. According to another embodiment, there could be four: Interrupt0, Interrupt1, Interrupt2, and Main. According to the various embodiments, an inactive context is always visible in Bank 3 of the special function registers via the I_W, I_STATUS, I_FSR, and I_BSR registers as shown in Fig. 7. The interrupt function according to various embodiments can be enabled by default. On conventional baseline devices, any interrupt source caused the device to reset. Setting the GIE bit causes the device to instead vector to address 0x004, to allow the execution of an Interrupt Service Routine (ISR). The Return From Interrupt (RETFIE) instruction is used to return from the ISR and sets the GIE bit, enabling subsequent interrupts. While the device is executing from the ISR, a secondary set of W, STATUS, FSR, and BSR registers is used by the CPU. These registers are still addressed at the same locations, but hold persistent, independent values for use inside the ISR. This allows the contents of these registers to be unaffected by interrupts in main line execution. The contents of the other context's registers are visible in bank 3 of the SFR map via the I_W, I_STATUS, I_FSR, and I_BSR registers. When executing from the ISR they will show the main line context, and vice versa. 
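The three instruction encodings listed above (MOVLB at 12'h010-12'h017, RETURN at 12'h01E, RETFIE at 12'h01F) can be sketched as a simple decoder; the assumption that the low 3 bits of the MOVLB range carry the literal bank number follows from the eight consecutive opcodes, and the function name is illustrative:

```python
# Sketch of decoding the three 12-bit opcodes added to the baseline core
# (MOVLB: 12'h010-12'h017, RETURN: 12'h01E, RETFIE: 12'h01F).

def decode(opcode: int) -> str:
    if 0x010 <= opcode <= 0x017:
        # the low 3 bits are assumed to carry the literal bank number
        return f"MOVLB {opcode & 0x7}"
    if opcode == 0x01E:
        return "RETURN"
    if opcode == 0x01F:
        return "RETFIE"
    return "OTHER"

print(decode(0x013))  # -> MOVLB 3
print(decode(0x01F))  # -> RETFIE
```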
According to an embodiment, four interrupt sources may be available: timer TMR0, analog-to-digital converter ADC, comparators, and interrupt on pin change. Interrupts are enabled using the xxIE bits in an INTEI REG register. Interrupt on pin change can be enabled using the RAWU bit of the option register OPTION to allow the RAIF bit to function. The comparator interrupt flag can be used if interrupt generation is enabled in the CM1CON0 and CM2CON0 registers as shown in Fig. 6. The GIE bit of INTCON enables vectoring to the interrupt service routine. When the WUR bit is set, any enabled interrupt source in sleep will cause the device to wake up and reset. This function is similar to traditional baseline operation. Fig. 8 shows a possible implementation of different priorities according to internal programming. Here three control bits are provided: "In Sleep" indicating a low power mode, "GIE" enabling the interrupt, and "WUR" indicating a wake up reset. The table in Fig. 8 shows the associated function according to different settings of these bits. Thus, either a device reset, a vectoring, or continued operation can be caused according to the respective setting.
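The full Fig. 8 table is not reproduced in the text, so the following mapping of the three control bits to the three possible outcomes is an assumption based only on the behavior described above (WUR forces a wake-up reset from sleep, GIE enables vectoring to 0x004):

```python
# Assumed reading of the Fig. 8 behavior for an enabled interrupt source:
# sleeping with WUR set -> wake-up reset; GIE set -> vector to the ISR at
# 0x004; otherwise execution continues. The exact table may differ.

def interrupt_action(in_sleep: bool, gie: bool, wur: bool) -> str:
    if in_sleep and wur:
        return "wake-up reset"
    if gie:
        return "vector to 0x004"
    return "continue"

print(interrupt_action(in_sleep=True, gie=False, wur=True))   # wake-up reset
print(interrupt_action(in_sleep=False, gie=True, wur=False))  # vector to 0x004
```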
A method for generating a notification by an electronic device to alert a user of the electronic device is disclosed. In this method, a speech phrase may be received. Then, the received speech phrase may be recognized, by a processor, as a command to generate the notification. In addition, one or more context data of the electronic device may be detected by at least one sensor. It may be determined whether the notification is to be generated at least based on the context data. The notification may be generated, by the processor, based on the context data and the command to generate the notification.
A method for generating a notification by an electronic device, the method comprising: receiving (410) a speech phrase; recognizing (420), by a processor, the speech phrase as a command to generate the notification; upon recognizing the speech phrase, detecting (430) context data using at least one sensor of the electronic device; determining a probability value based on the detected context data; determining (440), by the processor, whether to generate the notification based on a comparison between the probability value and a threshold value; in response to having determined to generate the notification based on the comparison between the probability value and the threshold value, generating (450), by the processor, the notification based on the command; and in response to having determined not to generate the notification based on the comparison between the probability value and the threshold value, not generating (450), by the processor, the notification based on the command. The method of claim 1, wherein the determined probability value and the threshold value are numbers between 0 and 1; wherein the probability value indicates whether the electronic device is in a context in which the notification is to be generated; and wherein the speech phrase comprises one or more keywords. The method of claim 2, wherein the speech phrase comprises at least a first speech phrase and a second speech phrase, wherein each of the first speech phrase and second speech phrase comprises a keyword or phrase, and wherein the speech phrase is recognized based on reception times of the first speech phrase and second speech phrase. The method of claim 3, wherein the speech phrase is recognized as a command to generate the notification in response to determining that the first speech phrase and the second speech phrase are received within a predetermined time period, and wherein the speech phrase is not recognized as a command to generate the notification in response to determining that the 
first speech phrase and the second speech phrase are not received within a predetermined time period. The method of claim 4, wherein detecting the context data is based on at least one among a user input, movement of the electronic device, timing information, location information of the electronic device, ambient light value, and an input sound; wherein generating the notification comprises determining whether the notification is to be generated at least based on the context data; wherein determining whether the notification is to be generated comprises deactivating a silent mode of the electronic device upon determining that the notification is to be generated; and wherein determining whether the notification is to be generated comprises at least one of: determining whether the timing information is within a predetermined time period during which the notification is not to be generated; or determining whether the location information of the electronic device corresponds to a predetermined location where the notification is not to be generated. The method of claim 5, wherein the method further comprises: locking the electronic device to prevent unauthorized access to the electronic device in response to determining that the notification is to be generated; and unlocking the electronic device in response to receiving a user input. The method of claim 6, wherein detecting context data using at least one sensor of the electronic device comprises capturing an image of a user using an image sensor, wherein the image is preferably an image of a face of a user, eyes of a user, lips of a user, or a hand of a user. The method of claim 6 or claim 7, wherein detecting the context data comprises determining whether the speech phrase is spoken in a direction other than a direction toward the electronic device. The method of claim 8, wherein detecting the context data comprises: determining a direction of departure of the speech phrase from a user; and determining that the direction of 
departure is toward the electronic device when the direction of departure is within a predetermined angle or range from a line between the electronic device and the user. The method of claim 8, wherein detecting the context data comprises: determining a direction of departure of the speech phrase from a user; determining a reference direction between the user and the electronic device; determining an angle between the determined direction of departure and the determined reference direction; comparing the determined angle with a predetermined angle within which a speech phrase may be considered to be spoken toward the device; and determining, based on the comparison, whether the speech phrase is spoken in a direction other than a direction toward the electronic device. The method of any of claims 7 to 10, wherein generating the notification comprises generating, by an output unit, at least one of the following indicative of the notification: audible sound, vibration, and visible light. An electronic device for generating a notification, the electronic device comprising: a sound sensor configured to receive a speech phrase; a speech recognition unit configured to recognise the speech phrase as a command to generate the notification; a sensor unit, comprising at least one sensor, configured to detect context data using at least one sensor upon recognising the speech phrase; a processor configured to: determine a probability value based on the detected context data; and determine whether to generate the notification based on a comparison between the probability value and a threshold value; and an output unit configured to: generate the notification based on the command in response to determining to generate the notification based on the comparison between the probability value and the threshold value; and not generate the notification based on the command in response to determining not to generate the notification based on the comparison between the probability value and the threshold 
value. The electronic device of claim 12, wherein the sensor unit comprises a sound sensor configured to receive a first speech phrase and a second speech phrase as the speech phrase; wherein each of the first speech phrase and second speech phrase comprises a keyword or phrase, and wherein the speech recognition unit is configured to recognise the speech phrase based on reception times of the first speech phrase and second speech phrase. The electronic device of claim 12, wherein the speech recognition unit is further configured to recognise the speech phrase as a command to generate the notification in response to determining that the first speech phrase and the second speech phrase are received within a predetermined time period, wherein the sensor unit comprises at least one of an input sensor, movement sensor, clock unit, location sensor, image sensor and a sound sensor. A non-transitory computer-readable storage medium comprising instructions executable to cause at least one processor of an electronic device to perform operations of any of claims 1 to 11.
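The angle test recited in the claims above can be illustrated by the following sketch; the 2-D vector representation, the helper names, and the 30-degree default are assumptions for illustration, not part of the claims:

```python
# Illustrative sketch of the claimed angle test: the speech phrase is treated
# as spoken toward the device when the angle between the direction of
# departure and the user-to-device reference direction is within a
# predetermined angle. Vectors and threshold are hypothetical.
import math

def angle_between(v1, v2) -> float:
    """Angle in degrees between two 2-D direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def spoken_toward_device(departure, reference, max_angle_deg=30.0) -> bool:
    return angle_between(departure, reference) <= max_angle_deg

print(spoken_toward_device((1.0, 0.1), (1.0, 0.0)))  # small angle -> True
print(spoken_toward_device((0.0, 1.0), (1.0, 0.0)))  # 90 degrees -> False
```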
FIELD OF THE DISCLOSURE The present disclosure relates to generating a notification by an electronic device, and more specifically, to generating a notification to alert a user of the electronic device. DESCRIPTION OF RELATED ART Recently, the use of electronic devices such as smartphones, tablet computers, wearable computers, and the like has become widespread. These devices often provide voice and/or data communication functionalities over wireless or wired networks. In addition, such devices may provide a variety of functions designed to enhance user convenience such as sound processing, image or video processing, navigation, reproduction of music or multimedia files, etc. Among such functions, conventional electronic devices are often equipped with a speech recognition function. Such electronic devices may perform a function in response to receiving and recognizing a voice command from a user. For example, an electronic device equipped with a speech recognition function may activate an application, play an audio file, or take a picture in response to a voice command from a user. Occasionally, electronic devices may be lost or misplaced by their users. In such cases, some conventional electronic devices are configured to output an alarm sound or a message to assist the users in finding the electronic devices. For example, an electronic device may alert a user of its location by generating an alarm sound in response to a voice command from the user. The electronic device may also transmit a message to another electronic device of the user to inform the user of the location of the electronic device. In some situations, however, alarm sounds may be generated erroneously. For example, if a voice command to find an electronic device is received by a user's electronic device from another person intended for his or her own electronic device, the user's electronic device may generate an alarm sound in response to the voice command. 
Further, using audio functions of electronic devices in some locations such as a library, a theater, a meeting room, and the like may be restricted or limited. In such an environment, generating an alarm sound for locating an electronic device in response to a voice command from the user or another person may be undesirable.

SUMMARY OF THE INVENTION

The present disclosure relates to generating a notification to alert a user of the electronic device based on context data of the electronic device and a command to generate the notification.

The invention is defined by the independent claims. Features of preferred embodiments are set out in dependent claims.

According to one aspect of the present disclosure, a method for generating a notification by an electronic device to alert a user of the electronic device is disclosed. In this method, a speech phrase may be received. Then, the received speech phrase may be recognized, by a processor, as a command to generate the notification. In addition, one or more context data of the electronic device may be detected by at least one sensor. It may be determined whether the notification is to be generated at least based on the context data. The notification may be generated, by the processor, based on the context data and the command to generate the notification. The disclosure also describes a computer-readable medium relating to this method.

According to another aspect of the present disclosure, an electronic device for generating a notification to alert a user of the electronic device is disclosed. The electronic device may include a sound sensor, a speech recognition unit, a sensor unit, a processor, and an output unit. The sound sensor may be configured to receive a speech phrase, and the speech recognition unit may be configured to recognize the speech phrase as a command to generate the notification. In addition, the sensor unit may be configured to detect context data of the electronic device.
Further, the processor may be configured to generate the notification based on the context data and the command. The output unit may be configured to generate at least one of audible sound, vibration, or visible light indicative of the notification. Additionally, the processor may include a notification processing unit, which is configured to determine whether the notification is to be generated based on the context data.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the inventive aspects of this disclosure will be understood with reference to the following detailed description, when read in conjunction with the accompanying drawings.

FIG. 1 illustrates an electronic device configured to generate a notification to alert a user of the electronic device based on context data of the electronic device, according to one embodiment of the present disclosure.

FIG. 2 illustrates a block diagram of the electronic device configured to generate a notification for the user based on context data of the electronic device, according to one embodiment of the present disclosure.

FIG. 3 illustrates a block diagram of the sensor unit configured to detect context data of the electronic device, according to one embodiment of the present disclosure.

FIG. 4 illustrates a flow chart of a method performed by the processor in the electronic device for generating a notification based on context data of the electronic device, according to one embodiment of the present disclosure.

FIG. 5 illustrates a flowchart of a method performed by the notification processing unit in the processor for determining whether the notification is to be generated based on the context data, according to one embodiment of the present disclosure.

FIG. 6 illustrates an input sound spoken by the user in a direction toward the electronic device, according to one embodiment of the present disclosure.

FIG. 7 illustrates an input sound spoken by the user in a direction other than a direction toward the electronic device, according to one embodiment of the present disclosure.

FIG. 8 illustrates recognizing a speech phrase as a command to generate the notification based on reception times of a first speech phrase and a second speech phrase, according to one embodiment of the present disclosure.

FIG. 9 illustrates the electronic device configured to transmit a notification including location information of the electronic device to an external device of the user, according to one embodiment of the present disclosure.

FIG. 10 illustrates a flowchart of a method performed by the processor for locking or unlocking the electronic device, according to one embodiment of the present disclosure.

FIG. 11 is a block diagram of an exemplary electronic device in which the methods and apparatus for generating a notification based on the context data and the command to generate the notification may be implemented, according to one embodiment of the present disclosure.

DETAILED DESCRIPTION

Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the inventive aspects of this disclosure. However, it will be apparent to one of ordinary skill in the art that the inventive aspects of this disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, systems, and components have not been described in detail so as not to unnecessarily obscure aspects of the various embodiments.

FIG. 1 illustrates an electronic device 120 configured to generate a notification to alert a user 110 of the electronic device 120 based on context data of the electronic device 120, according to one embodiment of the present disclosure.
As shown, the user 110 and the electronic device 120 may be located in a room 100. The electronic device 120 may be placed on a desk 102 and covered by a plurality of books 104, so that the user 110 may not be able to find the electronic device 120. As illustrated herein, the electronic device 120 may be any suitable device adapted to receive and process sounds such as a smartphone, a digital camera, a wearable computer (e.g., smart glasses, a smart watch, etc.), a personal computer, a laptop computer, a tablet computer, a gaming device, etc.

To locate the electronic device 120, the user 110 may speak a speech phrase indicative of a command to generate the notification to alert the user 110 of the electronic device 120. The electronic device 120 may receive the speech phrase from the user 110 via a sound sensor 130 in the electronic device 120. The speech phrase may be one or more predetermined keywords and/or one or more natural language phrases, as will be described in more detail below with reference to FIG. 8. Upon receiving the speech phrase, the electronic device 120 may recognize the speech phrase as the command to generate the notification.

Upon recognizing the speech phrase as the command to generate the notification, the electronic device 120 may detect context data of the electronic device 120. As used herein, the term "context data" of an electronic device may be any data or information describing or characterizing an environmental condition of the electronic device such as an ambient light level, an ambient sound level, a current time, a current location, etc. of the electronic device, and usage data indicative of whether the electronic device 120 is being used by the user 110 such as data indicative of a movement of the electronic device, an image of the user 110, a user input (e.g., a key input, a touch input, a speech input, etc.)
detected by the electronic device, an event indicative of an unacknowledged incoming communication, and/or an input sound (e.g., a speech command) spoken in a direction other than a direction toward the electronic device.

Based on the context data and the command to generate the notification, the electronic device 120 may generate the notification. In one embodiment, the electronic device 120 may determine whether the notification is to be generated based on the context data and the command to generate the notification. Upon determining that the notification is to be generated, the electronic device 120 may generate and output the notification adapted to alert the user 110 of the electronic device 120. The notification may be output using any suitable output units such as a speaker, a vibrating unit, a light output unit (e.g., a display screen, an LED flash, etc.), a communication unit, and the like that may provide an output indicative of a location or presence of the electronic device 120 and allow the user 110 to find or locate the electronic device 120.

On the other hand, the electronic device 120 may determine that the notification is not to be generated based on the context data of the electronic device 120. For example, if the context data indicates that the electronic device 120 is being used by the user 110 or is located in a library, the electronic device 120 may determine that the notification is not to be generated. In this case, even when the electronic device 120 has recognized the speech phrase as the command to generate the notification, the notification may not be generated. In this manner, generation of the notification may be controlled based on the context data of the electronic device 120 to prevent an undesired or inadvertent notification from being generated and output.

FIG. 2 illustrates a block diagram of the electronic device 120 configured to generate a notification for the user 110 based on context data of the electronic device 120, according to one embodiment of the present disclosure. The electronic device 120 may include a sound sensor 130, a sensor unit 210, an output unit 220, a communication unit 230, a storage unit 240, and a processor 250. The processor 250 may include a speech recognition unit 252, a voice assistant unit 254, and a notification processing unit 256. The processor 250 may be any suitable processor for managing and operating the electronic device 120, such as an application processor (AP), central processing unit (CPU), digital signal processor (DSP), etc. The sound sensor 130 may be a separate component from the sensor unit 210 or may be included in the sensor unit 210, and may be any suitable device capable of receiving sound and converting the sound into electronic signals indicative of the sound.

As used herein, the term "unit" may refer to one or more hardware components, sections, parts, or circuitry capable of performing or adapted to perform one or more functions and may additionally perform such functions in conjunction with or by executing processes, instructions, procedures, subroutines, or the like (e.g., program code, microcode, etc.). In turn, a "unit" may be segmented into smaller units (e.g., sub-units) or two or more units may be combined into a single "unit."

In the electronic device 120, the sound sensor 130 may be configured to receive a speech phrase from the user 110. Upon receiving the speech phrase, the sound sensor 130 may provide the speech phrase to the speech recognition unit 252 of the processor 250.
The speech recognition unit 252 in the processor 250 may be configured to recognize the speech phrase as a command to perform a function, such as a command to generate the notification, using any suitable speech recognition schemes such as Hidden Markov Models, Deep Neural Networks, or the like. Once the speech phrase is recognized as the command to generate the notification, the speech recognition unit 252 may provide the command to generate the notification to the notification processing unit 256 in the processor 250. In this case, the notification processing unit 256 may be in a deactivated state and may be activated by the speech recognition unit 252 upon recognizing the command to generate the notification. Alternatively, the notification processing unit 256 may already be activated for receiving the command to generate the notification from the speech recognition unit 252.

According to some embodiments, the speech phrase may include at least a first speech phrase and a second speech phrase, each of which may be a predetermined keyword or a phrase. For example, the speech recognition unit 252 may recognize the first speech phrase (e.g., "Hey Snapdragon") and activate the voice assistant unit 254 in the processor 250. The voice assistant unit 254 may then receive the second speech phrase (e.g., "Where are you?") via the sound sensor 130 and recognize the second speech phrase as a command to generate a notification. Upon recognizing the second speech phrase, the voice assistant unit 254 may activate the notification processing unit 256 and provide the recognized command to generate the notification to the notification processing unit 256.

In the electronic device 120, the sensor unit 210 may include any suitable number and types of sensors or devices capable of detecting context data of the electronic device 120.
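The two-stage recognition described above, in which a first activation phrase is followed by a second command phrase, can be sketched as follows. This is an illustrative sketch rather than the disclosed implementation: the class name, the example phrases, and the 5-second window (one possible "predetermined time period" between the reception times of the two phrases) are assumptions.

```python
import time


class TwoStagePhraseRecognizer:
    """Recognizes a command when an activation phrase and a command
    phrase are received within a predetermined time period."""

    def __init__(self, first_phrase, second_phrase, max_interval_s=5.0):
        self.first_phrase = first_phrase
        self.second_phrase = second_phrase
        self.max_interval_s = max_interval_s
        self._first_received_at = None  # reception time of the first phrase

    def on_speech_phrase(self, phrase, received_at=None):
        """Feed one recognized phrase; returns True only when the second
        phrase arrives within the window after the first phrase."""
        now = received_at if received_at is not None else time.monotonic()
        if phrase == self.first_phrase:
            # Activation phrase: remember its reception time.
            self._first_received_at = now
            return False
        if phrase == self.second_phrase and self._first_received_at is not None:
            within_window = (now - self._first_received_at) <= self.max_interval_s
            self._first_received_at = None  # require a fresh activation next time
            return within_window
        return False


recognizer = TwoStagePhraseRecognizer("Hey Snapdragon", "Where are you?")
```

A real device would feed this from a keyword-spotting front end; here the reception times are passed explicitly so the windowing logic is easy to follow.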
For example, the sensor unit 210 may include a sound sensor (e.g., the sound sensor 130), an image sensor, a motion sensor, a clock unit, a location sensor, an input unit, and the like, as will be described in more detail with reference to FIG. 3. The sensor unit 210 may detect context data such as a user input, an image of the user 110, an environmental condition (e.g., location information, timing information, an ambient light value), a movement of the electronic device 120, an event indicative of an unacknowledged incoming communication, and/or an input sound (e.g., a speech command) spoken in a direction other than a direction toward the electronic device 120, and provide the context data to the notification processing unit 256. In one embodiment, the sensor unit 210 may be configured to monitor context data continuously, periodically, or intermittently. Additionally or alternatively, the sensor unit 210 may be configured to detect context data upon receiving and/or recognizing a speech phrase indicative of a command to generate the notification.

Upon receiving the command to generate the notification, the notification processing unit 256 may be configured to determine whether the notification is to be generated based on the context data received from the sensor unit 210 and/or the sound sensor 130. For example, if the context data indicates that the electronic device 120 is likely to be inaccessible to the user 110 (e.g., lost or misplaced), the notification processing unit 256 of the electronic device 120 may determine that the notification is to be generated.
On the other hand, if the context data indicates that the electronic device 120 is located at a place such as in a library, a movie theater, etc., where the use of the electronic device 120 may be restricted, the notification processing unit 256 may determine that the notification is not to be generated.

The notification processing unit 256 may be configured to instruct the output unit 220 to generate the notification based on the context data and the recognized command to generate the notification. According to one embodiment, in response to determining that the notification is to be generated based on the context data, the notification processing unit 256 may generate one or more signals configured to control generation of the notification by the output unit 220. For example, the notification processing unit 256 may provide one or more signals to activate and/or instruct the output unit 220 to generate the notification upon determining that the notification is to be generated. On the other hand, the notification processing unit 256 may determine that the notification is not to be generated based on the context data. In this case, the notification processing unit 256 may not provide any signals to instruct the output unit 220 for generating the notification or may provide one or more signals to deactivate and/or instruct the output unit 220 to prevent generation of the notification. In this manner, the notification may not be output based on the context data even when the speech phrase received from the user 110 is recognized as a command to generate the notification.

The output unit 220 may be configured to generate the notification based on the context data and the command to generate the notification. As described herein, the output unit 220 may be any suitable component capable of outputting the notification in response to one or more control signals from the notification processing unit 256.
In one embodiment, the output unit 220 may include any one of a speaker 222, a vibrating unit 224, a display screen 226, an LED unit 228, etc., or any combination thereof. For example, the speaker 222 in the electronic device 120 may output an audible sound (e.g., an alarm sound, a ringtone, or the like) to assist the user 110 in finding the electronic device 120. Additionally or alternatively, the vibrating unit 224 may vibrate, or the display screen 226 or the LED unit 228 may output visible light. In an additional or alternative embodiment, the notification processing unit 256 may generate a notification (e.g., a message indicating a location of the electronic device, which may be obtained from a location sensor in the sensor unit 210), and transmit the notification to an external device associated with the user 110 via the communication unit 230.

The storage unit 240 in the electronic device 120 may store a command database (not shown) of one or more predetermined speech phrases for the electronic device 120 to generate the notification. The command database may be accessed by the speech recognition unit 252 and/or the voice assistant unit 254 in the processor 250 to recognize a received speech phrase as the command to generate the notification. In some embodiments, the storage unit 240 may store a context database (not shown), which may be accessed by the notification processing unit 256 in the processor 250 for use in determining whether the notification is to be generated based on the context data. The context database may be configured to store any suitable types of data or information that may be used for determining whether the notification is to be generated, such as a predetermined location where the notification is not to be generated, a predetermined time period during which the notification is not to be generated, and the like.
In one embodiment, the context database may be updated based on context data received continuously, periodically, or intermittently by the sensor unit 210. The storage unit 240 may be implemented using any suitable storage or memory devices such as a RAM (Random Access Memory), a ROM (Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory, or an SSD (solid state drive).

FIG. 3 illustrates a block diagram of the sensor unit 210 configured to detect context data of the electronic device 120, according to one embodiment of the present disclosure. The sensor unit 210 may include a plurality of sensors such as a sound sensor 130, an image sensor 310, a movement sensor 320 (e.g., an accelerometer, a gyroscope, etc.), a clock unit 330, a location sensor 340, and an input unit 350 (e.g., a touch screen, a key or button, etc.). The sensors 130, 310, 320, 330, 340, and 350 may detect one or more inputs as context data, which may be provided to the notification processing unit 256 in the processor 250.

The sound sensor 130 may be configured to receive an input sound and convert the input sound into sound data, which may be output as context data to the notification processing unit 256. The sound sensor 130 may include one or more microphones or any other types of sound sensors that can be used to receive, capture, sense, and/or detect an input sound, and may employ any suitable software and/or hardware for performing such functions. In one embodiment, the sound sensor 130 may receive an input sound including a speech phrase spoken from the user 110. The input sound may also include an environmental sound of the electronic device 120 or from the user 110 such as background sound, noise, etc.
As the input sound is received, the sound sensor 130 may generate sound data, which may be provided to the notification processing unit 256 as context data.

According to one embodiment, the sound sensor 130 may be also configured to receive a speech phrase as a command to generate a notification from the user 110 and provide the speech phrase to the speech recognition unit 252. In another embodiment, the speech phrase may include a first speech phrase and a second speech phrase. In this case, the sound sensor 130 may provide the first speech phrase to the speech recognition unit 252, which may activate the voice assistant unit 254 upon recognizing the first speech phrase as a command to activate the voice assistant unit 254. The voice assistant unit 254 may receive the second speech phrase from the sound sensor 130 and recognize the phrase as a command to generate the notification.

In the sensor unit 210, the image sensor 310 may be configured to capture one or more images such as a face, eyes, lips, or a hand of a user, etc. The images may also include a background image of the user or the electronic device 120. According to one embodiment, the image sensor 310 may capture an image of a face, an eye (e.g., iris), or any other physical images that can be used to identify a user. According to another embodiment, an ambient light level of the electronic device 120 may be detected by the image sensor 310. The image sensor 310 may then provide the images and/or the ambient light level as context data to the notification processing unit 256 in the processor 250. As described herein, the image sensor 310 may be any suitable image or light sensing device (e.g., a camera, a camera module, a charge-coupled device, etc.) capable of capturing or sensing an image or a light level.

The movement sensor 320 may be configured to detect a movement of the electronic device 120.
In one embodiment, the movement sensor 320 may be a gyroscope and/or an accelerometer configured to monitor orientations and/or acceleration of the electronic device 120 and generate data indicative of a change in orientation or a motion of the electronic device 120. For example, the gyroscope may detect orientations of the electronic device 120 to track a motion or movement of the electronic device 120. On the other hand, the accelerometer may detect acceleration or orientations of the electronic device 120 to track a motion of the electronic device 120. The generated data indicative of a change in orientation or a motion of the electronic device 120 may be provided to the notification processing unit 256 as context data.

The clock unit 330 in the sensor unit 210 may be configured to detect timing information (e.g., a current time) of the electronic device 120 and output the detected timing information as context data. The clock unit 330 may be a timing device or clock embedded in the electronic device 120 and configured to track current time. Additionally or alternatively, the clock unit 330 may be implemented in the processor 250 as a CPU clock, receive timing information from an external network via the communication unit 230, or use GPS time information received via the location sensor 340 to keep track of the current time. The clock unit 330 may provide the timing information to the notification processing unit 256 as context data.

The location sensor 340 may be configured to detect location information (e.g., a current location) of the electronic device 120 and output the detected location information as context data. In one embodiment, the location sensor 340 may be a GPS receiver configured to detect GPS location information and timing information based on GPS signals received from a plurality of GPS satellites.
Additionally or alternatively, the location sensor 340 may be a wireless receiver configured to receive signals from a plurality of Wi-Fi access points or cell tower base stations and detect location information of the electronic device 120. The location sensor 340 may then provide the location information, which may include a set of latitude, longitude, and altitude of the electronic device 120, to the notification processing unit 256 as context data.

The input unit 350 may be configured to detect an input from a user (e.g., a manual input) of the electronic device 120 and output the detected input as context data. In one embodiment, the input unit 350 may be any suitable input devices for receiving an input from a user (e.g., a user input) and may include a touch screen, a button, a keypad, a touchpad, or the like. The input unit 350 may provide the detected input from the user to the notification processing unit 256 as context data.

FIG. 4 illustrates a flow chart of a method performed by the processor 250 in the electronic device 120 for generating a notification based on context data of the electronic device 120, according to one embodiment of the present disclosure. Initially, the processor 250 may receive a speech phrase from the user via the sound sensor 130 at 410. In one embodiment, the speech recognition unit 252 in the processor 250 may recognize the received speech phrase as a command to generate the notification. Alternatively, the speech recognition unit 252 may receive a first speech phrase as a command to activate the voice assistant unit 254 via the sound sensor 130 and activate the voice assistant unit 254 upon recognizing the first speech phrase as the activation command. The voice assistant unit 254 may then receive the second speech phrase from the sound sensor 130 and recognize the phrase as the command to generate the notification.
The command to generate the notification may then be provided to the notification processing unit 256.

In response to the command to generate the notification, the notification processing unit 256 may receive context data of the electronic device 120 from one or more sensors in the sensor unit 210 at 430. In one embodiment, the notification processing unit 256 may receive context data based on at least one of a user input, a movement of the electronic device, timing information, location information of the electronic device, an ambient light value, and an input sound. Additionally or alternatively, the processor 250 may also detect an event indicative of an unacknowledged incoming communication as context data. For example, the processor 250 may receive an incoming communication (e.g., a message, an email, etc.) via the communication unit 230 and store the incoming communication in the storage unit 240. Until the user 110 reviews the incoming communication, the processor 250 may determine that the incoming communication has not been acknowledged (e.g., reviewed) by the user 110 and thus detect the unacknowledged incoming communication as context data, which may be provided to the notification processing unit 256 in the processor 250. Additionally or alternatively, the sensor unit 210 may include a separate processing unit that may detect an event indicative of an unacknowledged incoming communication as context data.

At 440, the notification processing unit 256 may determine whether to generate the notification based on the context data and the command to generate the notification. In one embodiment, in response to the recognized command to generate the notification, the notification processing unit 256 may determine whether the notification is to be generated based on the context data.
In this case, the notification processing unit 256 may analyze one or more context data from the sensor unit 210 and/or the processor 250 or any combination thereof, such as a user input, an image of the user 110, an environmental condition (e.g., location information, timing information, an ambient light value), a movement of the electronic device 120, an event indicative of an unacknowledged incoming communication, and/or an input sound (e.g., a speech command). In the case of the image of the user 110, the notification processing unit 256 may apply any suitable facial recognition techniques to identify the face of the user 110 in one or more images that may be received from the image sensor 310 in the sensor unit 210. In the case of the input sound, the notification processing unit 256 may determine whether the input sound is spoken in a direction other than a direction toward the electronic device 120, which may also be used as context data as will be described in more detail with reference to FIGs. 6 and 7.

The various types of context data may be processed by the notification processing unit 256 to determine whether to generate the notification as will be described in more detail with reference to FIG. 5. In one embodiment, one or more types of context data may be given a higher or highest priority so that the notification may be generated based on detecting such types of context data despite detecting other types of context data. Additionally or alternatively, a context score may be determined based on the various types of context data, each of which may be weighted and combined. Once the notification processing unit 256 determines that the notification is to be generated, it may provide a control signal to the output unit 220 to generate the notification at 450.

Upon receiving the control signal, the output unit 220 may output the notification via the speaker 222, the vibrating unit 224, the display screen 226, and/or the LED unit 228.
For example, the speaker 222 in the output unit 220 may output an audible sound (e.g., an alarm sound, a ringtone, or the like). Additionally or alternatively, the vibrating unit 224 in the output unit 220 may vibrate, or visible light may be output via the display screen 226 or the LED unit 228.

According to one embodiment, the electronic device 120 may be configured to be in a silent mode in which the electronic device 120 may be configured to disable output of sound via the speaker 222. In this case, if the electronic device 120 determines that the notification is to be generated, it may deactivate the silent mode so that the notification may be output via the speaker 222. For example, if the electronic device 120 is in a vibrating mode in which vibration may be output via the vibrating unit 224 and output of sound via the speaker 222 is disabled, it may deactivate the vibrating mode to allow output of the notification via the speaker 222.

FIG. 5 illustrates a flowchart of a method performed by the notification processing unit 256 in the processor 250 for determining whether the notification is to be generated based on the context data, according to one embodiment of the present disclosure. For determining whether the notification is to be generated, the notification processing unit 256 may analyze and/or process context data from any one or more sensors or units in the sensor unit 210. In some embodiments, the notification processing unit 256 may assign a higher or highest priority to certain types of context data.

Initially, the notification processing unit 256 may determine at 510 whether a user input is detected in the context data received from the sensor unit 210. For example, the user input may indicate that the electronic device 120 is being used by or is accessible to the user 110.
In one embodiment, if context data is determined to include the user input (e.g., manual input) at 510, the notification processing unit 256 may determine that no notification is to be generated at 560. Alternatively or additionally, the notification processing unit 256 may determine whether the electronic device 120 is being operated in response to a user input received as context data. For example, the electronic device 120 may be displaying video on a display of the electronic device 120 or playing a song in response to an input or command from the user 110. In this case, the notification processing unit 256 may determine that no notification is to be generated at 560.

On the other hand, if it is determined that no user input has been received at 510, the notification processing unit 256 may determine whether a current location or a current time of the electronic device 120 is within a predetermined location or a predetermined time, respectively, at 520. In some embodiments, the electronic device 120 may receive and store one or more time periods and/or locations for which the notification is not to be generated from the user 110. Upon determining that the current location or the current time of the electronic device is within a predetermined location or a predetermined time, respectively, the notification processing unit 256 may determine that the notification is not to be generated at 560. Otherwise, the notification processing unit 256 may proceed to determine a context score for generating the notification based on other types of context data at 530.

In one embodiment, the notification processing unit 256 may receive the current time as context data from the sensor unit 210 and determine whether the current time is within a predetermined time period during which the notification is not to be generated, such as when the user 110 may be inactive (e.g., asleep, night time, etc.) or may not be able to access the electronic device 120 (e.g., during a meeting).
The predetermined time period during which the notification is not to be generated may be determined based on usage history of the electronic device 120 or scheduled tasks in a calendar application of the electronic device. For example, the notification processing unit 256 may access the calendar application and determine that the current time is within a time period during which a meeting is scheduled at 520 and thus proceed to determine that no notification is to be generated at 560.

In another embodiment, the notification processing unit 256 may receive the current location of the electronic device 120 as context data from the sensor unit 210 and determine whether the current location corresponds to a predetermined location for which the notification is not to be generated. For example, the current location of the electronic device 120 may be determined to correspond to a location where the use of the electronic device 120 may be restricted such as a library, a theater, or the like. In this case, the notification processing unit 256 may proceed to determine that the notification is not to be generated at 560. Otherwise, the notification processing unit 256 may proceed to determine a context score for generating the notification based on other types of context data at 530.

At 530, the notification processing unit 256 may determine a context score based on one or more types of context data. As used herein, the term "context score" may be a probability value indicating whether the electronic device 120 is in a context in which the notification is to be generated. In one embodiment, the notification processing unit 256 may calculate a context score based on context data received from the sensor unit 210 and/or the processor 250. For example, the context data may include one or more types of context data other than the user input, the current location, and the current time.
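The early-exit checks at 510 and 520 described above amount to a short guard that suppresses the notification before any context score is computed. A minimal sketch, assuming boolean flags derived from the sensor data (the parameter names are illustrative, not part of the disclosed embodiment):

```python
def early_exit_no_notification(user_input_detected,
                               in_quiet_time_period,
                               in_restricted_location):
    """Return True when any 510/520 gate applies, i.e. the notification
    is not to be generated (step 560) without computing a context score.
    Parameter names are illustrative assumptions."""
    return (user_input_detected         # 510: device is in use / accessible
            or in_quiet_time_period     # 520: e.g., asleep or in a meeting
            or in_restricted_location)  # 520: e.g., a library or a theater
```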
Alternatively, the context data may include all types of context data received from the sensor unit 210 and/or the processor 250.

In some embodiments, a context score may be determined based on the types of context data such as an ambient light value, an image of a user, an event indicative of an unacknowledged incoming communication, and/or a movement of the electronic device. Each of the types of context data may be represented with any suitable values, which may be weighted by an associated predetermined weight and combined to calculate the context score using any suitable weighting scheme. For example, a context score may be determined based on the context data, which may be weighted as shown in Table 1 below according to one embodiment of the present disclosure.

TABLE 1:

Context Data                           Context Value       Weight   Weighted Context Value
Ambient Light                          Intensity: 0.2      0.1      0.02
Image of User                          Image of User: 1    0.5      0.5
Unacknowledged Incoming Communication  Elapsed time: 0.4   0.3      0.12
Movement                               Elapsed time: 0.8   0.1      0.08

In the case of ambient light in Table 1 above, the ambient light may be represented with a numerical context value in a range between 0 and 1 that may be proportional to the intensity of the light, where the value 0 may indicate a lowest intensity level (e.g., complete darkness) and the value 1 may indicate a highest intensity. For example, a low ambient light value may indicate that the electronic device 120 is covered by or located within an object (e.g., a plurality of books, paper, clothing, a pocket, etc.) and thus the user 110 may not be able to find the electronic device 120. In such a situation, a notification may be generated to alert the user 110 of the electronic device 120. In other cases, a low ambient light value may be a result of the time of the day such as evening time and may not be clearly indicative of whether the notification should be generated.
Thus, in the illustrated embodiment in Table 1, a relatively low weight of 0.1 may be assigned to the ambient light having an intensity of 0.2 such that the notification processing unit 256 may determine a weighted context value of 0.02 for the ambient light value.

For the case of the image of the user 110, the image may be represented with a numerical context value of either 0 or 1 depending on whether the user 110 is recognized to be in the image. For example, when the user 110 is identified in the image received from the image sensor 310, the value of 1 may be assigned. Otherwise, the value of 0 may be assigned. If the user 110 is detected in the image received via the image sensor 310, it is highly likely that the user 110 can see the electronic device 120. Accordingly, a relatively high weight of 0.5 may be assigned to the image of the user 110 having a value of 1 in Table 1 so that the notification processing unit 256 may determine a weighted context value of 0.5 for the image.

In the case of the unacknowledged incoming communication in the electronic device 120, an event indicative of such incoming communication data may be represented with a numerical context value in a range between 0 and 1, which may be inversely proportional to an elapsed time since the receipt of the unacknowledged incoming communication. For example, upon receiving an event indicative of the unacknowledged incoming communication as context data, the notification processing unit 256 may determine how much time has elapsed since the unacknowledged incoming communication was received via the communication unit 230. When the event indicative of the unacknowledged incoming communication is received immediately upon receipt via the communication unit 230, the context value for the event may correspond to 1.
On the other hand, when the elapsed time since the receipt of an unacknowledged incoming communication is longer than a predetermined threshold time period (e.g., 10 hours, a day, etc.), the context value for the event indicative of the unacknowledged incoming communication may correspond to 0. For an elapsed time between these cases, any suitable intermediate value may be assigned in inverse proportion to the elapsed time. In the illustrated embodiment, a value of 0.4 may be assigned for an elapsed time of six hours and a weight of 0.3 may be assigned to such an event such that the notification processing unit 256 may determine a weighted context value of 0.12 for the event indicative of the unacknowledged incoming communication.

For the case of the movement of the electronic device 120, movement data indicative of a movement of the electronic device 120 may be represented with a numerical context value in a range between 0 and 1, which may be inversely proportional to the elapsed time since the last or most recent movement of the electronic device 120. For example, if the current movement data received from the movement sensor 320 indicates movement of the electronic device 120, the elapsed time may be zero and the context value for the movement of the electronic device 120 may correspond to 1. On the other hand, if the current movement data indicates no movement of the electronic device 120, the notification processing unit 256 may determine how much time has elapsed since the last or most recent movement was detected based on a time that the last or most recent movement was detected. For example, when movement data indicating a movement of the electronic device 120 is received from the movement sensor 320, the processor 250 may store the time at which the movement of the electronic device 120 is detected in the storage unit 240.
In this case, the notification processing unit 256 may access the time at which the last movement of the electronic device 120 was detected from the storage unit 240, and determine how much time has elapsed since the last movement was detected. If the elapsed time since the last or most recent movement of the electronic device 120 is longer than a predetermined threshold time period (e.g., 10 hours, a day, etc.), the context value for the movement of the electronic device 120 may be determined to be 0. For an elapsed time between zero and the predetermined threshold time period, any suitable intermediate value may be assigned in inverse proportion to the elapsed time. As shown in the illustrated embodiment, a value of 0.8 may be assigned for an elapsed time of two hours and a weight of 0.1 may be assigned to such movement data. In this case, the notification processing unit 256 may determine a weighted context value of 0.08 for the movement of the electronic device 120.

Upon generating a weighted context value for each of the types of context data in Table 1, the notification processing unit 256 may calculate a context score of 0.72 by adding the weighted context values. For example, a context score S may be determined according to the equation S = Σ_{i=1}^{N} w_i·v_i, where w_i and v_i are a weight and a context value, respectively. Alternatively, a context score S may be determined according to any suitable function for determining the context score such as S = f(v_1, ..., v_N), where v_i is a context value. Although the notification processing unit 256 determines the context score based on the types of context data shown in Table 1, it may also determine the context score based on other types of context data such as the user input, the current location, the current time, a direction from which the input sound is spoken (e.g., a direction of departure), and/or the like.
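The elapsed-time mappings and the weighted sum S = Σ w_i·v_i described above can be sketched as follows. The function names and the 10-hour threshold are illustrative assumptions; the context values and weights mirror Table 1.

```python
def elapsed_time_context_value(elapsed_hours, threshold_hours=10.0):
    """Map an elapsed time to a context value in [0, 1]: 1 when the event
    just occurred, 0 at or beyond the threshold, and decreasing in inverse
    proportion in between (the 10-hour threshold is illustrative)."""
    if elapsed_hours >= threshold_hours:
        return 0.0
    return 1.0 - max(elapsed_hours, 0.0) / threshold_hours

def context_score(values_and_weights):
    """Compute S as the sum of w_i * v_i over (context value, weight) pairs."""
    return sum(v * w for v, w in values_and_weights)

# Context values and weights from Table 1: six hours since the unacknowledged
# incoming communication maps to 0.4, and two hours since the last movement
# maps to 0.8.
table1 = [
    (0.2, 0.1),                              # ambient light
    (1.0, 0.5),                              # image of user
    (elapsed_time_context_value(6.0), 0.3),  # unacknowledged communication
    (elapsed_time_context_value(2.0), 0.1),  # movement
]
score = context_score(table1)  # 0.02 + 0.5 + 0.12 + 0.08 = 0.72
```

With a predetermined threshold score of 0.5, this 0.72 score would lead to the determination that no notification is to be generated.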
In such a case, a high weight value may be assigned to each of such types of context data such that the context score may be determined substantially based on one or more of such types of context data.

With reference to FIG. 5, upon determining the context score, the notification processing unit 256 may compare the context score with a predetermined threshold score at 540. If the context score is less than or equal to the predetermined threshold score, the notification processing unit 256 may determine that the notification is to be generated at 550. On the other hand, if the context score is determined to be greater than the predetermined threshold, the notification processing unit 256 may determine that the notification is not to be generated at 560. With reference to Table 1 above, given a predetermined threshold score of 0.5, the notification processing unit 256 may determine that the notification is not to be generated at 560 since the calculated context score of 0.72 is greater than the predetermined threshold score of 0.5. On the other hand, if a calculated context score is less than or equal to the threshold score, the notification processing unit 256 may determine that the notification is to be generated at 550.

In some embodiments, the notification processing unit 256 may determine whether to generate the notification additionally based on whether an input sound is spoken in a direction toward the electronic device 120, which may correspond to a direction toward the sound sensor 130. For example, the input sound may be a speech phrase spoken by the user 110 (e.g., a speech phrase indicative of a command to generate the notification), which is received by the electronic device 120 via the sound sensor 130.
Upon receiving the input sound, the notification processing unit 256 may determine whether the input sound is spoken in a direction other than a direction toward the electronic device 120.

According to one embodiment, the notification processing unit 256 may determine a departure angle of the input sound from the user 110 as a "direction of departure" (DOD) of the input sound. In this case, the input sound may be determined to be spoken in a direction toward the electronic device 120 if the direction of departure of the input sound is in a direction along a line (e.g., a reference line or direction) between a sound source (e.g., a user) and the electronic device 120. Otherwise, the input sound may be determined to be spoken in a direction other than a direction toward the electronic device 120. Further, the notification processing unit 256 may also determine that a direction of departure of the input sound is toward the electronic device 120 when the direction is determined to be within a predetermined angle or range from the line between the electronic device 120 and the user 110.

FIG. 6 illustrates an input sound spoken by the user 110 in a direction toward the electronic device 120 according to one embodiment of the present disclosure. In the illustrated embodiment, the user 110 may speak a speech phrase as the input sound in a direction 610, which may deviate from a reference direction 620 toward the electronic device 120. Upon receiving the speech phrase as an input sound via the sound sensor 130, the notification processing unit 256 may determine a direction of departure 610 of the speech phrase, the reference direction 620 between the user 110 and the electronic device 120, and an angle θ1 between the directions 610 and 620.
Given a predetermined angle β within which an input sound may be considered to be spoken toward the electronic device 120, the notification processing unit 256 may determine that the angle θ1 is less than the predetermined angle β and thus determine that the direction of departure 610 of the speech phrase is toward the electronic device 120.

FIG. 7 illustrates an input sound spoken by the user 110 in a direction other than a direction toward the electronic device 120 according to one embodiment of the present disclosure. As shown in the illustrated embodiment, the user 110 may speak a speech phrase as the input sound in a direction 710, which may deviate from a reference direction 720 toward the electronic device 120. Upon receiving the speech phrase as an input sound via the sound sensor 130, the notification processing unit 256 may determine a direction of departure 710 of the speech phrase, the reference direction 720 between the user 110 and the electronic device 120, and an angle θ2 between the directions 710 and 720. Given the predetermined angle β within which an input sound may be considered to be spoken toward the electronic device 120, the notification processing unit 256 may determine that the angle θ2 is greater than the predetermined angle β and thus determine that the direction of departure 710 of the speech phrase is in a direction other than a direction toward the electronic device 120.

FIG. 8 illustrates recognizing a speech phrase as a command to generate the notification based on reception times of a first speech phrase 810 and a second speech phrase 820, according to one embodiment of the present disclosure. In the illustrated embodiment, the speech phrase may include the first speech phrase 810 as a command to activate the voice assistant unit 254 and the second speech phrase 820 as a command to generate the notification. Initially, the user 110 may speak the first speech phrase (e.g., "Hey Snapdragon") at time T1.
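The angular test of FIGS. 6 and 7 described above can be sketched as follows; representing the direction of departure and the reference line as 2-D vectors, and β in degrees, are illustrative assumptions:

```python
import math

def is_spoken_toward_device(departure_dir, reference_dir, beta_degrees):
    """Return True when the angle theta between the direction of departure
    and the user-to-device reference direction is within the predetermined
    angle beta (as with theta1 in FIG. 6), False otherwise (theta2, FIG. 7)."""
    dot = sum(a * b for a, b in zip(departure_dir, reference_dir))
    norms = math.hypot(*departure_dir) * math.hypot(*reference_dir)
    theta = math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
    return theta <= beta_degrees
```

For example, with β = 15°, a departure direction deviating about 6° from the reference line is treated as spoken toward the device, while a 45° deviation is not.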
The electronic device 120 may receive the first speech phrase via the sound sensor 130 and the speech recognition unit 252 may recognize the first speech phrase as a command to activate the voice assistant unit 254 in the processor 250 using any suitable speech recognition function. Upon recognizing the first speech phrase, the speech recognition unit 252 may activate the voice assistant unit 254.

At time T2, the user 110 may speak the second speech phrase (e.g., "Where are you"). The voice assistant unit 254, which has been activated, may receive the second speech phrase via the sound sensor 130 and recognize the second speech phrase as a command to generate the notification. Upon recognizing the second speech phrase as the command to generate the notification, the voice assistant unit 254 may determine whether the first speech phrase 810 and the second speech phrase 820 are received within a predetermined time period (e.g., 5 seconds) based on the reception times of the first and second speech phrases 810 and 820. Once the first speech phrase 810 and the second speech phrase 820 are determined to have been received within the predetermined time period, the voice assistant unit 254 may activate the notification processing unit 256 and provide the recognized command to the notification processing unit 256, which may determine whether to generate the notification. In one embodiment, the voice assistant unit 254 may be deactivated once it provides the recognized command to the notification processing unit 256.

According to some embodiments, the speech recognition unit 252 or the voice assistant unit 254 may recognize both of the first and second speech phrases 810 and 820. In one embodiment, the first and second speech phrases 810 and 820 may be received in any order or sequence and the speech recognition unit 252 and/or the voice assistant unit 254 may be configured to recognize the first and second speech phrases 810 and 820 in such order.
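The reception-time check described above, under which the phrase pair is treated as a command only when the second phrase follows the first within the predetermined window, might be sketched as (timestamps in seconds; the 5-second window is the illustrative value from the text):

```python
def received_within_window(t_first, t_second, window_seconds=5.0):
    """Return True when the second speech phrase is received no later
    than window_seconds after the first speech phrase."""
    return 0.0 <= t_second - t_first <= window_seconds
```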
For example, if the speech recognition unit 252 fails to receive or recognize the first speech phrase 810 but receives and recognizes the second speech phrase 820, the speech recognition unit 252 may then receive and recognize the first speech phrase 810 as a command to generate the notification.

FIG. 9 illustrates the electronic device 120 configured to transmit a notification including location information of the electronic device to an external device 930 of the user 110, according to one embodiment of the present disclosure. As used herein, the term "external device" may be any electronic device that is physically separate from the electronic device 120 and capable of communicating wirelessly with the electronic device 120. As shown, the user 110 may be at a location 910 (e.g., an office) and the electronic device 120 may be at a location 920 (e.g., home) such that the electronic device 120 may not be able to receive or recognize a speech phrase spoken by the user 110.

In the illustrated embodiment, the user 110 may input a command (e.g., a speech phrase to generate a notification) to the external device 930 (e.g., a smartwatch, smart glasses, etc.) for locating the electronic device 120. In response, the external device 930 may wirelessly transmit a request to generate the notification, which may include the input speech phrase, to the electronic device 120. Upon receiving the request for the notification via the communication unit 230, the processor 250 in the electronic device 120 may receive location information from the location sensor 340 and wirelessly transmit the location information to the external device 930 via the communication unit 230.
Alternatively or additionally, the processor 250 may receive any other types of context data indicative of a location of the electronic device (e.g., an image captured by the image sensor 310) via the sensor unit 210 or from the storage unit 240, and transmit such data as location information of the electronic device 120 to the external device 930. Additionally or alternatively, the electronic device 120 may output the notification via the output unit 220. In response to receiving the location information of the electronic device 120, the external device 930 may output the location information for the user 110.

Additionally or alternatively, the external device 930 may receive a speech phrase from the user 110 for locating the electronic device 120 and recognize the speech phrase as a command to locate the electronic device 120. In response to the recognized command, the external device 930 may transmit a request to generate a notification to the electronic device 120. Upon receiving the request, the electronic device 120 may transmit location information of the electronic device 120 to the external device 930. Additionally, the electronic device 120 may transmit any other types of context data of the electronic device 120 to the external device 930. In this case, the external device 930 may determine whether the notification is to be generated by the electronic device 120 based on the context data received from the electronic device 120. Upon determining that the notification is to be generated, the external device 930 may wirelessly transmit a command to generate the notification to the electronic device 120, which may generate and output the notification in response. In some embodiments, the external device 930 may be configured to detect context data of the external device 930 via one or more sensors.
Based on the detected context data of the external device 930, the external device 930 may select one or more output units for outputting the location information of the electronic device 120 and output the location information via the selected output units for the user 110.

FIG. 10 illustrates a flowchart of a method performed by the processor 250 for locking or unlocking the electronic device 120, according to one embodiment of the present disclosure. At 1010, the notification processing unit 256 may determine that the notification is to be generated based on context data. For example, the notification processing unit 256 in the processor 250 may determine that the notification is to be generated based on one or more types of context data, which may indicate that the electronic device 120 is likely to be inaccessible to the user 110 (e.g., lost or misplaced). In response to determining that the notification is to be generated, the processor 250 may lock the electronic device 120 at 1020 to prevent unauthorized access to the electronic device 120 by a user other than the user 110. At 1030, the processor 250 may receive a user input adapted to verify the user 110 such as a sound input, a predetermined pattern or image, a personal identification number, a password, a fingerprint, etc. via the input unit 350, the sound sensor 130, a fingerprint sensor, and/or the image sensor 310. In response to receiving and verifying the user input, the processor 250 may unlock the electronic device 120 at 1040. In one embodiment, when the electronic device 120 has been locked in response to determining that the notification is to be generated, the processor 250 may unlock the electronic device based on a type of user input with a high level of verification or authentication. For example, the processor 250 may not unlock the electronic device 120 in response to a passcode input and may require a fingerprint or a facial image as the user input to unlock the electronic device 120.

FIG. 11 illustrates a block diagram of an electronic device 1100 in which the methods and apparatus of the present disclosure for generating a notification based on the context data and the command to generate the notification may be implemented according to some embodiments. The electronic device 1100 may be a cellular phone, a smartphone, a wearable computer, a smart watch, smart glasses, a tablet personal computer, a terminal, a handset, a personal digital assistant (PDA), a cordless phone, a tablet, and so on. The wireless communication system may be a CDMA system, a GSM system, a W-CDMA system, an LTE system, an LTE Advanced system, and so on.

The electronic device 1100 may be capable of providing bidirectional communication via a receive path and a transmit path. On the receive path, signals transmitted by base stations may be received by an antenna 1112 and may be provided to a receiver (RCVR) 1114. The receiver 1114 may condition and digitize the received signal, and provide the conditioned and digitized digital signal to a digital section for further processing. On the transmit path, a transmitter (TMTR) 1116 may receive data to be transmitted from a digital section 1120, process and condition the data, and generate a modulated signal, which is transmitted via the antenna 1112 to the base stations. The receiver 1114 and the transmitter 1116 may be part of a transceiver that may support CDMA, GSM, W-CDMA, LTE, LTE Advanced, and so on.

The digital section 1120 may include various processing, interface, and memory units such as, for example, a modem processor 1122, a reduced instruction set computer/digital signal processor (RISC/DSP) 1124, a controller/processor 1126, an internal memory 1128, a generalized audio/video encoder 1132, a generalized audio decoder 1134, a graphics/display processor 1136, and an external bus interface (EBI) 1138.
The modem processor 1122 may perform processing for data transmission and reception, e.g., encoding, modulation, demodulation, and decoding. The RISC/DSP 1124 may perform general and specialized processing for the electronic device 1100. The controller/processor 1126 may perform the operation of various processing and interface units within the digital section 1120. The internal memory 1128 may store data and/or instructions for various units within the digital section 1120.

The generalized audio/video encoder 1132 may perform encoding for input signals from an audio/video source 1142, a microphone 1144, an image sensor 1146, etc. The generalized audio decoder 1134 may perform decoding for coded audio data and may provide output signals to a speaker/headset 1148. The graphics/display processor 1136 may perform processing for graphics, videos, images, and texts, which may be presented to a display unit 1150. The EBI 1138 may facilitate transfer of data between the digital section 1120 and a main memory 1152.

The digital section 1120 may be implemented with one or more processors, DSPs, microprocessors, RISCs, etc. The digital section 1120 may also be fabricated on one or more application specific integrated circuits (ASICs) and/or some other type of integrated circuits (ICs).

In general, any device described herein may represent various types of devices, such as a wireless phone, a cellular phone, a laptop computer, a wireless multimedia device, a wireless communication personal computer (PC) card, a PDA, an external or internal modem, a device that communicates through a wireless channel, etc. A device may have various names, such as access terminal (AT), access unit, subscriber unit, mobile station, mobile device, mobile unit, mobile phone, mobile, remote station, remote terminal, remote unit, user device, user equipment, handheld device, etc.
Any device described herein may have a memory for storing instructions and data, as well as hardware, software, firmware, or combinations thereof.

The techniques described herein may be implemented by various means. For example, these techniques may be implemented in hardware, firmware, software, or a combination thereof. Those of ordinary skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

For a hardware implementation, the processing units used to perform the techniques may be implemented within one or more ASICs, DSPs, digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, a computer, or a combination thereof.

Thus, the various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof
designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. For example, a computer-readable storage medium may be a non-transitory computer-readable storage device that includes instructions that are executable by a processor. Thus, a computer-readable storage medium may not be a signal.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure.
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Although exemplary implementations may refer to utilizing aspects of the presently disclosed subject matter in the context of one or more stand-alone computer systems, the subject matter is not so limited, but rather may be implemented in connection with any computing environment, such as a network or distributed computing environment. Still further, aspects of the presently disclosed subject matter may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Such devices may include PCs, network servers, and handheld devices.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above.
Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Hereinafter, some aspects of the present disclosure will be additionally stated.

(Example 1) According to an aspect of the present disclosure, there is provided a method for generating a notification by an electronic device, comprising: receiving a speech phrase; recognizing, by a processor, the speech phrase as a command to generate the notification; detecting, by at least one sensor, context data of the electronic device; and generating, by the processor, the notification based on the context data and the command.

(Example 2) In the method of Example 1, detecting the context data is at least based on one among a user input, movement of the electronic device, timing information, location information of the electronic device, ambient light value, and an input sound.

(Example 3) In the method of Example 2, generating the notification comprises determining whether the notification is to be generated at least based on the context data.

(Example 4) In the method of Example 3, determining whether the notification is to be generated comprises deactivating a silent mode of the electronic device upon determining that the notification is to be generated.

(Example 5) In the method of Example 3, determining whether the notification is to be generated comprises determining that the notification is not to be generated upon detecting that the context data includes the user input.

(Example 6) In the method of Example 4, determining whether the notification is to be generated comprises at least one of: determining whether the timing information is within a predetermined time period during which the notification is not to be generated; or determining whether the location information of the electronic device corresponds to a predetermined location where the notification is not to be generated.

(Example 7) In the method of Example 1, receiving a speech phrase comprises receiving,
by a sound sensor, a first speech phrase and a second speech phrase, and recognizing the speech phrase as a command to generate the notification comprises recognizing the speech phrase as the command to generate the notification in response to determining that the first speech phrase and the second speech phrase are received within a predetermined time period.

(Example 8) In the method of Example 7, recognizing the speech phrase as a command to generate the notification comprises recognizing the first speech phrase as a command to activate a voice assistant unit in the electronic device and the second speech phrase as the command to generate the notification.

(Example 9) In the method of Example 1, receiving a speech phrase comprises receiving, by a communication unit, the speech phrase from an external device, and generating the notification comprises transmitting, by the communication unit, location information of the electronic device to the external device.

(Example 10) The method of Example 4 further includes locking the electronic device to prevent unauthorized access to the electronic device in response to determining that the notification is to be generated.

(Example 11) The method of Example 10 further includes unlocking the electronic device in response to receiving a user input.

(Example 12) In the method of Example 1, generating the notification comprises generating, by an output unit, at least one of audible sound, vibration, or visible light indicative of the notification.

(Example 13) According to an aspect of the present disclosure, there is provided an electronic device for generating a notification, comprising: a sound sensor configured to receive a speech phrase; a speech recognition unit configured to recognize the speech phrase as a command to generate the notification; a sensor unit configured to detect context data of the electronic device; and a processor configured to generate the notification based on the context data and the command.

(Example 14)
The electronic device of Example 13 further includes an output unit configured to generate at least one of audible sound, vibration, or visible light indicative of the notification.

(Example 15) In the electronic device of Example 13, the sensor unit is further configured to detect the context data at least based on one among a user input, movement of the electronic device, timing information, location information of the electronic device, ambient light value, and an input sound.

(Example 16) In the electronic device of Example 15, the processor further comprises a notification processing unit configured to determine whether the notification is to be generated at least based on the context data.

(Example 17) In the electronic device of Example 16, the notification processing unit is further configured to determine whether the notification is to be generated based on at least one of: determining that the notification is not to be generated upon detecting that the context data includes the user input; determining whether the timing information is within a predetermined time period during which the notification is not to be generated; or determining whether the location information of the electronic device corresponds to a predetermined location where the notification is not to be generated.

(Example 18) In the electronic device of Example 13, the sound sensor is further configured to receive a first speech phrase and a second speech phrase as the speech phrase, and the speech recognition unit is further configured to recognize the speech phrase as a command to generate the notification in response to determining that the first speech phrase and the second speech phrase are received within a predetermined time period.

(Example 19) The electronic device of Example 18 further includes a voice assistant unit, where the speech recognition unit is further configured to recognize the first speech phrase as a command to activate the voice assistant unit and the voice assistant
unit is configured to recognize the second speech phrase as the command to generate the notification.

(Example 20) The electronic device of Example 13 further includes a communication unit configured to receive the speech phrase from an external device and transmit location information of the electronic device to the external device.

(Example 21) In the electronic device of Example 16, the electronic device is further configured to perform at least one of: locking the electronic device to prevent unauthorized access to the electronic device in response to determining that the notification is to be generated; or unlocking the electronic device in response to receiving a user input.

(Example 22) A non-transitory computer-readable storage medium comprising instructions causing at least one processor of an electronic device to perform operations of: receiving a speech phrase; recognizing the speech phrase as a command to generate the notification; detecting, via at least one sensor, context data of the electronic device; and generating the notification based on the context data and the command.

(Example 23) In the non-transitory computer-readable storage medium of Example 22, detecting the context data is at least based on one among a user input, movement of the electronic device, timing information, location information of the electronic device, ambient light value and an input sound.

(Example 24) In the non-transitory computer-readable storage medium of Example 23, generating the notification comprises determining whether the notification is to be generated at least based on the context data.

(Example 25) In the non-transitory computer-readable storage medium of Example 24, determining whether the notification is to be generated comprises at least one of: determining that the notification is not to be generated upon detecting that the context data includes the user input; determining whether the timing information is within a predetermined time period during which the
notification is not to be generated; or determining whether the location information of the electronic device corresponds to a predetermined location where the notification is not to be generated.

(Example 26) In the non-transitory computer-readable storage medium of Example 22, receiving, via a sound sensor, a speech phrase comprises receiving a first speech phrase and a second speech phrase, and recognizing the speech phrase as a command to generate the notification comprises recognizing the speech phrase as the command to generate the notification in response to determining that the first speech phrase and the second speech phrase are received within a predetermined time period.

(Example 27) In the non-transitory computer-readable storage medium of Example 26, recognizing the speech phrase as a command to generate the notification comprises recognizing the first speech phrase as a command to activate a voice assistant unit in the electronic device and the second speech phrase as the command to generate the notification.

(Example 28) In the non-transitory computer-readable storage medium of Example 22, receiving a speech phrase comprises receiving, via a communication unit, the speech phrase from an external device, and generating the notification comprises transmitting, via the communication unit, location information of the electronic device to the external device.

(Example 29) The non-transitory computer-readable storage medium of Example 24 further includes instructions causing the at least one processor of the electronic device to perform at least one operation of: locking the electronic device to prevent unauthorized access to the electronic device in response to determining that the notification is to be generated; or unlocking the electronic device in response to receiving a user input.

(Example 30) In the non-transitory computer-readable storage medium of Example 22, generating the notification comprises generating, via an output unit, at least one of audible
sound, vibration, or visible light indicative of the notification.
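The decision flow sketched in Examples 1-6 can be illustrated in code. The following Python sketch is illustrative only: the field names, quiet-hour window, and suppressed-location set are hypothetical placeholders, since the disclosure leaves these policies device-specific.

```python
from dataclasses import dataclass

@dataclass
class ContextData:
    # Hypothetical fields mirroring the context sources listed in Example 2.
    user_input_detected: bool   # recent touch/key activity on the device
    hour: int                   # timing information, 0-23
    location: str               # coarse location label
    silent_mode: bool           # current ringer state

# Hypothetical policy values; the disclosure leaves these device-specific.
QUIET_HOURS = range(22, 24)          # notifications suppressed 22:00-23:59
SUPPRESSED_LOCATIONS = {"library"}   # predetermined "no notification" places

def should_generate_notification(ctx: ContextData) -> bool:
    """Decision logic sketched from Examples 3-6."""
    if ctx.user_input_detected:               # Example 5: user already has the device
        return False
    if ctx.hour in QUIET_HOURS:               # Example 6: predetermined time period
        return False
    if ctx.location in SUPPRESSED_LOCATIONS:  # Example 6: predetermined location
        return False
    return True

def generate_notification(ctx: ContextData) -> str:
    if not should_generate_notification(ctx):
        return "suppressed"
    if ctx.silent_mode:                       # Example 4: deactivate silent mode first
        ctx.silent_mode = False
    return "alarm"                            # sound / vibration / light (Example 12)
```

In a real device the same checks would gate the output unit rather than return a string; the string return values here simply make the two outcomes easy to observe.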
A charge pump for generating an input voltage for an operational amplifier includes a storage capacitor for storing a charge pump voltage and a flying capacitor configured to be charged during a first phase of operation and discharged during a second phase of operation. As the flying capacitor is discharged, it charges the storage capacitor. A current source is coupled to the flying capacitor and a switching means is provided for switching current from the current source through the flying capacitor in a first direction during the first phase and in a second direction opposite to the first direction during the second phase.
Claims

1. A charge pump for generating a bootstrap voltage for an operational amplifier, the charge pump comprising: a storage capacitor (C2) for storing a charge pump voltage; and a flying capacitor (C1) configured to be charged during a first phase of operation and discharged during a second phase of operation so as to charge the storage capacitor (C2), wherein a current source (VCCS, MP1) is coupled to the flying capacitor (C1) and a switching means (S1, S2, S2a) is provided for switching current from the current source (VCCS) through the flying capacitor (C1) in a first direction during the first phase and in a second direction opposite to the first direction during the second phase.

2. The charge pump according to claim 1, wherein the current source is a variable current source.

3. The charge pump according to claim 2, further comprising a control loop including an error amplifier adapted to compare an output voltage of the charge pump (VCp) with a reference voltage (Vref), wherein the error amplifier generates a control signal coupled to control the variable current source based on the difference between the output voltage of the charge pump (VCp) and the reference voltage (Vref).

4. The charge pump according to claim 3, wherein the time constant of the control loop is substantially greater than a period of the switching sequence of the switching means (S1, S2, S2a).

5. The charge pump according to any one of the previous claims, further comprising a controller for controlling the switching means.

6. The charge pump according to claim 5, wherein the switching means comprises a first switching path for switching current through the flying capacitor (C1) in the first direction and a second switching path for switching current through the flying capacitor (C1) in the second direction, wherein the second switching path is controlled by a single control port in the controller.

7.
A method of providing an input voltage to an operational amplifier, the method comprising: providing a storage capacitor for storing the input voltage; charging a flying capacitor during a first phase of operation; discharging the flying capacitor during a second phase of operation; and charging the storage capacitor during the second phase of operation using current produced from discharging the flying capacitor, wherein the method further comprises switching current from a current source through the flying capacitor in a first direction during the first phase and in a second direction opposite to the first direction during the second phase.
Charge pump for generating an input voltage for an operational amplifier

A truly rail-to-rail input operational amplifier with a PMOS or PNP input stage requires a bootstrap or charge pump voltage above the supply voltage, which is supplied by a charge pump. Any noise and ripple of the charge pump voltage, especially at high frequency, leaks to the op-amp output due to the mismatch of the input devices, i.e. parasitic capacitors etc.

Figures 1A and 1B are simplified schematics of a conventional charge pump. The negative terminal of a capacitor C1 is switched between a positive supply voltage rail VDD and ground, and the positive terminal is switched between the positive supply voltage and a charge pump voltage rail. A storage capacitor C2 is also connected to the charge pump voltage rail and the positive supply voltage. As shown in Figure 1A, the capacitor C1 is first connected between the supply voltage and ground and charged to the supply voltage. In Figure 1B, the positive terminal of the capacitor C1 is then disconnected from the supply voltage rail and reconnected to the capacitor C2, and the negative terminal of the capacitor C1 is disconnected from ground and connected to the supply voltage rail VDD. This results in twice the supply voltage across the capacitor C1, which can then be used to charge the storage capacitor C2 to a voltage equal to 2VDD. For this reason, such a known charge pump is often called a voltage doubler.

The output voltage of the conventional charge pump of Figure 1 is shown in Figure 2. It can be seen that the output voltage is of a sawtooth form. This sawtooth voltage ripple contains high frequency harmonics of the running frequency with relatively large amplitudes, which produce unwanted noise at the output of the charge pump. In addition to the output voltage ripple, the conventional charge pump creates significant supply noise.
Current consumed by the charge pump circuit from the power supply (current Iq) consists of large amplitude current pulses when the circuit switches from the first phase to the second phase. The value of these current pulses is limited only by the switch resistance. Current pulses create supply voltage ripples due to the bus resistance and wirebond inductance, which increases the high-frequency noise of the operational amplifier.

It is an object of the present invention to provide a charge pump voltage source, for use with rail-to-rail operational amplifier tail current sources, that has a low ripple.

The present invention provides a charge pump for generating a bootstrap voltage, in particular a bootstrap voltage for the tail current of an input stage of an operational amplifier. The charge pump comprises a storage capacitor for storing a charge pump voltage and a flying capacitor configured to be charged during a first phase of operation and discharged during a second phase of operation so as to charge the storage capacitor. A current source is coupled to the flying capacitor and a switching means is provided for switching current from the current source through the flying capacitor in a first direction during the first phase and in a second direction opposite to the first direction during the second phase.

Switching current from a current source to charge the flying capacitor in the first phase of operation and to discharge the flying capacitor in the second phase of operation determines the current flowing to and from the flying capacitor. So, the present invention provides a charge pump voltage that is smoother (e.g. more symmetric and more triangular) than the sawtooth output voltage produced by the conventional voltage doubler, so that there is less high-frequency content and consequently a reduced high-frequency noise of the operational amplifier in which the charge pump is used.
Furthermore, the output voltage level can be controlled by configuring the current source (which can be a variable current source) to provide the right level of current for charging the flying capacitor to the required voltage. Therefore the charge pump output voltage can be tailored to any voltage (limited only by twice the input voltage). For example, if an output voltage of twice the supply voltage (as provided by a conventional voltage-doubling charge pump) is too high for a particular application, the voltage can be set to the required level by control of the current source.

Providing a current source for charging the flying capacitor also results in the charge pump drawing a current without high amplitude current pulses, which means that less noise is generated in the supply bus.

Preferably, the charge pump according to the present invention includes a control loop with an error amplifier adapted to compare an output voltage of the charge pump with a reference voltage. The error amplifier generates a control signal coupled to control the variable current source based on the difference between the output voltage of the charge pump and the reference voltage. This in turn defines the amount to which the flying capacitor is charged, which defines the output voltage level. Therefore, by selection of the reference current source for providing the appropriate reference voltage, and the capacitance of the flying capacitor, the required output voltage of the charge pump can be set.

The output voltage can be set in a feedback operation. For example, the charge pump output voltage can be compared with the reference voltage. If the output voltage deviates from the voltage level defined by the reference voltage, the current supplied by the current source is adjusted. This way, the value of the current becomes equal to two times the load current of the charge pump, and the output voltage becomes equal to the voltage level defined by the reference voltage.
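The feedback operation described above can be illustrated with a toy numerical model. This is a sketch under simplifying assumptions (one lumped charge-balance update per switching cycle, an illustrative proportional-plus-derivative error amplifier), not the disclosed analogue circuit; the gain values are arbitrary.

```python
def settle_charge_pump(i_load, v_ref, gain=0.5, damping=0.5, steps=4000):
    """Toy discrete-time model of the control loop in the described charge pump.

    Per switching cycle the flying capacitor forwards roughly half of the
    source current to the output while the load drains it continuously, so
    the output changes by (0.5 * i_source - i_load) in arbitrary units.
    The error amplifier nudges the source current toward the point where the
    output equals v_ref; a small derivative term damps the loop.
    """
    v_out, i_source = 0.0, 0.0
    prev_err = v_ref
    for _ in range(steps):
        v_out += 0.5 * i_source - i_load       # charge balance at the output
        err = v_ref - v_out
        i_source += gain * err + damping * (err - prev_err)
        prev_err = err
        i_source = max(i_source, 0.0)          # a real source cannot go negative
    return i_source, v_out
```

In steady state the model settles exactly as the text states: the source current equals twice the load current and the output sits at the voltage level defined by the reference.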
In particular, the reference voltage can be set to be equal to the output voltage. If the time constant of the control loop is substantially greater than a period of the switching sequence of the switching means, the current drawn from the current source is basically constant. Charging and discharging the flying capacitor using a constant current means that the current drawn by the charge pump is constant, which reduces voltage ripples in the power supply.

Preferably, the charge pump comprises a controller for controlling switching of the switching means. The switching means preferably comprises a first switching path for switching current through the flying capacitor in the first direction and a second switching path for switching current through the flying capacitor in the second direction. The second switching path can be controlled by a single control port in the controller, which reduces the complexity of the charge pump circuit. The controller provides a feedback mechanism to the switching arrangement, which means that, when the storage capacitor has been charged to the required charge pump voltage by the flying capacitor, the current source can be immediately switched to start charging the flying capacitor again.

The present invention also provides a method of providing a bootstrap voltage, in particular a method of providing a bootstrap voltage for a tail current source of an operational amplifier. The method comprises charging a flying capacitor during a first phase of operation, decoupling and discharging the flying capacitor during a second phase of operation, and charging a storage capacitor during the second phase of operation using current produced from discharging the flying capacitor. The method further comprises switching current from a current source through the flying capacitor in a first direction during the first phase and in a second direction opposite to the first direction during the second phase.
Using a switched current source to charge and discharge the flying capacitor reduces unwanted frequency components in the charge pump output voltage, as well as smoothing out the current drawn by the charge pump. Furthermore, the level of the charge pump output voltage can be chosen as required by setting the amount of current that is used to charge the flying capacitor. This means that the output voltage can be variably adjusted below the process-defined supply voltage limit.

Further advantages and characteristics of the invention ensue from the description below of a preferred embodiment, and from the accompanying drawings, in which:

Figure 1A is a simplified schematic diagram of a conventional charge pump in a first phase of operation;

Figure 1B is a simplified schematic diagram of a conventional charge pump in a second phase of operation;

Figure 2 shows graphs of output voltage against time and supply current against time in a conventional charge pump;

Figure 3 shows a simplified schematic diagram of a charge pump according to the present invention;

Figure 4 shows graphs of output voltage against time and supply current against time in a charge pump according to the invention; and

Figure 5 is a simplified schematic diagram of a charge pump according to the invention.

Figure 3 shows a simplified schematic diagram of a charge pump according to the present invention. C1 is the flying capacitor, which is alternately switched either between VDD and VSS (i.e. the ground potential) or between VDD and VCp, so as to charge C2. The output load is represented by a constant current source CS having a constant load current ILoad. The two switches S1 operate synchronously in alternation with switches S2, S2a. The switching of S2a may be slightly different from that of S2 in order to avoid unwanted switching effects. During a first phase, switches S2, S2a are closed and C1 is charged via VCCS. During a second phase, S2, S2a are opened and switches S1 are closed.
In the second phase, flying capacitor C1 is coupled to storage capacitor C2 and discharges into C2. Both the charging and the discharging currents are controlled by VCCS. Accordingly, the voltage across C1 depends on the duration of the charging and discharging phases and the value of the current through VCCS.

The magnitude of the current supplied to C1 is defined by a feedback loop including error amplifier A and VCCS. A reference voltage VREF and the output voltage VCp are both coupled to the error amplifier A. The error amplifier generates a control voltage in relation to the difference between the reference voltage VREF and the output voltage VCp. The control voltage is applied to the voltage controlled current source VCCS. The voltage controlled current source VCCS is controlled to supply a higher constant current to the flying capacitor C1 if the voltage difference at the input of the error amplifier A is large. A small voltage difference entails only a small control voltage and therefore a small current through VCCS. So, error amplifier A and VCCS together determine the charge delivered to the flying capacitor C1 and thereby the output voltage VCp.

Generally, the time constant of the control mechanism is greater than the switching period of switches S1, S2, S2a, and the current through VCCS remains substantially constant for a constant load current ILoad. For Vref equal to VCp, the current Iq drawn from VDD is equal to two times ILoad.

Figure 4 shows the output voltage of the charge pump in Figure 3 against time. It can be seen that the output voltage ripple has a triangular form, instead of the sawtooth voltage generated by conventional charge pumps. This triangular output voltage contains less high frequency components than a sawtooth output voltage and has half the amplitude, therefore less noise is generated in following circuit units connected to the charge pump. Figure 4 also shows the current Iq drawn from VDD by the charge pump according to the invention.
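The triangular ripple can be reproduced with a short simulation of the storage-capacitor charge balance. The component values below are illustrative, not from the disclosure, and the source current is fixed at its steady-state value of twice the load current.

```python
def simulate_output_ripple(i_load=1e-3, c2=1e-6, t_switch=1e-5, n_cycles=3, dt=1e-8):
    """Sketch of the storage-capacitor voltage VCp for the Figure 3 topology.

    During the first half-cycle (C1 charging from the source), C2 alone
    supplies the load, so VCp falls linearly at i_load / c2. During the
    second half-cycle (C1 discharging into C2), C2 receives
    i_source - i_load = 2*i_load - i_load = i_load, so VCp rises at the
    same rate: a symmetric triangular ripple, as in Figure 4.
    """
    i_source = 2 * i_load          # steady-state source current set by the loop
    v, trace, t = 0.0, [], 0.0
    while t < n_cycles * t_switch:
        discharging = (t % t_switch) >= t_switch / 2
        i_c2 = (i_source - i_load) if discharging else -i_load
        v += i_c2 / c2 * dt        # dv = i * dt / C
        trace.append(v)
        t += dt
    return trace
```

With these illustrative values the peak-to-peak ripple works out to i_load * t_switch / (2 * c2) = 5 mV, and the waveform rises and falls symmetrically rather than dropping in the abrupt steps of the sawtooth.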
The current Iq is constant, with no sharp peaks, and is equal to twice the load current ILoad (the current supplied to the operational amplifier being driven by the charge pump). Therefore noise generated in the supply bus is considerably reduced.

Figure 5 shows a charge pump circuit according to an embodiment of the present invention. A controller CTRL, for example an oscillator, a state machine or a microcontroller, is connected between positive and negative supply voltages, VDD and VSS, respectively. The controller CTRL is provided with output ports S1, S2 and S2a for controlling switches in the charge pump circuit, using a free-running oscillator, or clock frequency, or the voltage at the drain of MP1 as an indicator of the flying capacitor charge or discharge state.

A flying capacitor C1 is connected to two switching paths implemented by MOS transistors. The first switching path is operable to connect the capacitor C1 between the positive supply voltage VDD and the negative supply voltage VSS, which can be ground, and is implemented by an NMOS transistor MNO and a PMOS transistor MP9, with the transistors MNO and MP9 acting as switches. The gate terminal of the transistor MNO is connected to the port S2 of the controller CTRL and the gate terminal of the transistor MP9 is connected to the port S2a of the controller CTRL, so that the control ports S2 and S2a open and close the switches in the first switching path by applying appropriate gate voltages to the transistors MNO and MP9, respectively. It is also possible to use a single control port at the controller CTRL for opening and closing the switching transistors MNO and MP9.

The second switching path is operable to connect the capacitor C1 between the positive supply voltage VDD and the charge pump voltage rail VCp and is implemented by two PMOS switching transistors MPO and MP5.
The gate terminals of both transistors MPO and MP5 are connected to the control port S1 of the controller CTRL, so that the control port S1 opens and closes the switches in the second switching path by applying an appropriate gate voltage to both transistors MPO and MP5.

A current source implemented by a PMOS transistor MP1 is connected between the positive supply voltage rail VDD and the capacitor C1 in both switching paths, so that when the first switching path is open the current source MP1 is connected to a first terminal of the capacitor C1 and when the second switching path is open the current source is connected to a second terminal of the capacitor C1. The gate terminal of the current source transistor MP1 represents the voltage controlled current source VCCS shown in Figure 3. The gate of MP1 is connected to a circuit that represents an error amplifier (as the error amplifier A shown in Figure 3). The error amplifier and reference voltage generation circuit is provided by MP3, current source Iref and resistor R1. In this case the reference voltage, i.e. the difference between the output voltage VCp and VDD, is equal to the Vgs of MP3 + R1*Iref. The gain is provided by MP3, the reference current source Iref, and the drain terminal of PMOS transistor MP3, which is configured to act as an error amplifier. Therefore, the output voltage VCp of the charge pump can be regulated as required by choosing an appropriate value of Iref.

A storage capacitor C2, for storing the voltage that is to be applied to a load, is connected between the charge pump voltage rail VCp and the positive supply voltage rail VDD.

In a first phase of operation, the control ports S2 and S2a in the controller CTRL open the transistors MP9 and MNO so that current from the current source transistor MP1 flows through the flying capacitor C1 from the positive supply voltage rail VDD to the negative supply voltage rail (ground), thereby charging the capacitor C1.
In a second phase of operation, the control ports S2 and S2a close the transistors MP9 and MNO, and the control port S1 opens the transistors MPO and MP5. This means that, in effect, the negative terminal of the capacitor C1 is now connected to the positive supply voltage rail VDD via the current source MP1 and the positive terminal of the capacitor C1 is connected to the charge pump voltage rail VCp. Current from the current source transistor MP1 then flows through the capacitor C1 in the opposite direction to the direction of current flow through the capacitor C1 during the first phase of operation. This discharges the capacitor C1 and, as the capacitor C1 discharges, it charges the storage capacitor C2 to the required op-amp input voltage.

When it is detected at the input port cp of the controller CTRL that the charge pump voltage rail is at the required voltage, the controller CTRL closes the transistors MPO and MP5 using the control port S1 and opens the transistors MNO and MP9 using the control ports S2 and S2a, respectively. The first phase of operation of the charge pump then begins again, so that the flying capacitor performs a charge and discharge cycle and the charge pump can operate continuously.

Although the present invention has been described with reference to a particular embodiment, it is not limited to this embodiment and no doubt further alternatives will occur to the skilled person that lie within the scope of the invention as claimed.
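As a compact summary of the switching sequence just described, the gate-drive levels per phase can be tabulated. The active levels below are inferred from the stated device types (an NMOS switch conducts with a high gate, a PMOS switch with a low gate); the controller's actual drive voltages are not specified in the text, so this is a hypothetical sketch.

```python
def gate_drives(phase):
    """Hypothetical gate-drive levels for the Figure 5 switches in each phase."""
    if phase == 1:
        # Charge C1 between VDD and VSS: MNO (NMOS, port S2) and MP9 (PMOS,
        # port S2a) conduct, while MPO and MP5 (both PMOS, port S1) are held
        # off by a high gate.
        return {"S2": "high", "S2a": "low", "S1": "high"}
    # Discharge C1 into C2: MPO and MP5 conduct (low gate); MNO and MP9 are off.
    return {"S2": "low", "S2a": "high", "S1": "low"}
```

The table makes the single-port control of the second switching path visible: both MPO and MP5 are PMOS devices driven from S1 with the same polarity.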
Described herein are technologies for managing lists of uniform resource locators ("URLs") for a mobile device based, at least in part, upon the determined location of the device. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
A method to provide location aware services using a mobile device, comprising:

determining a location of the mobile device, wherein the location of the mobile device is determined using at least one or more of global positioning system, GPS, wireless fidelity, Wi-Fi, systems, and identifiable wireless sources;

determining one or more contextual factors based on the location of the mobile device, wherein the one or more contextual factors includes a mode of travel of a user of the mobile device, a type of location, and one or more concerts scheduled in the location of the mobile device;

recommending one or more location relevant recommendations based on the location of the mobile device and the one or more contextual factors of the mobile device; and

displaying a list of the one or more location relevant recommendations to enable the user of the mobile device to be prompted to use the one or more location relevant recommendations,

wherein the one or more location relevant recommendations allow the user to play music, which is relevant to the location and the contextual factors,

wherein the one or more location relevant recommendations allow the user to know about one or more concerts scheduled in the current location,

wherein the one or more location relevant recommendations aid the user of the mobile device to avoid manual search to access the one or more relevant recommendations.

The method of claim 1, wherein knowing about the one or more concerts scheduled in the current location is based on a personal history of web-site usage at or near the location.

The method of claims 1 or 2, wherein the type of location indicates whether the user of the mobile device is at home or flying or at a new location.

The method of claims 1 to 3, wherein the identifiable wireless sources include an identifier assigned to one or more wireless sources.

The method of claim 4, wherein the identifiable wireless sources include an identifier assigned to a wireless access point, WAP.

The method of
claims 1 to 5, wherein a recommendation of the list of one or more relevant recommendations represents an address of a web-site.
BACKGROUND

The use of mobile devices, such as smartphones, is nearly ubiquitous. Many of these mobile devices include the capability to determine their physical location. That is, the mobile device is capable of determining its location in the physical world. Conventionally, location determination is accomplished by using Global Positioning Systems (GPS), some form of triangulation or interpolation of multiple radio signals, internet protocol (IP) geo-location, or some combination thereof.

A collection of so-called location-based services (LBS) are emerging that take advantage of the location-detection capability of the mobile devices that so many people are carrying with them each day. For example, LBSs include targeted advertising, social networking, locating friends ("check-ins"), photo tagging, life logging, location-based games, fitness monitoring, and others. Location-based services may include vehicle or parcel tracking as well.

With the ubiquitous nature of the mobile devices comes frequent access to websites on such devices via wireless Internet access. Users have grown accustomed to finding information by searching the World Wide Web (i.e., the "web") at any time and any place.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows example scenarios to illustrate implementations in accordance with the technologies described herein.

Fig. 2 is a flow chart illustrating an example method in accordance with the technologies described herein.

Fig. 3 is a state diagram illustrating an example method in accordance with the technologies described herein.

Fig. 4 illustrates an example system in accordance with the technologies described herein.

Fig. 5 illustrates an example computing device to implement in accordance with the technologies described herein.

Fig. 6 illustrates an example device to implement in accordance with the technologies described herein.

The Detailed Description references the accompanying figures.
In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.

DETAILED DESCRIPTION

Disclosed herein are technologies for managing lists of uniform resource locators ("URLs") for a mobile device based, at least in part, upon the determined location of the device. Generally, a URL is the global address of documents, services, and other resources on the World Wide Web (i.e., the "web"). A website is a set of related web pages containing content such as text, images, video, audio, etc. The web pages of a website are the most common documents to which a URL points. Consequently, a URL may also be called a link, a website address, or a web address. Collectively, a URL list may be called favorites or bookmarks.

The described technology may include, for example, helping a user of a mobile device easily find URLs to websites that are appropriate for, and best suited to, the current location. The disclosed technologies may also include automatic and dynamic generation of a list of URLs to location-relevant websites. Similarly, such technologies may include automatic caching of location-relevant websites (or pages at such sites) when the present wireless connection to the Internet is not bandwidth restrictive or cost prohibitive.

Often some websites are designed for use in specific locations or types of locations. Some examples include a university campus map, a regional subway application, or information related to a particular neighborhood or city location. An example of a website that is useful in specific types of locations is a baseball scoring website, which is useful while at a baseball game.

Unfortunately, using conventional approaches, a user of a mobile device can find it difficult to find websites associated with or appropriate for a specific location and to cull the valuable ones from the less helpful ones.
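For illustration only, the per-location "favorites" list described above can be pictured as a small location-keyed store. This is a hedged sketch; the class, method, location identifiers, and URLs below are hypothetical names, not part of the disclosed implementation:

```python
# Illustrative sketch of a location-keyed URL list ("Useful Here").
# All names here are hypothetical, chosen for this example only.

class LocationUrlStore:
    """Maps a location identifier to the URLs used at that location."""

    def __init__(self):
        self._urls_by_location = {}  # location_id -> {url: use_count}

    def record_use(self, location_id, url):
        """Associate a URL with a location each time it is used there."""
        counts = self._urls_by_location.setdefault(location_id, {})
        counts[url] = counts.get(url, 0) + 1

    def useful_here(self, location_id, limit=5):
        """Return the most-used URLs for the given location."""
        counts = self._urls_by_location.get(location_id, {})
        ranked = sorted(counts, key=counts.get, reverse=True)
        return ranked[:limit]

store = LocationUrlStore()
store.record_use("transit-center-112", "https://example.org/subway-schedule")
store.record_use("transit-center-112", "https://example.org/subway-schedule")
store.record_use("transit-center-112", "https://example.org/city-map")
print(store.useful_here("transit-center-112"))
# -> ['https://example.org/subway-schedule', 'https://example.org/city-map']
```

In a real implementation the counts would be one contextual factor among several, and the store could be synchronized with a crowd-sourced database, as described in the sections that follow.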
With the technology disclosed here, a user can arrive in a location and have his or her mobile device provide a list of links to one or more websites that are appropriate for the specific location. If the user arrives in New York City, for example, there is a tremendous number of available websites to assist in finding museums, restaurants, or even the subway schedule. Those available websites vary in degree of quality and location appropriateness. The technology described herein helps the user find which location-specific websites are available and which ones are valuable to that user.

Another concern not adequately addressed by the conventional approaches is how to manage already-cached location-specific websites based on their appropriateness for the current location. When the user leaves a particular location where a location-specific website is appropriate, the technology described herein removes the location-specific website from the cache. If the user is leaving the location, then there is no need for the device to cache the web pages of that site for the user.

The identification of websites that are appropriate for a particular location can also be used more generally to predict the websites that a user will access at any point in the day. As the user traverses the places and routes that he normally travels, the mobile device keeps track of the websites associated with each location (place/route).

Each user of a mobile device has limited knowledge and understanding of which location-specific websites are appropriate for a particular location. For example, a user who attends a minor league baseball game is likely unaware of a website that is particular to the ballpark and provides live statistics of the game. The user might never find the website by searching for it. Conventional approaches require a large amount of the user's time and manual input.
When searching for websites, users can query for specific websites, but they have to actively do so with keyword searches or knowledge of the type of website they are looking for. Furthermore, users must remember which websites are related to which location or try to manually arrange them in ways that make this process easier. In short, the technology described herein helps a user to gain the benefits of using location-specific websites without requiring a large amount of manual searching for such websites.

EXAMPLE LOCATION-AWARE URL LIST MANAGEMENT SCENARIOS

Fig. 1 shows an example set of scenarios 100 in which one or more implementations of the technology described herein may be employed. As depicted, the scenarios include four locations with a mobile device in operation at each location. User 102 is holding a smartphone 110 as he approaches his train in a metropolitan transit center 112 of a city that he is visiting for the first time. Another user (not shown) is waiting with a cell phone 120 during a layover at an airport 122. A hungry traveler (not shown) is using his tablet computer 130 while eating at a restaurant 132. Still another user (not shown) has her smartphone 140 with her at home 142.

Each of these mobile devices is connected to a communications network 150 via a wireless connection. Such a connection can be Wi-Fi, Bluetooth, cellular, or another technology. This connection links the mobile devices to the Internet, a private intranet, and/or to a so-called cloud. Each of the web servers 170 and a database server 160 may be part of the Internet, a private intranet, or a cloud, at least in part. Of course, each of the web servers 170 and the database server 160 can be implemented as one or more servers.

While referring to Fig. 1, various example scenarios 100 are discussed. When at the transit center 112, the user 102 browses the web on his smartphone 110. Some of those might include some websites that are specific to the transit system of the city.
For example, these might include a website with a subway train schedule. Using known or new techniques, the smartphone 110 determines its current location, which is the transit center 112. That current location (the transit center 112) is associated with the website that the user 102 is using on the smartphone 110 while at that location. Other contextual factors of the website's use are associated with the website and the current location. For example: how much the website is used at that location, how often it is used at that location, which pages on that website are used at that location, how frequently the website is used at that location by others, and similar factors. In addition to use, some of the contextual factors may include ratings provided by users of websites at particular locations.

This associated information can be stored on the smartphone 110. In addition, such location-aware associations can be performed by many mobile devices at that transit center 112 over a period of time. Those various associations can be uploaded via the communications network 150 to the database server 160, where such associations are collected and organized. The information gathered about the various associations between the websites and locations, and perhaps contextual factors, can be called crowd-sourced, since it is gathered from a crowd of users over time.

While waiting a few hours in the airport 122 for his connecting flight home, the user may wish to explore what is available to him at the airport. Using an implementation of the technology described herein, the cell phone 120 communicates its current location to the database server 160, which returns a list of links to websites that are specific to the current location of the phone 120.
The links can be listed in order of relevance based upon contextual factors associated with the linked websites in the database server 160. Similar to the airport scenario, the hungry traveler can receive a list of recommended websites on his tablet computer 130 while dining at the restaurant 132. The traveler can choose to browse a local news website while dining.

While carrying her smartphone 140, a user arrives at her home 142 in Spokane, Washington after a business trip to New York City. While she was in New York City, she frequently used several websites that helped her get around and better enjoy the city. Now she is home and not interested in her favorites list being populated by links to websites relevant to a city across the nation. Her smartphone 140 determines her current location and presents her a list of website links relevant to that current location. Indeed, the browser on her smartphone 140 may have a list simply labeled "Useful Here" that lists only location-relevant website links.

LOCATION AWARENESS

Location awareness involves the mobile device determining its present location. Conventional location-determination approaches include GPS and signal positioning (e.g., triangulation, trilateration, and other forms of interpolation and extrapolation) to determine geo-physical location relative to multiple signal sources. GPS is a near-ubiquitous outdoor location technology, and a typical GPS-enabled smartphone has three-to-five-meter accuracy. For signal positioning, the signal sources can use cellular or a variant of IEEE 802.11 (i.e., Wi-Fi). Signal-positioning approaches rely upon a map of signal sources whose locations are known to extrapolate a location of a device.

Rather than relying on signal-triangulation-based location approaches (like GPS) to determine geo-location with a fine-grain and absolute resolution, the technology described herein is based upon a location determination with a coarse grain and relative resolution.
More particularly, the technology described herein utilizes determinations of logical or semantic locations. One or more implementations include, for example, a mobile device recognizing and learning a frequented discrete location based on the "observed" ambient radio environment at that location. In particular, the mobile device can recognize and learn which ambient identifiable wireless ("IWS") sources are part of a topography within reception range at that discrete location.

A wireless access point (WAP) is a specific example of an ambient IWS source. The IWS sources are called ambient herein because they may be detected or "observed" in the environment while a mobile device moves about the world. The IWS sources are called "identifiable" because each is uniquely identifiable. For example, each WAP may be uniquely identified by its basic service set identification (BSSID) or media access control (MAC) address. Of course, other identifying characteristics may be used alone or in combination with each other or with the BSSID or MAC address. Examples of such other identifying characteristics include service set identification (SSID) and received signal strength indication (RSSI).

Geo-location, also called geo-physical location, includes determination of a real-world geographic location of an object or person. "Physical location" is a broader term than geo-location and includes a determination of any real-world location of the object or person.

CONTEXTUAL FACTORS

As part of one or more implementations described herein, a mobile device can determine contextual factors. In short, a contextual factor is some observed, measured, calculated, and/or determined data about the context in which the mobile device exists. A contextual factor answers some aspects of the questions that are typically asked when gathering information: how, who, what, when, where, and why. In general, the determined present location of the mobile device is a contextual factor.
However, herein the location (i.e., where) is a special case of a contextual factor that is handled separately. Consequently, as used herein, contextual factors explicitly exclude the location of the mobile phone because that is handled separately. That said, contextual factors can include locations where the user is predicted to be traveling, estimated time/place of arrival, or route prediction.

An example of a contextual factor is the mode of travel of the user of the mobile device. Is the user walking, biking, riding a bus or train, or in a motor vehicle? If walking, the user might, for example, want to see websites for a local bus schedule.

Another example of a contextual factor is the type of location. For example, if the user is determined to be at Spokane International Airport, the location is of type "airport" or, more generally, "transportation"; consequently, websites associated with that type of location can be recommended to the user.

Another example of a contextual factor is the type of event happening at a location. For example, HP Pavilion in San Jose is home to the San Jose Sharks ice hockey team, but also hosts various concerts, shows, and events. In addition, a known schedule of events that occur at a particular location may be a contextual factor.

Many of the contextual factors are based on website usage. The user builds a personal history of website usage at or near the determined location. Furthermore, many users generate a crowd-sourced history of website usage at or near the determined location. The route along which websites are used, and the destination toward which the user is traveling when websites are used en route, are other factors. Some other contextual factors may include, for example, crowd-sourced information about websites, such as ratings of websites.

EXAMPLE OF LOCATION-AWARE URL LIST MANAGEMENT OPERATION

Fig. 2 illustrates an example process 200 for implementing, at least in part, the technology described herein.
In particular, process 200 depicts an example of location-aware URL-list-management operations performed, at least in part, by a mobile device, such as smartphone 110. Servers, such as a database server 160 or other cloud-based services may perform some portions of the example process 200.At 202, a mobile device determines its present location using one or more of the new or known location-awareness approaches. The determined location of the mobile device can be, for example, a physical location, a geo-location, or a logical location. The geo-location information can be obtained from a GPS. The location information can be obtained, at least in part, from one or more ambient IWS sources.At 204, the mobile device determines contextual factors of the mobile device.At 206, the mobile device accesses a database of website associations. The database provides an association between websites, their URLs, and locations. In addition, the database may provide additional information about contextual factors associated with the websites and/or with locations. The database, or part thereof, can be stored locally on the mobile device itself. In some implementations the mobile device may access a remote database via a communications network. For example, the smartphone 110 accesses the database server 160 via a network 150. The database may include crowd-sourced information about websites. For example, the database may include a collection of website usage information and user-supplied ratings from many different users for websites used at or near locations.At 208, the database provides a list of websites associated with the present location of the mobile device. In some implementations, the list may include websites associated with the present location or with locations near the present location. 
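The database access at 206 and the lookup at 208 might be sketched as below. The schema, table, and column names are illustrative assumptions, not details from the disclosure; a real implementation could equally use a remote service rather than a local database:

```python
import sqlite3

# Hypothetical local association database: each row ties a website URL
# to a location identifier, with a crowd-sourced use count.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE site_assoc (url TEXT, location_id TEXT, use_count INTEGER)"
)
db.executemany(
    "INSERT INTO site_assoc VALUES (?, ?, ?)",
    [
        ("https://example.org/subway", "transit-center-112", 40),
        ("https://example.org/museum", "transit-center-112", 12),
        ("https://example.org/ballpark-stats", "ballpark-7", 55),
    ],
)

def sites_for_location(conn, location_id):
    """Return URLs associated with a location, most used first (step 208)."""
    rows = conn.execute(
        "SELECT url FROM site_assoc WHERE location_id = ? "
        "ORDER BY use_count DESC",
        (location_id,),
    )
    return [url for (url,) in rows]

print(sites_for_location(db, "transit-center-112"))
# -> ['https://example.org/subway', 'https://example.org/museum']
```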
Additionally or alternatively, the database provides a list of websites that are associated with locations other than the present location of the mobile device or locations nearby it. This listing may be used to remove such websites from the device's cache. For websites associated with the present location, operations 210 and 212 are performed. For websites that are associated with a location other than the present location, operations 214 and 216 are performed.

At 210, the mobile device selects one or more websites that are associated with the present location or with nearby locations. If location is the only criterion, then, in some implementations, all the websites associated with the present location are selected. In some implementations the selecting may be based, at least in part, on contextual factors. In one or more implementations, the selection may include the mobile device querying the database to find a list of websites that are associated with the determined location and then choosing one or more websites from the list of website links found by the query.

When selecting the appropriate websites, the mobile device may collect a group of seemingly disparate but linked web pages together and designate them as a website. In doing this, a representative entry-point URL is selected for the designated website.

At 212, the mobile device generates a URL list of the links to the selected websites. The list may be ordered based upon one or more of the contextual factors. For example, the websites used most at a particular location by the most people may be listed first.

At 213, the mobile device displays the generated URL list of websites relevant to the present location. The user may view the generated list via the mobile browser. Alternatively, the list may be viewed outside the context of the mobile browser.
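The contextual-factor ordering at 212 could be sketched as a weighted scoring of candidate links. The factor names and weights below are illustrative assumptions rather than values given in the disclosure:

```python
# Hypothetical ranking of candidate URLs by contextual factors
# (personal use, crowd use, crowd ratings). Weights are illustrative.

def rank_urls(candidates, weights=None):
    """candidates: list of dicts carrying per-site contextual factors."""
    weights = weights or {"my_uses": 1.0, "crowd_uses": 0.5, "rating": 2.0}

    def score(site):
        # Weighted sum over whichever factors the site record carries.
        return sum(weights[k] * site.get(k, 0) for k in weights)

    return [s["url"] for s in sorted(candidates, key=score, reverse=True)]

candidates = [
    {"url": "https://example.org/subway", "my_uses": 9, "crowd_uses": 4, "rating": 4.5},
    {"url": "https://example.org/museum", "my_uses": 1, "crowd_uses": 8, "rating": 3.0},
]
print(rank_urls(candidates))
# -> ['https://example.org/subway', 'https://example.org/museum']
```

The weights could themselves be tuned per user or per location type, which is one way the "most used by the most people" ordering described above could be realized.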
Of course, when the user chooses a URL from the list, the mobile device will open the mobile browser to retrieve and view the chosen website.

Instead of websites that are associated with the present location, the mobile device may act upon websites that are associated with a different location than the present location. For websites that are associated with a location other than the present location, operations 214 and 216 are performed.

At 214, the mobile device selects one or more websites that are associated with a location that is different from the present location. In some implementations, the mobile device may select those websites that are associated with a location far from the present location. Whether a location is far enough away can be determined by the known or calculable distance between the present and associated locations exceeding a distance threshold. Alternatively, the database may designate nearby locations for websites or for specific locations.

If location is the only criterion, then, in some implementations, all the websites associated with a location other than the present location are selected. In some implementations the selecting may be based, at least in part, upon the contextual factors. In one or more implementations, the selection may include the mobile device querying the database to find a list of websites that are associated with a location other than the determined location and then choosing one or more websites from the list of websites found by the query.

At 216, the mobile device determines whether content of the selected websites is stored in the cache of the mobile device. If so, then the mobile device releases the portions of the cache storing content of the selected one or more websites. That is, the mobile device removes one or more of the selected websites from the cache on the mobile device. Doing this frees up valuable memory on the mobile device.

ANOTHER EXAMPLE OF LOCATION-AWARE URL LIST MANAGEMENT OPERATION

Fig.
3 illustrates a state diagram 300 of an example process for implementing, at least in part, the technology described herein. In particular, state diagram 300 depicts an example of location-aware URL list management operation performed, at least in part, by a mobile device, such as a smartphone 110. Servers, such as a database server 160 or other cloud-based services, may perform some portions of the state diagram 300.

At 301, a mobile device tracks its location continually until the device determines that the user has arrived at a new location.

At 302, when a user arrives at a new location that he or she has never visited with the mobile device before, the mobile device determines that this is a place that the user has not visited before. That is, this location is a new location. In one or more implementations, the place at which a user will arrive can be predicted before arrival if the user is traveling to a known location. In this situation, the device can enter state 302 and then 304 prior to the user's arrival.

At 304, the mobile device determines the geo-location and queries a location-aware database to get a list of links to websites associated with the new location. The mobile device presents this list to the user and caches the websites the user desires. The mobile device adds this new place to a model of location-aware websites, which may involve updating the database of such websites. The mobile device tracks the usage of websites while the user remains at this location.

At 306, when the user arrives at a place that he or she has previously visited, the mobile device checks for updates to websites associated with this location and generates a URL list of those websites. In addition, the device may also query the database to find new or better websites to include in the URL list.
The mobile device tracks the usage of websites while the user remains at this location.

At 308 and 310, the mobile device continues to track the user's location until the user moves away from the location. If the user moves away from the location, then the device moves to state 312.

At 312, the mobile device updates usage statistics and sends the statistics to the database server.

EXAMPLE SYSTEM

Fig. 4 illustrates an example system 400 for implementing the technology described herein. The system 400 includes a mobile device 404, a network 430, and a network or cloud-based server 440. The mobile device 404 may be the same as or similar to mobile devices 110, 120, 130, and 140, which have already been introduced. The cloud-based server 440 may be the same as or similar to the database server 160, which has already been introduced.

The mobile device 404 includes a memory 410, one or more processor(s) 412, a wireless signal manager 414, a display system 416, a web browser 418, a location-awareness system 420, a contextualizer 422, a URL list generator 424, and a local database 426. These functional components can be separate hardware units or some combination thereof. Alternatively, the components can be implemented, at least in part, in software and thus be stored in the memory 410 and executed by the processors 412.

The memory 410 may include a cache. The cache stores copies of website content (e.g., text, images, audio, video, etc.) that is likely to be needed again in the near future. This allows for quicker access next time.

The wireless signal manager 414 handles all wireless signals sent or received by the device. For example, the wireless signal manager 414 handles the communications via the network 430. The wireless signal manager 414 especially handles signal management that aids in location awareness.
For example, the wireless signal manager 414 may include the GPS components, cellular transceivers, and Wi-Fi transceivers.The display system 416 includes the display itself and the graphics system to drive that display. The web browser 418 typically is an application running on the device that is designed to reach out to the web and load web pages therefrom for the user to view on the mobile device.The location-awareness system 420 uses one or more of the existing and/or new location-awareness approaches to determine the present location of the mobile device 404. The contextualizer 422 determines the contextual factors. The URL list generator 424 generates a list of links to the selected websites. The local database 426 stores relevant data, such as the associations between known locations and often used websites.The network 430 can be a wired and/or wireless network. It can include the Internet infrastructure and it may be presented as the cloud. The network 430 includes wired or wireless local area networks, a cellular network, and/or the like. The network 430 links the mobile device 404 with the network server 440. Some implementations of the technology described here operate without assistance from the network.The network or cloud-based server 440 provides assistance to the mobile device 404 as part of one or more implementations of the technology described herein. In some implementations, the network 430 and network server 440 are not used. The network server 440 can be one or more actual servers.The network server 440 includes a website-searching assistant 442 and a remote database 450. The website-searching assistant 442 helps locate relevant websites for a query submitted by the mobile device 404. The remote database 450 stores associations between websites, their URLs, locations, and/or contextual factors. 
These associations can be collected from many mobile devices, such as the mobile device 404. As depicted and discussed, the wireless devices 110, 120, 140, and 404 are mobile phones. However, the devices can be other types of portable devices, such as smartphones, cell phones, tablet computers, wireless-enabled wearable devices, laptop computers, netbook computers, or the like.

EXAMPLE COMPUTING DEVICE

Fig. 5 illustrates an example system 500 that may implement, at least in part, the technologies described herein. In various implementations, system 500 is a media system, although system 500 is not limited to this context. For example, system 500 can be incorporated into a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet, or smart television), mobile internet device (MID), messaging device, data communication device, and so forth.

In various implementations, system 500 includes a platform 502 coupled to a display 520. Platform 502 receives content from devices such as content services device 530, content delivery device 540, or other similar content sources. A navigation controller 550 including one or more navigation features may be used to interact with, for example, platform 502 and/or display 520.

In various implementations, platform 502 includes any combination of a chipset 505, a processor 510, memory 512, storage 514, a graphics subsystem 515, applications 516, and/or a radio 518. Chipset 505 provides intercommunication among processor 510, memory 512, storage 514, graphics subsystem 515, applications 516, and/or radio 518.
For example, chipset 505 can include a storage adapter (not depicted) capable of providing intercommunication with storage 514.

Processor 510 may be implemented as a complex instruction set computer (CISC) or reduced instruction set computer (RISC) processor, an x86 instruction set compatible processor, a multicore processor, or any other microprocessor or central processing unit (CPU). In various implementations, processor 510 may be dual-core processors, dual-core mobile processors, and so forth.

Memory 512 may be implemented as a volatile memory device such as, but not limited to, a random access memory (RAM), dynamic random access memory (DRAM), or static RAM (SRAM).

Storage 514 may be implemented as a nonvolatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up synchronous DRAM (SDRAM), and/or a network accessible storage device. In various implementations, storage 514 includes technology to provide increased storage performance and enhanced protection for valuable digital media when multiple hard drives are included.

Graphics subsystem 515 processes images, such as still images or video, for display. Graphics subsystem 515 can be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple the graphics subsystem 515 and the display 520. For example, the interface can be a high-definition multimedia interface (HDMI), DisplayPort, wireless HDMI, and/or wireless-HD-compliant techniques. Graphics subsystem 515 may be integrated into processor 510 or chipset 505. In some implementations, graphics subsystem 515 may be a stand-alone card communicatively coupled to chipset 505.

The graphics and/or video processing techniques described herein may be implemented in various hardware architectures.
For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another implementation, the graphics and/or video functions may be provided by a general-purpose processor, including a multicore processor. In further embodiments, the functions may be implemented in a consumer electronics device.

Radio 518 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques involve communications across one or more wireless networks. Example wireless networks include, but are not limited to, wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks. In communicating across such networks, radio 518 operates in accordance with one or more applicable standards in any version.

In various implementations, display 520 includes any television-type monitor or display. Display 520 may include, for example, a computer display screen, touch-screen display, video monitor, television-like device, and/or a television. Display 520 can be digital and/or analog. In various implementations, display 520 may be a holographic display. In addition, display 520 may be a transparent surface that receives a visual projection. Such projections convey various forms of information, images, and/or objects. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. Under the control of one or more software applications 516, platform 502 can display user interface 522 on display 520.

In various implementations, content services device(s) 530 may be hosted by any national, international, and/or independent service and thus accessible to platform 502 via the Internet. Content services device(s) 530 may be coupled to platform 502 and/or to display 520.
Platform 502 and/or content services device(s) 530 may be coupled to a network 560 to communicate media information to and from the network 560. Content delivery device(s) 540 also may be coupled to platform 502 and/or to display 520.

In various implementations, content services device(s) 530 include a cable television box, personal computer, network, telephone, Internet-enabled devices, appliances capable of delivering digital information and/or content, and any other similar device capable of unidirectionally or bidirectionally communicating content between content providers and platform 502 and/or display 520, via network 560 or directly. The content can be communicated unidirectionally and/or bidirectionally to and from any one of the components in system 500 and a content provider via the network 560. Examples of content include any media information including, for example, video, music, medical and gaming information, and so forth.

Content services device(s) 530 receive content such as cable television programming including media information, digital information, and/or other content. Examples of content providers include any cable or satellite television or radio or Internet content providers. The provided examples are not meant to limit implementations in accordance with the present disclosure in any way.

In various implementations, platform 502 may receive control signals from navigation controller 550 having one or more navigation features. The navigation features of controller 550 may be used to interact with user interface 522, for example. In some embodiments, navigation controller 550 may be a pointing device such as a computer hardware component, specifically a human interface device, that allows a user to input spatial (e.g., continuous and multidimensional) data into a computer.
Many systems such as graphical user interfaces (GUI), and televisions and monitors allow the user to control and provide data to the computer or television using physical gestures.

Movements of the navigation features of controller 550 can be replicated on a display (e.g., display 520) by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display. For example, under the control of software applications 516, the navigation features located on navigation controller 550 can be mapped to virtual navigation features displayed on user interface 522. In some embodiments, controller 550 may not be a separate component but may be integrated into platform 502 and/or display 520. The present disclosure, however, is not limited to the elements or in the context shown or described herein.

In various implementations, drivers (not shown) include technology to enable users to instantly turn on and off platform 502 like a television with the touch of a button after initial boot up, when enabled. Program logic allows platform 502 to stream content to media adaptors or other content services device(s) 530 or content delivery device(s) 540 even when the platform is turned off. In addition, chipset 505 includes hardware and/or software support for 5.1 surround sound audio and/or high definition 5.1 surround sound audio, for example. Drivers may include a graphics driver for integrated graphics platforms. In some embodiments the graphics driver may comprise a peripheral component interconnect (PCI) express graphics card.

In various implementations, any one or more of the components shown in system 500 can be integrated. For example, platform 502 and content services device(s) 530 can be integrated, or platform 502 and content delivery device(s) 540 can be integrated, or platform 502, content services device(s) 530, and content delivery device(s) 540 can be integrated. In various embodiments, platform 502 and display 520 can be an integrated unit. 
Display 520 and content services device(s) 530 can be integrated, or display 520 and content delivery device(s) 540 can be integrated. These examples are not meant to limit the present disclosure.

In various embodiments, system 500 can be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 500 can include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennae, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media includes portions of a wireless spectrum, such as the RF spectrum. When implemented as a wired system, system 500 can include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media can include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, coaxial cable, fiber optics, and others.

Platform 502 can establish one or more logical or physical channels to communicate information. The information includes media information and control information. Media information refers to any data representing content meant for a user. Examples of content include data from a voice conversation, videoconference, streaming video, electronic mail ("e-mail") message, voice-mail message, alphanumeric symbols, graphics, image, video, text, and so on. Data from a voice conversation can be, for instance, speech information, silence periods, background noise, comfort noise, tones, and other similar items. 
Control information refers to any data representing commands, instructions, or control words meant for an automated system. For example, control information can be used to route media information through a system, or instruct a node to process the media information in a predetermined manner. The embodiments, however, are not limited to the elements or in the context shown or described in Fig. 5.

As described above, system 500 can be embodied in varying physical styles or form factors. Fig. 6 illustrates implementations of a small form-factor device 600 in which system 500 can be embodied. In embodiments, for example, device 600 can be implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries.

Examples of a mobile computing device, in addition to those already mentioned, also may include computers that are arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computers, clothing computers, and other wearable computers. In various embodiments, a mobile computing device can be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications. Although some embodiments can be described with a mobile computing device, other embodiments can be implemented using other wireless mobile computing devices as well. The embodiments are not limited in this context.

As shown in Fig. 6, device 600 includes a housing 602, a display 604, an I/O device 606, and an antenna 608. Device 600 also includes navigation features 612. Display 604 includes any suitable display unit for displaying information appropriate for a mobile computing device. I/O device 606 includes any suitable I/O device for entering information into a mobile computing device. 
Examples for I/O device 606 include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, voice recognition device and software, and others. Information also can be entered into device 600 by way of a microphone (not shown). Such information is digitized by a voice recognition device (not shown). The embodiments are not limited in this context.

Various embodiments can be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, etc.), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and more. Examples of software include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. 
Determining whether an embodiment is implemented using hardware elements and/or software elements varies in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.

One or more aspects of at least one embodiment can be implemented by representative instructions stored on a machine-readable medium that represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," can be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

While certain features set forth herein have been described with reference to various implementations, this description is not intended to be construed in a limiting sense. Hence, various modifications of the implementations described herein, as well as other implementations, which are apparent to persons skilled in the art to which the present disclosure pertains, are deemed to lie within the scope of the present disclosure.

Realizations in accordance with the present invention have been described in the context of particular embodiments. These embodiments are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are demonstrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. 
Finally, structures and functionality presented as discrete components in the various configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of the invention as defined in the claims that follow.

ADDITIONAL AND ALTERNATIVE IMPLEMENTATION NOTES

In general, a mobile device is a small, hand-held, portable computing device that typically has a display screen and some user input mechanism (e.g., touch screen or keyboard). Often they weigh less than two pounds. Often, they are equipped with wireless communications capabilities, such as Wi-Fi, Bluetooth, and cellular. Examples of implementations of a mobile device include a smartphone, a tablet computer, a feature phone, a personal digital assistant (PDA), any wireless-enabled wearable devices, laptop computers, netbook computers, or other so-called handheld devices or computers.

In the above description of exemplary implementations, for purposes of explanation, specific numbers, materials, configurations, and other details are set forth in order to better explain the present invention, as claimed. However, it will be apparent to one skilled in the art that the claimed invention may be practiced using different details than the exemplary ones described herein. In other instances, well-known features are omitted or simplified to clarify the description of the exemplary implementations.

The inventor intends the described exemplary implementations to be primarily examples. The inventor does not intend these exemplary implementations to limit the scope of the appended claims. Rather, the inventor has contemplated that the claimed invention might also be embodied and implemented in other ways, in conjunction with other present or future technologies.

Moreover, the word "exemplary" is used herein to mean serving as an example, instance, or illustration. 
Any aspect or design described herein as exemplary is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word "exemplary" is intended to present concepts and techniques in a concrete fashion. The term "technology," for instance, may refer to one or more devices, apparatuses, systems, methods, articles of manufacture, and/or computer-readable instructions as indicated by the context described herein.

As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more," unless specified otherwise or clear from context to be directed to a singular form.

These processes are illustrated as a collection of blocks in a logical flow graph, which represents a sequence of operations that can be implemented in mechanics alone or in combination with hardware, software, and/or firmware. In the context of software/firmware, the execution of the instructions on the medium may cause performance of the operations described herein.

Note that the order in which the processes are described is not intended to be construed as a limitation, and any number of the described process blocks can be combined in any order to implement the processes or an alternate process.

The term "computer-readable media" includes computer-storage media. 
For example, computer-storage media may include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, and magnetic strips), optical disks (e.g., compact disk [CD] and digital versatile disk [DVD]), smart cards, flash memory devices (e.g., thumb drive, stick, key drive, and SD cards), and volatile and nonvolatile memory (e.g., random access memory [RAM], read-only memory [ROM]). Examples provide a mobile device comprising: a location-awareness system configured to determine a location of the mobile device; a URL-list-manager configured to: select one or more websites that are associated with the determined location; generate a list of uniform resource locators ("URLs") to the one or more of the selected websites. In some examples the mobile device further comprises a contextualizer configured to determine contextual factors of the mobile device, the URL-list-manager being further configured to select based, at least in part, upon the determined contextual factors. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, crowd-sourced history of website usage at or near the determined location, identification of type of the determined location, and identification of the type of event happening at the location. In some examples the URL-list-manager is further configured to designate a group of web pages to be part of at least one of the selected websites. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, geo-location, and a logical location. In some examples the location-awareness system is further configured to determine the location using, at least in part, geo-location information obtained from a global positioning system (GPS). 
In some examples the location-awareness system is further configured to determine the location using, at least in part, location information obtained from one or more ambient identifiable wireless signal (IWS) sources. In some examples the mobile device further comprises: a display configured to present thereon a user interface to a user of the mobile device, the user interface offering the generated list of URLs to the one or more of the selected websites; a user-input system operatively associated with the user interface, the user-input system being configured to obtain input from a user that indicates the user's choice of one or more of the selected websites to access.

Examples provide a method of management of lists of uniform resource locators (URLs) for a mobile device, the method comprising: determining a location of a mobile device; selecting one or more websites that are associated with the determined location; generating a list of URLs to the one or more of the selected websites. In some examples the method further comprises determining contextual factors of the mobile device, wherein the selecting is based, at least in part, upon the determined contextual factors. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, crowd-sourced history of website usage at or near the determined location, identification of type of the determined location, and identification of the type of event happening at the location. In some examples the method further comprises designating a group of web pages to be part of at least one of the selected websites. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, geo-location, and a logical location. 
In some examples the determining of the location is based, at least in part, on geo-location information obtained from a global positioning system (GPS). In some examples the determining of the location is based, at least in part, on location information obtained from one or more ambient identifiable wireless signal (IWS) sources. In some examples the selecting includes: querying a database to find a list of websites that are associated with the determined location; choosing one or more websites from the list of websites found by the query. In some examples the method further comprises accessing the database via a communications network. In some examples the database includes crowd-sourced information about websites. In some examples the database includes crowd-sourced information about websites, wherein such information is selected from a group consisting of usage at or near locations and user-supplied ratings.

Examples provide one or more computer-readable media with processor-executable instructions stored thereon which when executed by one or more processors cause performance of operations comprising: determining a location of a mobile device; determining contextual factors of the mobile device; selecting one or more websites that are associated with the determined location and with one or more determined contextual factors; generating a list of uniform resource locators ("URLs") to the one or more of the selected websites. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, crowd-sourced history of website usage at or near the determined location, identification of type of the determined location, and identification of the type of event happening at the location. 
In some examples the operations further comprise designating a group of web pages to be part of at least one of the selected websites. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, geo-location, and a logical location.

Examples provide a method comprising: determining a location of a mobile device; determining contextual factors of the mobile device; tracking usage of one or more websites while at the determined location; generating an association between the determined location, the determined contextual factors, and the one or more tracked websites; facilitating storage of the association in a database. In some examples the contextual factors are selected from a group consisting of mode of travel of a user of the mobile device, crowd-sourced ratings of websites, personal history of website usage at or near the determined location, personal history of website usage en route to the determined location, crowd-sourced history of website usage at or near the determined location, identification of type of the determined location, and identification of the type of event happening at the location. In some examples the determining of the contextual factors includes determining usage of one or more websites of the mobile device while at or near the determined location. 
In some examples the usage being determined for a particular website is selected from a group consisting of whether the particular website is used while at or near the determined location, how much or how long the particular website is used while at or near the determined location, whether the particular website is initiated while at or near the determined location, whether the particular website is active while at or near the determined location, whether the particular website is inactive while at or near the determined location, whether the particular website is deactivated while at or near the determined location, whether the particular website is installed while at or near the determined location, whether the particular website is uninstalled while at or near the determined location, and any combination thereof. In some examples the determined location of the mobile device is selected from a group consisting of a physical location, geo-location, and a logical location.
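The URL-list-manager behavior described in the examples above (determine a location, select websites associated with that location, filter by a contextual factor, and generate a URL list) can be sketched in a few lines. This is an illustrative model only, not an implementation from the disclosure: the `Site` record, the 1 km radius, and the use of crowd-sourced rating as the sole contextual factor are assumptions chosen for the sketch, and a real system would query a networked database rather than an in-memory list.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Site:
    url: str
    lat: float
    lon: float
    rating: float  # crowd-sourced rating, 0..5 (assumed contextual factor)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two geo-locations, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def generate_url_list(db, lat, lon, radius_km=1.0, min_rating=3.0):
    """Select websites associated with the determined location (within
    radius_km) and filter on a contextual factor (crowd-sourced rating)."""
    nearby = [s for s in db if haversine_km(lat, lon, s.lat, s.lon) <= radius_km]
    chosen = [s for s in nearby if s.rating >= min_rating]
    return [s.url for s in sorted(chosen, key=lambda s: -s.rating)]

# Hypothetical database of location-associated websites.
db = [
    Site("https://museum.example/exhibits", 48.8606, 2.3376, 4.7),
    Site("https://cafe.example/menu", 48.8610, 2.3380, 2.1),
    Site("https://faraway.example", 40.7128, -74.0060, 5.0),
]
print(generate_url_list(db, 48.8607, 2.3377))  # only the nearby, well-rated site
```

A logical location (e.g., "at a museum") or other contextual factors such as mode of travel would simply add further predicates to the `chosen` filter.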
A processing system includes a processor to construct an input message comprising a plurality of padding bits and a hardware accelerator, communicatively coupled to the processor, comprising a first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash based on the input message, wherein the hardware accelerator comprises a first data path coupled between a first reference node and a first input node of the first plurality of circuits to feed a first padding bit of the plurality of padding bits to the first input node.
CLAIMS

What is claimed is:

1. A processing system comprising: a processor to construct an input message comprising a plurality of padding bits; and a hardware accelerator, communicatively coupled to the processor, comprising a first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash based on the input message, wherein the hardware accelerator comprises a first data path coupled between a first reference node and a first input node of the first plurality of circuits to feed a first padding bit of the plurality of padding bits to the first input node.

2. The processing system of claim 1, wherein the hardware accelerator comprises a second plurality of circuits to perform a stage-2 SHA hash, and a second data path coupled between a second reference node and a second input node of the second plurality of circuits to feed a second padding bit of the plurality of padding bits to the second input node.

3. The processing system of any of claims 1 and 2, wherein the first plurality of circuits is to perform a first plurality of rounds of compression on a first plurality of state data associated with the stage-1 SHA hash, and the second plurality of circuits is to perform a second plurality of rounds of compression on a second plurality of state data associated with the stage-2 SHA hash, wherein the hardware accelerator comprises a plurality of registers to store the second plurality of state data, and wherein the hardware accelerator comprises a third data path coupled between a third reference node supplying an initial value and at least one of the plurality of registers.

4. The processing system of claim 1, further comprising: a clock gate circuit to convert a system clock to a gated clock and to supply the gated clock to the first plurality of circuits, wherein the gated clock is to: enable rounds 0 through 2 of the first plurality of rounds of compression; and disable the rounds 0 through 2 of the first plurality of rounds of compression.

5. 
The processing system of claim 1, wherein the input message comprises a nonce, and wherein the hardware accelerator comprises a plurality of data paths to feed bits of the nonce to circuits to perform a round 3 of the first plurality of rounds of compression.

6. The processing system of any of claims 1 and 5, wherein responsive to an increment of the nonce, the hardware accelerator is to increment a same amount to at least one state data associated with the round 3 of the first plurality of rounds of compression.

7. The processing system of claim 6, wherein the hardware accelerator is to subtract a constant value from the at least one state data in rounds 4 through 6 of the first plurality of rounds of compression.

8. The processing system of claim 7, wherein the hardware accelerator is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of the nonce in Bitcoin mining, and wherein responsive to determining that the nonce is one of valid or invalid, the processor is to increment a value of the nonce to generate a new input message.

9. The processing system of claim 1, wherein the first data path comprises a hardwire coupled between the first reference node and the first input node, and wherein the first reference node supplies a fixed reference value.

10. An application specific integrated circuit (ASIC) comprising: a first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash based on an input message comprising a plurality of padding bits; and a data path coupled between a first reference node and a first input node of the first plurality of circuits to feed a first padding bit of the plurality of padding bits to the first input node.

11. 
The ASIC of claim 10, wherein the ASIC comprises a second plurality of circuits to perform a stage-2 SHA hash; and a second data path coupled between a second reference node and a second input node of the second plurality of circuits to feed a second padding bit of the plurality of padding bits to the second input node.

12. The ASIC of any of claims 10 and 11, wherein the first plurality of circuits is to perform a first plurality of rounds of compression on a first plurality of state data associated with the stage-1 SHA hash, and the second plurality of circuits is to perform a second plurality of rounds of compression on a second plurality of state data associated with the stage-2 SHA hash, wherein the ASIC comprises a plurality of registers to store the second plurality of state data, and wherein the ASIC comprises a third data path coupled between a third reference node supplying an initial value and at least one of the plurality of registers.

13. The ASIC of claim 10, further comprising: a clock gate circuit to convert a system clock to a gated clock and to supply the gated clock to the first plurality of circuits, wherein the gated clock is to: enable rounds 0 through 2 of the first plurality of rounds of compression; and disable the rounds 0 through 2 of the first plurality of rounds of compression.

14. The ASIC of claim 10, wherein the input message comprises a nonce, and wherein the ASIC comprises a plurality of data paths to feed bits of the nonce to circuits to perform a round 3 of the first plurality of rounds of compression.

15. The ASIC of any of claims 10 and 14, wherein responsive to an increment of the nonce, the ASIC is to increment a same amount to at least one state data associated with the round 3 of the first plurality of rounds of compression.

16. The ASIC of claim 15, wherein the ASIC is to subtract a constant value from the at least one state data in rounds 4 through 6 of the first plurality of rounds of compression.

17. 
The ASIC of claim 16, wherein the ASIC is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of the nonce in Bitcoin mining, and wherein responsive to determining that the nonce is one of valid or invalid, the processor is to increment a value of the nonce to generate a new input message.

18. The ASIC of claim 17, wherein the first data path comprises a hardwire coupled between the first reference node and the first input node, and wherein the first reference node supplies a fixed reference value.

19. A method comprising: receiving, by a hardware accelerator, an input message comprising a first padding bit; feeding, using a first data path coupled between a first reference node and a first input node of a first plurality of circuits, the first padding bit to the first input node of the first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash; and performing, by the hardware accelerator, the stage-1 SHA hash based on the input message.

20. The method of claim 19, further comprising: providing, using a second data path coupled between a second reference node and a second input node of a second plurality of circuits, a second padding bit to the second input node of the second plurality of circuits to perform a stage-2 SHA hash, wherein the hardware accelerator is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of a nonce in Bitcoin mining.

21. An apparatus comprising: means for performing the method of any of claims 19 and 20.

22. 
A machine-readable non-transitory medium having stored thereon program code that, when executed by a processor, performs operations comprising: receiving, by a hardware accelerator, an input message comprising a first padding bit; feeding, using a first data path coupled between a first reference node and a first input node of a first plurality of circuits, the first padding bit to the first input node of the first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash; and performing, by the hardware accelerator, the stage-1 SHA hash based on the input message.

23. The machine-readable non-transitory medium of claim 22, wherein the operations further comprise: providing, using a second data path coupled between a second reference node and a second input node of a second plurality of circuits, a second padding bit to the second input node of the second plurality of circuits to perform a stage-2 SHA hash, wherein the hardware accelerator is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of a nonce in Bitcoin mining.
OPTIMIZED SHA-256 DATAPATH FOR ENERGY-EFFICIENT HIGH-PERFORMANCE BITCOIN MINING

TECHNICAL FIELD

[0001] The present disclosure relates to hardware accelerators and, more specifically, to a processing system including a hardware accelerator implementing the SHA-256 hash using optimized data paths.

BACKGROUND

[0002] Bitcoin is a type of digital currency used in peer-to-peer transactions. The use of Bitcoin in transactions may eliminate the need for intermediate financial institutes because Bitcoin may enforce authenticity and user anonymity by employing digital signatures. Bitcoin resolves the "double spending" problem (namely, using the same Bitcoin more than once by a same entity in different transactions) using block chaining, wherein a public ledger records all the transactions that occur within the Bitcoin currency system. Every block added to the block chain validates a new set of transactions by compressing a 1024-bit message which includes a cryptographic root (e.g., the Merkle root) of the transactions along with bits representing other information such as, for example, a time stamp associated with the transaction, a version number, a target, the hash value of the last block in the block chain, and a nonce. The process of validating transactions and generating new blocks of the block chain is commonly referred to as Bitcoin mining.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.

[0004] Figure 1 illustrates a processing system to perform Bitcoin mining by employing energy-efficient hardware accelerators including SHA-256 engines according to an embodiment of the present disclosure. 
[0005] Figure 2 illustrates a process to hash a 1024-bit message into a hash value using three stages of SHA-256 hash in Bitcoin mining.

[0006] Figures 3A-3C illustrate optimized stage-1 SHA-256 and stage-2 SHA-256 engines according to embodiments of the present disclosure.

[0007] Figure 4 is a block diagram of a method to use hardwired bits to perform SHA-256 in Bitcoin mining according to an embodiment of the present disclosure.

[0008] Figure 5A is a block diagram illustrating a micro-architecture for a processor including a heterogeneous core in which one embodiment of the disclosure may be used.

[0009] Figure 5B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented according to at least one embodiment of the disclosure.

[0010] Figure 6 illustrates a block diagram of the micro-architecture for a processor that includes logic in accordance with one embodiment of the disclosure.

[0011] Figure 7 is a block diagram illustrating a system in which an embodiment of the disclosure may be used.

[0012] Figure 8 is a block diagram of a system in which an embodiment of the disclosure may operate.

[0013] Figure 9 is a block diagram of a system in which an embodiment of the disclosure may operate.

[0014] Figure 10 is a block diagram of a System-on-a-Chip (SoC) in accordance with an embodiment of the present disclosure.

[0015] Figure 11 is a block diagram of an embodiment of an SoC design in accordance with the present disclosure.

[0016] Figure 12 illustrates a block diagram of one embodiment of a computer system.

DETAILED DESCRIPTION

[0017] The reward for a successful Bitcoin mining is the generation of a certain number of new Bitcoins (e.g., 25 Bitcoins) and the service fee associated with the transactions validated during the mining process. Each Bitcoin may be exchanged for currencies in circulation (e.g., U.S. dollars) or used in transactions with merchants that accept Bitcoins. 
Bitcoin mining may be associated with certain costs such as, for example, the computing resources consumed to perform Bitcoin mining operations. The most expensive operation in Bitcoin mining involves the computationally-intensive task of determining the validity of a 32-bit nonce. The nonce is a number or a string of bits that is used only once. A 32-bit nonce is a number (or a string of bits) that is represented by 32 bits. The 32-bit nonce may be part of a 1024-bit input message that may also include the Merkle root, the hash of the last chain block, and other parameters. The 1024-bit message may be hashed using three stages of a secure hash algorithm (e.g., SHA-256) to produce a 256-bit hash value that may be compared to a target value also contained in the input message to determine the validity of the nonce. The operations to calculate the hash value are commonly performed on hardware accelerators (e.g., the SHA-256 hash may be performed on application-specific integrated circuits (ASICs)) and may consume a lot of power. The power consumption by the hardware accelerators is the recurring cost of Bitcoin mining. Embodiments of the present disclosure provide technical solutions including hardware accelerators to perform energy-efficient Bitcoin mining using an energy-efficient clock system.

[0018] Dedicated Bitcoin mining ASICs are used to implement multiple SHA-256 engines that may deliver a performance of thousands of hashes per second while consuming greater than 200 W of power. Embodiments of the present disclosure employ micro-architectural optimizations including selectively hardwiring certain parameters in the Bitcoin mining computation. The hardwiring of these parameters eliminates the need for recursive rounds of computation of these parameters and reduces the overall circuit area and power consumption by about 15%.

[0019] Bitcoin mining operations include operations to generate a 256-bit hash value from a 1024-bit message.
The operations are part of a cryptographic hash that is one-way (very hard to reverse) and collision-resistant. The hash operations may include two stages (stage-0 and stage-1) of SHA-256 hash to compress a 1024-bit input message into intermediate results, followed by another round (stage-2) of SHA-256 hash applied to the intermediate results generated by the first two stages of SHA-256 hash. The 1024-bit input message to the three stages of SHA-256 hash contains header information, a 32-bit nonce, and padding bits. The padding bits may include 1s and 0s that are generated using a padding generation formula. The 32-bit nonce is incremented in every cycle of the Bitcoin mining process to generate an updated input message, where generating a new block of the block chain takes approximately 10 minutes. A valid nonce is identified if the final hash value contains a certain number of leading zeros. A miner may use the valid nonce as a proof of a successful Bitcoin mining.

[0020] The software application of Bitcoin mining may be implemented on a processing system including processors executing Bitcoin mining applications and dedicated hardware accelerators such as, for example, ASICs containing clusters of SHA engines that run in parallel to deliver high-performance SHA-256 hash operations. The clusters of SHA engines may consume a lot of power (e.g., at a rate greater than 200 W). Embodiments of the present disclosure include energy-efficient ASIC-based SHA engines that consume less power for Bitcoin mining operations.

[0021] Figure 1 illustrates a processing system 100 to perform Bitcoin mining by employing energy-efficient hardware accelerators including SHA-256 engines according to an embodiment of the present disclosure. As shown in Figure 1, processing system 100 (e.g., a system-on-a-chip (SoC)) may include a processor 102 and ASICs 104 communicatively coupled to processor 102 via a bus 106.
Processor 102 may be a hardware processing device such as, for example, a central processing unit (CPU) or a graphics processing unit (GPU) that includes one or more processing cores (not shown) to execute software applications. Processor 102 may execute a Bitcoin mining application 108 which may include operations that employ multiple stages of SHA-256 hash to compress a 1024-bit input message. For example, Bitcoin mining application 108 may delegate the calculation of the three stages of SHA-256 hash to hardware accelerators such as, for example, SHA-256 engines 110 to perform stage-0 hash, SHA-256 engines 112 to perform stage-1 hash, and SHA-256 engines 114 to perform stage-2 hash. These SHA-256 engines are implemented on one or more ASICs 104. Each one of ASICs 104 may contain multiple SHA-256 engines (e.g., > 1000) that run in parallel. Embodiments of the present disclosure may take advantage of characteristics of different stages of SHA-256 hash to implement them in an energy-efficient manner to reduce power consumption in Bitcoin mining.

[0022] The three stages of SHA-256 hash engines 110, 112, 114 are used to convert a 1024-bit input message into a 256-bit hash output that is compared to a 256-bit target value to determine whether a 32-bit nonce in the input message is a valid proof of successful Bitcoin mining. Each one of the SHA-256 hash engines 110, 112, 114 may receive a 512-bit input and include 64 rounds of calculation that use the 512-bit input to compress eight 32-bit states (A, B, C, D, E, F, G, H) stored in eight registers (a, b, c, d, e, f, g, h). Each round of the compression is achieved by applying compression functions to the eight states.

[0023] In some implementations of ASICs 104, the input message, state data, and input values to multi-stage SHA-256 engines 110, 112, 114 are stored in registers (e.g., an array of flip-flop circuits or level-sensitive latches).
However, certain portions of the input message, state data, and input values to multi-stage SHA-256 engines 110, 112, 114 may be fixed to constant data values during SHA-256 hashes or during certain rounds of computation in the SHA-256 hash. Rather than providing these constants using registers, embodiments of the present disclosure hardwire these constant data values to the circuits performing SHA-256 hash, thus reducing the energy consumption compared to providing these constants using registers that may be enabled by clock signals. In one embodiment, certain data paths of stage-1 SHA-256 engines and stage-2 SHA-256 engines are identified to be associated with constant parameters and are hardwired to improve the efficiency of power consumption. As shown in Figure 1, these stage-1 SHA-256 engines and stage-2 SHA-256 engines are referred to as optimized stage-1 SHA-256 engines 112 and optimized stage-2 SHA-256 engines 114 implemented on ASICs 104.

[0024] Figure 2 illustrates a process 200 to hash a 1024-bit message into a 256-bit hash value using three stages of SHA-256 hash during Bitcoin mining. In SHA-256 hash, the hash value may be stored in eight state registers (a, b, c, d, e, f, g, h) associated with each SHA-256 engine, where each of the state registers is a hardware register that stores a 32-bit word referred to as a state (represented by A, B, C, D, E, F, G, H). The initial values of these states can be 32-bit constants. Alternatively, the state registers may initially store a hash value calculated from a previous iteration of the hashing process. The states (A, B, C, D, E, F, G, H) are updated during SHA-256 hash calculation to generate a 256-bit hash value as the output. SHA-256 hash consumes a 512-bit message block and compresses it into a 256-bit hash (A-H) stored in state registers (a-h).
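One compression round of SHA-256, as applied to the states (A-H), can be sketched in software as follows. This is a minimal model following the FIPS 180-4 definition; the names are illustrative, and it models the algorithm rather than the disclosed circuits:

```python
MASK = 0xFFFFFFFF  # all arithmetic is modulo 2**32

def rotr(x: int, n: int) -> int:
    """32-bit rotate right."""
    return ((x >> n) | (x << (32 - n))) & MASK

def sha256_round(state, w_j: int, k_j: int):
    """One of the 64 SHA-256 compression rounds (FIPS 180-4)."""
    a, b, c, d, e, f, g, h = state
    s1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)    # Sigma-1(E)
    ch = (e & f) ^ (~e & g)                        # Ch(E, F, G)
    t1 = (h + s1 + ch + k_j + w_j) & MASK
    s0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)    # Sigma-0(A)
    maj = (a & b) ^ (a & c) ^ (b & c)              # Maj(A, B, C)
    t2 = (s0 + maj) & MASK
    # Only A and E receive newly mixed values; B-D and F-H are shifted
    # copies of the previous state, which is what later makes hardwired
    # data paths between rounds possible.
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)
```

The shifted-copy structure is visible in the return value: the new (B, C, D) are the old (A, B, C), and the new (F, G, H) are the old (E, F, G).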
The Bitcoin mining process employs three stages of SHA-256 hash to convert the 1024-bit input message to a 256-bit hash value that may be compared to a target value to determine whether a Bitcoin has been identified.

[0025] The SHA-256 hash may include 64 rounds (identified as round 0, 1, ..., 63) of applications of compression functions to the states stored in state registers. The compression function employs a 512-bit input value to manipulate the contents stored in registers (a-h). Table 1 illustrates the 64 rounds of the SHA-256 operations as applied to the states stored in registers (a-h) to generate a hash value that can be used to determine if a valid nonce is found as a proof of the identification of a Bitcoin. In Table 1, the logic functions (e.g., Ch, Maj, Σ0, and Σ1) are compression functions defined according to the SHA-256 specification, each register (a-h) is initialized with a 32-bit initial value, and Wj, j = 0, ..., 63, are 32-bit values derived from a 512-bit message which can be part of the 1024-bit input message of the Bitcoin mining.

[0026] As shown in Figure 2, the process of the Bitcoin mining 200 starts with a 1024-bit message 218. The 1024-bit input message 218 may be composed of header information, a nonce 212, and padding bits 214 that bring input message 218 to a length of 1024 bits. The header information may include a 32-bit version number 202, a 256-bit hash value 204 generated by the immediately preceding block in the block chain of the Bitcoin public ledger, a 256-bit Merkle root 206 of the transactions, a 32-bit time stamp 208, and a 256-bit target value 210. Version number 202 is an identifier associated with the version of the block chain. Hash value 204 is the hashing result from the immediately preceding block in the block chain recorded in the public ledger. Merkle root 206 is a 256-bit hash based on all of the transactions in the block. Time stamp 208 represents the current time when the Bitcoin mining process starts.
Target value 210 represents a threshold value that the resulting hash value generated by the Bitcoin mining is compared to. If the resulting hash value ("hash out") is smaller than the target value 210, the nonce 212 in the input message 218 is identified as a valid nonce that can be used as the proof of the identification of a Bitcoin. If the final result is no less than the target value 210, the nonce 212 is determined to be invalid, or the Bitcoin mining failed to find a Bitcoin. The value of nonce 212 may be updated (e.g., incremented by one), and the Bitcoin mining process is repeated to determine the validity of the updated nonce.

[0027] In one embodiment, instead of comparing the final hashing result with the target value, the Bitcoin mining application may determine whether the hash out has a minimum number of leading zeros. The minimum number of leading zeros may ensure that the final hashing value is smaller than the target value. The target value (or the number of leading zeros) may be changed to adjust the complexity of Bitcoin mining: decreasing the target value decreases the probability of finding a valid nonce and hence increases the overall search space to generate a new block in the block chain. By modifying the target value 210, the complexity of the Bitcoin mining is adjusted to ensure that the time used to find a valid nonce is relatively constant (approximately 10 minutes). For a given header, the Bitcoin mining application may sweep through the search space of 2^32 possibilities to find a valid nonce. The Bitcoin mining process includes a series of mining iterations to sweep through these possible nonce values. The header information is kept the same through these mining iterations while the nonce 212 is incremented by one.

[0028] Each Bitcoin mining calculation to find a valid nonce may include three stages (stage-0 through stage-2) of SHA-256 hash calculations.
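The equivalence between the leading-zero test and the hash-below-target comparison described above can be sketched with a small illustrative helper (assuming, for simplicity, that the digest is read as a big-endian integer):

```python
def meets_target(digest: bytes, n_zero_bits: int) -> bool:
    """A 256-bit digest has at least n leading zero bits exactly when its
    integer value is below 2**(256 - n), so the leading-zero check implies
    the hash-below-target check for targets of that form."""
    return int.from_bytes(digest, "big") < (1 << (256 - n_zero_bits))
```

Each additional required zero bit halves the fraction of digests that pass, which is how adjusting the target (or the required zero count) tunes the expected mining time.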
Referring to Figure 2, at stage-0 SHA-256 hash, the states (A, B, C, D, E, F, G, H) stored in state registers (a, b, c, d, e, f, g, h) may be initialized with eight 32-bit constants. Stage-0 SHA-256 hash may receive a 512-bit input message including the 32-bit version number 202, the 256-bit hash value 204 from the last block in the block chain, and a portion (the first 224 bits) of Merkle root 206. Stage-0 SHA-256 hash may produce a first 256-bit intermediate hash value. The first intermediate hash value is then employed to initialize the states (A-H) of the stage-1 SHA-256 hash. The 512-bit input message to the stage-1 SHA-256 hash may include the remaining portion (32 bits) of the Merkle root 206, the 32-bit time stamp 208, the 256-bit target value 210, the 32-bit nonce 212, and 128 padding bits 214. Stage-1 SHA-256 hash may produce a second 256-bit intermediate hash value.

[0029] At the stage-2 SHA-256 hash, the state registers (a, b, c, d, e, f, g, h) of the stage-2 SHA-256 hash may be set with the 256-bit constant which is identical to the constant used in stage-0 SHA-256 hash. The 512-bit input message to the stage-2 SHA-256 hash may include the second 256-bit intermediate hash result (the stage-1 SHA-256 hash output) combined with 256 padding bits to make a 512-bit input message to the stage-2 SHA-256 hash. The stage-2 SHA-256 hash may produce a third 256-bit hash value as the hash out for the three stages of SHA-256 hash. The Bitcoin mining application may then determine whether the hash out is smaller than the target value 210. If the hash out is smaller than the target value 210, the nonce 212 in the input message is identified as a valid nonce. If the hash out is no less than the target value 210, the nonce 212 is an invalid nonce.
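In the Bitcoin protocol, the three hash stages together compute SHA-256 applied twice to the 80-byte block header: stage-0 and stage-1 consume the two 512-bit blocks of the padded header, and stage-2 hashes the resulting 256-bit digest. A minimal software model using Python's hashlib (function names are illustrative; hashlib applies internally the padding bits that the hardware receives as hardwired constants):

```python
import hashlib

def hash_header(header_76: bytes, nonce: int) -> bytes:
    """Double SHA-256 over a 76-byte header prefix plus a 32-bit nonce."""
    header = header_76 + nonce.to_bytes(4, "little")   # 80-byte header
    inner = hashlib.sha256(header).digest()            # stage-0 + stage-1
    return hashlib.sha256(inner).digest()              # stage-2

def nonce_is_valid(header_76: bytes, nonce: int, target: int) -> bool:
    # Bitcoin compares the final digest, read as a little-endian integer,
    # against the target value.
    return int.from_bytes(hash_header(header_76, nonce), "little") < target
```

A hardware pipeline evaluates this same function once per clock cycle for consecutive nonce values; the software model is useful mainly for checking a candidate nonce against a known header.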
After the determination, nonce 212 is incremented to repeat the process to determine the validity of the updated nonce 212 using the process shown in Figure 2.

[0030] Since stage-0 SHA-256 hash involves only part of the header information but not the nonce itself, the calculation of stage-0 SHA-256 does not present an opportunity for Bitcoin-specific optimization. By comparison, both stage-1 and stage-2 SHA-256 hash calculations receive input messages relating to the nonce 212 and hence present opportunities for Bitcoin mining optimizations.

[0031] Figure 3A illustrates optimized stage-1 SHA-256 and stage-2 SHA-256 engines according to an embodiment of the present disclosure. The input value to the stage-1 SHA-256 engine includes the 32 least significant bits (LSBs) of the Merkle root 206, the 32-bit time stamp 208, the 32-bit target value 210, the 32-bit nonce 212, and the padding bits 214. The 32 LSBs of the Merkle root 206, the 32-bit time stamp 208, the 32-bit target value 210, and the 32-bit nonce 212 may vary during the nonce validation process. The padding bits 214, however, are constant through different iterations (including stage-0 through stage-2 SHA-256 hashes) to validate different nonces. Thus, the padding bits, once chosen, can be provided by data paths that are hardwired to constant values.

[0032] In one embodiment, a constant value (or fixed value) is represented by a sequence of bits, where each bit can be "1" or "0." The "1" bits may be provided by a data path hardwired to a high voltage state, and inverters (or NOT gates) can be used to convert "1" bits to "0" bits. Thus, a sequence of constant bits can be provided by data paths including inverters and a reference voltage. In one embodiment, when the bits representing a value are provided by a hardwired data path, these bits are not stored in registers, thus reducing the circuit area and the power consumption needed to provide the same value to the SHA-256 engines.
Hardwiring the constants may further help optimize the circuit logic consuming the constants by reducing logic area and power.

[0033] Similarly, the input value to the stage-2 SHA-256 engines also includes 256 padding bits that are fixed through different iterations to validate different nonces. These 256 padding bits may also be provided by a data path hardwired to a reference voltage, or hardwired to the reference voltage through an inverter.

[0034] As shown in Table 1, SHA-256 includes 64 rounds of compression calculation that include applying compression functions to the states (A, B, C, D, E, F, G, H) and the 512-bit input value. The states (A, B, C, D, E, F, G, H) are stored in registers (a, b, c, d, e, f, g, h) and are calculated through the 64 rounds of compression calculation. The 512-bit input value is split into 16 32-bit words (Wj, j = 0, ..., 15) that are employed as parameters of the compression functions in the first 16 rounds of compression calculation. Since the 512-bit input value to stage-1 SHA-256 engines includes fixed bits (e.g., the padding bits), the fixed bits may be provided using hardwired data paths to the compression calculation circuits performing the rounds that receive the fixed bits. For example, rounds 4 through 15 of stage-1 SHA-256 may receive hardwired padding bits, and rounds 8 through 15 of stage-2 SHA-256 may also receive hardwired padding bits.

[0035] Further, the initial values of the states (A, B, C, D, E, F, G, H) of stage-2 SHA-256 engines are fixed to constant values that do not change through different iterations to validate different nonces. Thus, the initial values provided to registers (a, b, c, d, e, f, g, h) can also be bits that are provided by hardwired data paths. The rounds of SHA-256 calculation may also require a 32-bit constant word that is unique to each round.
In a fully unrolled design with dedicated hardware for each round, these constant words can also be hardwired to optimize the circuits that perform these rounds of calculation.

[0036] Although Merkle root 206, time stamp 208, and target value 210 may be updated when a Bitcoin mining process fails to identify a valid nonce according to the Bitcoin mining protocols (e.g., spent more than 10 minutes without finding a valid nonce, or did not find a valid nonce in the 2^32 nonce space), Merkle root 206, time stamp 208, and target value 210 commonly do not change during the search for a valid nonce in the 2^32 nonce space. Thus, the 96 most significant bits (MSBs) of the input value to stage-1 SHA-256 engines may remain constant through the iterations during the search in the space of 2^32 nonces. As shown in Table 1, rounds 0 through 2 of compression calculation within SHA-256 hash are based on the 96 MSBs of the input value and do not involve the value of nonce 212. Therefore, rounds 0 through 2 can be calculated once and reused through the search for a valid nonce until any one of Merkle root 206, time stamp 208, and target value 210 is updated.

[0037] Figure 3B illustrates further optimized stage-1 SHA-256 and stage-2 SHA-256 engines according to an embodiment of the present disclosure. As shown in Figure 3B, the 32-bit nonce 212 is directly provided to the circuits 306 that perform round 3 (and subsequently, rounds 4-63), bypassing rounds 0-2. The circuits 304 that perform rounds 0-2 may receive the 96 MSBs, including the 32 LSBs of the Merkle root, the 32-bit time stamp, and the 32-bit target value, and perform the calculation only once at the beginning of the process to identify a valid nonce. In one embodiment, a clock gate 302 is used to perform the computation of rounds 0-2 once before disabling further computation of rounds 0-2.
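The compute-once, reuse-every-iteration idea behind gating rounds 0-2 has a familiar software analog: the hash midstate after the nonce-independent portion of the message is saved once and copied for every candidate nonce. A sketch using hashlib's copy() (function and variable names are illustrative; this reuses the constant first 512-bit block rather than modeling the disclosed round-level clock gating):

```python
import hashlib

def search_nonces(first_block: bytes, tail_12: bytes, target: int, limit: int):
    """Try nonces 0..limit-1, reusing the hash state of the constant first
    512-bit block instead of recomputing it for every candidate."""
    base = hashlib.sha256(first_block)   # nonce-independent work, done once
    for nonce in range(limit):
        h = base.copy()                  # restart from the saved midstate
        h.update(tail_12 + nonce.to_bytes(4, "little"))
        digest = hashlib.sha256(h.digest()).digest()
        if int.from_bytes(digest, "little") < target:
            return nonce
    return None
```

In either form, the saving is the same: work that does not depend on the nonce is paid for once per header rather than once per candidate.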
Clock gate 302 may include circuitry that receives the system clock (CLK) and provides the enabling signals to registers (e.g., flip-flops or level-sensitive latches) at the start of the Bitcoin mining process. Subsequent to the calculation using the 96 MSBs (e.g., after the first clock cycle), clock gate 302 may provide a disabling signal to the registers in circuits 304 (e.g., registers (a, b, c, d, e, f, g, h) associated with rounds 0-2) so that these registers maintain the same data during further calculation, including rounds 3-63. Thus, the computation of rounds 0-2 is clock-gated to reduce the power consumption.

[0038] Further, as shown in Table 1, the registers (a, b, c, d, e, f, g, h) storing states (A, B, C, D, E, F, G, H) in round 2 provide constant values to registers (a, b, c, d, e, f, g, h) associated with round 3, registers (b, c, d, f, g, h) associated with round 4, registers (c, d, g, h) in round 5, and registers (d, h) in round 6. Thus, these registers associated with rounds 3-6 may be associated with data paths that are hardwired to registers (a, b, c, d, e, f, g, h) associated with round 2, eliminating the need for these registers to store states during computation. This may further reduce the circuit area used for rounds 0-6 and the corresponding power consumption.

[0039] The variable portion of the input values to round 3 is the incrementing nonce. The nonce is incremented after the completion of the stage-0 through stage-2 SHA-256 hashes during an iteration of Bitcoin mining. When the SHA-256 engines are implemented as a pipeline that validates one nonce per clock cycle, each increment of the nonce may coincide with one clock cycle. States A and E stored in registers (a, e) are affected by the incrementing nonce every clock cycle. Thus, states A and E can be computed once and then incremented every clock cycle to account for the increment of the nonce.
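The observation that only states A and E change, each by exactly +1, when the nonce word supplied to round 3 increments can be verified with a small model of one compression round. This is an illustrative sketch, not the disclosed circuit, and the +1 relation ignores the rare 32-bit carry wraparound:

```python
M = 0xFFFFFFFF

def rotr32(x, n):
    # 32-bit rotate right
    return ((x >> n) | (x << (32 - n))) & M

def one_round(st, w, k):
    """One SHA-256 compression round (FIPS 180-4)."""
    a, b, c, d, e, f, g, h = st
    t1 = (h + (rotr32(e, 6) ^ rotr32(e, 11) ^ rotr32(e, 25))
            + ((e & f) ^ (~e & g)) + k + w) & M
    t2 = ((rotr32(a, 2) ^ rotr32(a, 13) ^ rotr32(a, 22))
            + ((a & b) ^ (a & c) ^ (b & c))) & M
    return ((t1 + t2) & M, a, b, c, (d + t1) & M, e, f, g)

# Hold the round-2 state constant and bump only the message word W3 (the
# nonce word): the new A and E each grow by exactly 1; all other states
# are unchanged.
st2 = tuple(range(8))                  # arbitrary fixed round-2 state
s_a = one_round(st2, 100, 7)
s_b = one_round(st2, 101, 7)
assert s_b[0] == (s_a[0] + 1) & M and s_b[4] == (s_a[4] + 1) & M
assert s_b[1:4] == s_a[1:4] and s_b[5:] == s_a[5:]
```

This holds because the nonce word enters the round only through t1 (so A = t1 + t2 and E = d + t1 each inherit the +1), while b-d and f-h are plain copies of the unchanged previous state.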
Incrementing states A and E in this manner may further reduce the need for a complete computation of round 3 in every clock cycle and reduce the corresponding energy consumption for round 3.

[0040] Figure 3C illustrates further optimized stage-1 SHA-256 and stage-2 SHA-256 engines according to an embodiment of the present disclosure. As shown in Figure 3C, in round 3, registers (b, c, d, f, g, h) may be associated with data paths hardwired to the corresponding registers in round 2. Registers (a, e) may be incremented every clock cycle to match the increment of the nonce. In subsequent rounds, certain states may be directly derived from states A and E of round 3. For example, states (B, E) in round 4, states (B, C, F, G) in round 5, and states (B, C, D, F, G, H) in round 6 may be computed by subtracting a constant value from states (A, E) in round 3. Since these states in rounds 4-6 can be derived from states (A, E) in round 3, the need to store these states is eliminated. The circuit area for rounds 4-6 and the corresponding power consumption may be further reduced.

[0041] Figure 4 is a block diagram of a method 400 to use hardwired bits to perform SHA-256 in Bitcoin mining according to an embodiment of the present disclosure. Method 400 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device, a general-purpose computer system, or a dedicated machine), firmware, or a combination thereof. In one embodiment, method 400 may be performed, in part, by the processing logic of processor 102 and ASIC 104 as shown in Figure 1.

[0042] For simplicity of explanation, the method 400 is depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described herein.
Furthermore, not all illustrated acts may be performed to implement the method 400 in accordance with the disclosed subject matter. In addition, those skilled in the art will understand and appreciate that the method 400 could alternatively be represented as a series of interrelated states via a state diagram or events.

[0043] Referring to Figure 4, a hardware accelerator 104 may include clusters of SHA engines to perform stage-1 SHA hash for the Bitcoin mining application executing on a processor 102. At 402, the hardware accelerator may receive a 1024-bit input message which may include header information, a nonce, and a number of padding bits.

[0044] At 404, the hardware accelerator may feed, using a first data path coupled between a first reference node and a first input node of a first plurality of circuits, a first padding bit to the first input node of the first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash.

[0045] At 406, the hardware accelerator may perform the stage-1 SHA-256 hash based on the input message. The stage-1 SHA hash may be used to determine the validity of a nonce stored in the input message.

[0046] Figure 5A is a block diagram illustrating a micro-architecture for a processor 500 that implements the processing device including heterogeneous cores in accordance with one embodiment of the disclosure. Specifically, processor 500 depicts an in-order architecture core and register renaming logic, out-of-order issue/execution logic to be included in a processor according to at least one embodiment of the disclosure.

[0047] Processor 500 includes a front end unit 530 coupled to an execution engine unit 550, and both are coupled to a memory unit 570. The processor 500 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
As yet another option, processor 500 may include a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like. In one embodiment, processor 500 may be a multi-core processor or may be part of a multi-processor system.

[0048] The front end unit 530 includes a branch prediction unit 532 coupled to an instruction cache unit 534, which is coupled to an instruction translation lookaside buffer (TLB) 536, which is coupled to an instruction fetch unit 538, which is coupled to a decode unit 540. The decode unit 540 (also known as a decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decoder 540 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. The instruction cache unit 534 is further coupled to the memory unit 570. The decode unit 540 is coupled to a rename/allocator unit 552 in the execution engine unit 550.

[0049] The execution engine unit 550 includes the rename/allocator unit 552 coupled to a retirement unit 554 and a set of one or more scheduler unit(s) 556. The scheduler unit(s) 556 represents any number of different schedulers, including reservation stations (RS), central instruction window, etc. The scheduler unit(s) 556 is coupled to the physical register file(s) unit(s) 558.
Each of the physical register file(s) units 558 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. The physical register file(s) unit(s) 558 is overlapped by the retirement unit 554 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.).

[0050] In one implementation, processor 500 may be the same as processor 102 described with respect to Figure 1.

[0051] Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective. The registers are not limited to any known particular type of circuit. Various different types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. The retirement unit 554 and the physical register file(s) unit(s) 558 are coupled to the execution cluster(s) 560. The execution cluster(s) 560 includes a set of one or more execution units 562 and a set of one or more memory access units 564.
The execution units 562 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and operate on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).

[0052] While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 556, physical register file(s) unit(s) 558, and execution cluster(s) 560 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster, and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 564). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

[0053] The set of memory access units 564 is coupled to the memory unit 570, which may include a data prefetcher 580, a data TLB unit 572, a data cache unit (DCU) 574, and a level 2 (L2) cache unit 576, to name a few examples. In some embodiments, DCU 574 is also known as a first level data cache (L1 cache). The DCU 574 may handle multiple outstanding cache misses and continue to service incoming stores and loads. It also supports maintaining cache coherency. The data TLB unit 572 is a cache used to improve virtual address translation speed by mapping virtual and physical address spaces.
In one exemplary embodiment, the memory access units 564 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 572 in the memory unit 570. The L2 cache unit 576 may be coupled to one or more other levels of cache and eventually to a main memory.

[0054] In one embodiment, the data prefetcher 580 speculatively loads/prefetches data to the DCU 574 by automatically predicting which data a program is about to consume. Prefetching may refer to transferring data stored in one memory location of a memory hierarchy (e.g., lower level caches or memory) to a higher-level memory location that is closer (e.g., yields lower access latency) to the processor before the data is actually demanded by the processor. More specifically, prefetching may refer to the early retrieval of data from one of the lower level caches/memory to a data cache and/or prefetch buffer before the processor issues a demand for the specific data being returned.

[0055] The processor 500 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA).

[0056] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

[0057] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be
used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

[0058] Figure 5B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented by processing device 500 of Figure 5A according to some embodiments of the disclosure. The solid lined boxes in Figure 5B illustrate an in-order pipeline, while the dashed lined boxes illustrate a register renaming, out-of-order issue/execution pipeline. In Figure 5B, a processor pipeline 500 includes a fetch stage 502, a length decode stage 504, a decode stage 506, an allocation stage 508, a renaming stage 510, a scheduling (also known as a dispatch or issue) stage 512, a register read/memory read stage 514, an execute stage 516, a write back/memory write stage 518, an exception handling stage 522, and a commit stage 524. In some embodiments, the ordering of stages 502-524 may be different than illustrated and is not limited to the specific ordering shown in Figure 5B.

[0059] Figure 6 illustrates a block diagram of the micro-architecture for a processor 600 that includes hybrid cores in accordance with one embodiment of the disclosure. In some embodiments, an instruction in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes.
In one embodiment the in-order front end 601 is the part of the processor 600 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. [0060] The front end 601 may include several units. In one embodiment, the instruction prefetcher 626 fetches instructions from memory and feeds them to an instruction decoder 628 which in turn decodes or interprets them. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called “micro-instructions” or “micro-operations” (also called micro ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 630 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 634 for execution. When the trace cache 630 encounters a complex instruction, the microcode ROM 632 provides the uops needed to complete the operation. [0061] Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 628 accesses the microcode ROM 632 to complete the instruction. For one embodiment, an instruction can be decoded into a small number of micro ops for processing at the instruction decoder 628. In another embodiment, an instruction can be stored within the microcode ROM 632 should a number of micro-ops be needed to accomplish the operation. The trace cache 630 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the micro-code ROM 632.
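The decode flow described above can be sketched in software. This is an illustrative model only, not the disclosed hardware: simple instructions are cracked into a few uops by the decoder itself, while an instruction needing more than four uops is sequenced from a microcode ROM. The instruction names and uop tables below are invented for illustration.

```python
# Hypothetical decode tables; the four-uop threshold mirrors paragraph [0061].
DECODER_TABLE = {            # instruction -> directly decoded uops
    "add": ["uop_add"],
    "load_add": ["uop_load", "uop_add"],
}
MICROCODE_ROM = {            # complex instruction -> long uop sequence
    "rep_movs": ["uop_load", "uop_store", "uop_inc",
                 "uop_inc", "uop_dec", "uop_branch"],
}

def decode(instruction):
    """Return the uop sequence for one instruction."""
    if instruction in DECODER_TABLE and len(DECODER_TABLE[instruction]) <= 4:
        return DECODER_TABLE[instruction]
    # more than four uops (or unknown to the fast decoder): use the uop ROM
    return MICROCODE_ROM[instruction]

print(decode("add"))            # ['uop_add']
print(len(decode("rep_movs")))  # 6
```

The point of the two tables is the division of labor: the decoder handles the common, short cases at full rate, and the microcode ROM supplies long sequences only when needed.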
After the microcode ROM 632 finishes sequencing micro-ops for an instruction, the front end 601 of the machine resumes fetching micro-ops from the trace cache 630. [0062] The out-of-order execution engine 603 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logical registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 602, slow/general floating point scheduler 604, and simple floating point scheduler 606. The uop schedulers 602, 604, 606 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 602 of one embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution. [0063] Register files 608, 610 sit between the schedulers 602, 604, 606 and the execution units 612, 614, 616, 618, 620, 622, 624 in the execution block 611. There is a separate register file 608, 610 for integer and floating point operations, respectively. Each register file 608, 610 of one embodiment also includes a bypass network that can bypass or forward just completed results that have not yet been written into the register file to new dependent uops.
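The readiness rule of paragraph [0062] can be sketched as follows. This is a simplified illustration, not the disclosed scheduler: a uop dispatches only when all of its source registers have been produced, and a dispatched result immediately forwards to its dependents. The uop tuples and register names are invented.

```python
def schedule(uops, ports):
    """uops: list of (name, source_regs, dest_reg); returns dispatch order.

    Simplified model: r1 and r2 are assumed ready at the start, and the
    port count is not decremented (one dispatch per scheduler pass).
    """
    ready_regs = {"r1", "r2"}          # architectural sources assumed ready
    pending = list(uops)
    order = []
    while pending:
        for uop in pending:
            name, sources, dest = uop
            if all(s in ready_regs for s in sources) and ports > 0:
                order.append(name)
                ready_regs.add(dest)   # result forwards to dependent uops
                pending.remove(uop)
                break
        else:
            break                      # nothing ready: stall
    return order

uops = [("mul", {"r3"}, "r4"),         # waits on r3
        ("add", {"r1", "r2"}, "r3"),   # ready immediately
        ("sub", {"r4"}, "r5")]         # waits on mul
print(schedule(uops, ports=1))  # ['add', 'mul', 'sub']
```

Even though "mul" appears first in program order, "add" dispatches first because its operands are ready, which is the essence of out-of-order issue.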
The integer register file 608 and the floating point register file 610 are also capable of communicating data with each other. For one embodiment, the integer register file 608 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 610 of one embodiment has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width. [0064] The execution block 611 contains the execution units 612, 614, 616, 618, 620, 622, 624, where the instructions are actually executed. This section includes the register files 608, 610 that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 600 of one embodiment comprises a number of execution units: address generation unit (AGU) 612, AGU 614, fast ALU 616, fast ALU 618, slow ALU 620, floating point ALU 622, and floating point move unit 624. For one embodiment, the floating point execution blocks 622, 624 execute floating point, MMX, SIMD, SSE, or other operations. The floating point ALU 622 of one embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present disclosure, instructions involving a floating point value may be handled with the floating point hardware. [0065] In one embodiment, the ALU operations go to the high-speed ALU execution units 616, 618. The fast ALUs 616, 618 of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 620 as the slow ALU 620 includes integer execution hardware for long latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 612, 614.
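The dispatch rule of paragraph [0065] can be summarized as a routing table. This is a hedged sketch, not the disclosed hardware: simple integer operations go to a fast ALU with half-cycle effective latency, long-latency operations go to the slow ALU, and loads/stores go to an address generation unit. The opcode names and the slow-ALU latency value are invented for illustration.

```python
FAST_OPS = {"add", "sub", "and", "or", "xor"}       # simple integer ops
SLOW_OPS = {"mul", "shift", "flags", "branch"}      # long-latency ops
MEM_OPS = {"load", "store"}                         # address generation

def route(op):
    """Return (unit, latency_in_cycles) for one micro-op."""
    if op in FAST_OPS:
        return ("fast_alu", 0.5)   # fast ALUs 616/618: half a clock cycle
    if op in SLOW_OPS:
        return ("slow_alu", 4)     # slow ALU 620 (illustrative latency)
    if op in MEM_OPS:
        return ("agu", 1)          # AGUs 612/614
    raise ValueError(f"unknown op: {op}")

print(route("add"))   # ('fast_alu', 0.5)
print(route("mul"))   # ('slow_alu', 4)
```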
For one embodiment, the integer ALUs 616, 618, 620 are described in the context of performing integer operations on 64 bit data operands. In alternative embodiments, the ALUs 616, 618, 620 can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 622, 624 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 622, 624 can operate on 128 bits wide packed data operands in conjunction with SIMD and multimedia instructions. [0066] In one embodiment, the uop schedulers 602, 604, 606 dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 600, the processor 600 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instruction sequences for text string comparison operations. [0067] The processor 600 also includes logic to implement store address prediction for memory disambiguation according to embodiments of the disclosure. In one embodiment, the execution block 611 of processor 600 may include a store address predictor (not shown) for implementing store address prediction for memory disambiguation. [0068] The term “registers” may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer’s perspective).
However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. [0069] For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64 bits wide MMX™ registers (also referred to as ‘mm’ registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128 bits wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as “SSEx”) technology can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point data are either contained in the same register file or in different register files. [0070] Referring now to Figure 7, shown is a block diagram illustrating a system 700 in which an embodiment of the disclosure may be used.
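The packed-register idea of paragraph [0069] can be made concrete with a small model. This is an illustrative sketch only: one 64-bit 'mm'-style register holds eight 8-bit elements, and a single SIMD-style add operates on all lanes at once with per-lane wraparound (in the spirit of the MMX PADDB operation; the function names here are invented).

```python
def pack8(lanes):
    """Pack eight 8-bit values into one 64-bit integer (lane 0 at bits 0-7)."""
    assert len(lanes) == 8 and all(0 <= v < 256 for v in lanes)
    reg = 0
    for i, v in enumerate(lanes):
        reg |= v << (8 * i)
    return reg

def unpack8(reg):
    """Split a 64-bit register value back into its eight 8-bit lanes."""
    return [(reg >> (8 * i)) & 0xFF for i in range(8)]

def paddb(a, b):
    """Lane-wise 8-bit add, modulo 256 in each lane."""
    return pack8([(x + y) & 0xFF for x, y in zip(unpack8(a), unpack8(b))])

a = pack8([1, 2, 3, 4, 5, 6, 7, 250])
b = pack8([1, 1, 1, 1, 1, 1, 1, 10])
print(unpack8(paddb(a, b)))  # [2, 3, 4, 5, 6, 7, 8, 4]
```

Note the last lane: 250 + 10 wraps to 4 within its own 8 bits without carrying into the neighboring lane, which is what distinguishes a packed add from a plain 64-bit add.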
As shown in Figure 7, multiprocessor system 700 is a point-to-point interconnect system, and includes a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. While shown with only two processors 770, 780, it is to be understood that the scope of embodiments of the disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given processor. In one embodiment, the multiprocessor system 700 may implement hybrid cores as described herein. [0071] Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 also includes as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 includes P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in Figure 7, IMCs 772 and 782 couple the processors to respective memories, namely a memory 732 and a memory 734, which may be portions of main memory locally attached to the respective processors. [0072] Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point to point interface circuits 776, 794, 786, 798. Chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739. [0073] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors’ local cache information may be stored in the shared cache if a processor is placed into a low power mode. [0074] Chipset 790 may be coupled to a first bus 716 via an interface 796.
In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited. [0075] As shown in Figure 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727, and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 7, a system may implement a multi-drop bus or other such architecture. [0076] Referring now to Figure 8, shown is a block diagram of a system 800 in which one embodiment of the disclosure may operate. The system 800 may include one or more processors 810, 815, which are coupled to graphics memory controller hub (GMCH) 820. The optional nature of additional processors 815 is denoted in Figure 8 with broken lines. In one embodiment, processors 810, 815 implement hybrid cores according to embodiments of the disclosure. [0077] Each processor 810, 815 may be some version of the circuit, integrated circuit, processor, and/or silicon integrated circuit as described above. However, it should be noted that it is unlikely that integrated graphics logic and integrated memory control units would exist in the processors 810, 815. Figure 8 illustrates that the GMCH 820 may be coupled to a memory 840 that may be, for example, a dynamic random access memory (DRAM).
The DRAM may, for at least one embodiment, be associated with a non-volatile cache. [0078] The GMCH 820 may be a chipset, or a portion of a chipset. The GMCH 820 may communicate with the processor(s) 810, 815 and control interaction between the processor(s) 810, 815 and memory 840. The GMCH 820 may also act as an accelerated bus interface between the processor(s) 810, 815 and other elements of the system 800. For at least one embodiment, the GMCH 820 communicates with the processor(s) 810, 815 via a multi-drop bus, such as a frontside bus (FSB) 895. [0079] Furthermore, GMCH 820 is coupled to a display 845 (such as a flat panel or touchscreen display). GMCH 820 may include an integrated graphics accelerator. GMCH 820 is further coupled to an input/output (I/O) controller hub (ICH) 850, which may be used to couple various peripheral devices to system 800. Shown for example in the embodiment of Figure 8 is an external graphics device 860, which may be a discrete graphics device, coupled to ICH 850, along with another peripheral device 870. [0080] Alternatively, additional or different processors may also be present in the system 800. For example, additional processor(s) 815 may include additional processor(s) that are the same as processor 810, additional processor(s) that are heterogeneous or asymmetric to processor 810, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There can be a variety of differences between the processor(s) 810, 815 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processors 810, 815.
For at least one embodiment, the various processors 810, 815 may reside in the same die package. [0081] Referring now to Figure 9, shown is a block diagram of a system 900 in which an embodiment of the disclosure may operate. Figure 9 illustrates processors 970, 980. In one embodiment, processors 970, 980 may implement hybrid cores as described above. Processors 970, 980 may include integrated memory and I/O control logic (“CL”) 972 and 982, respectively, and intercommunicate with each other via point-to-point interconnect 950 between point-to-point (P-P) interfaces 978 and 988, respectively. Processors 970, 980 each communicate with chipset 990 via point-to-point interconnects 952 and 954 through the respective P-P interfaces 976 to 994 and 986 to 998 as shown. For at least one embodiment, the CL 972, 982 may include integrated memory controller units. CLs 972, 982 may include I/O control logic. As depicted, memories 932, 934 are coupled to CLs 972, 982, and I/O devices 914 are also coupled to the control logic 972, 982. Legacy I/O devices 915 are coupled to the chipset 990 via interface 996. [0082] Embodiments may be implemented in many different system types. Figure 10 is a block diagram of a SoC 1000 in accordance with an embodiment of the present disclosure. Dashed lined boxes are optional features on more advanced SoCs.
In Figure 10, an interconnect unit(s) 1012 is coupled to: an application processor 1020 which includes a set of one or more cores 1002A-N and shared cache unit(s) 1006; a system agent unit 1010; a bus controller unit(s) 1016; an integrated memory controller unit(s) 1014; a set of one or more media processors 1018 which may include integrated graphics logic 1008, an image processor 1024 for providing still and/or video camera functionality, an audio processor 1026 for providing hardware audio acceleration, and a video processor 1028 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, a memory module may be included in the integrated memory controller unit(s) 1014. In another embodiment, the memory module may be included in one or more other components of the SoC 1000 that may be used to access and/or control a memory. The application processor 1020 may include a store address predictor for implementing hybrid cores as described in embodiments herein. [0083] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1006, and external memory (not shown) coupled to the set of integrated memory controller units 1014. The set of shared cache units 1006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. [0084] In some embodiments, one or more of the cores 1002A-N are capable of multi-threading. The system agent 1010 includes those components coordinating and operating cores 1002A-N. The system agent unit 1010 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1002A-N and the integrated graphics logic 1008.
The display unit is for driving one or more externally connected displays. [0085] The cores 1002A-N may be homogenous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 1002A-N may be in order while others are out-of-order. As another example, two or more of the cores 1002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. [0086] The application processor 1020 may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, Atom™ or Quark™ processor, which are available from Intel™ Corporation, of Santa Clara, Calif. Alternatively, the application processor 1020 may be from another company, such as ARM Holdings™, Ltd, MIPS™, etc. The application processor 1020 may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. The application processor 1020 may be implemented on one or more chips. The application processor 1020 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. [0087] Figure 11 is a block diagram of an embodiment of a system on-chip (SoC) design in accordance with the present disclosure. As a specific illustrative example, SoC 1100 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. Often a UE connects to a base station or node, which potentially corresponds in nature to a mobile station (MS) in a GSM network. [0088] Here, SoC 1100 includes two cores, 1106 and 1107.
Cores 1106 and 1107 may conform to an Instruction Set Architecture, such as an Intel® Architecture Core™-based processor, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores 1106 and 1107 are coupled to cache control 1108 that is associated with bus interface unit 1109 and L2 cache 1110 to communicate with other parts of system 1100. Interconnect 1110 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which potentially implements one or more aspects of the described disclosure. In one embodiment, cores 1106, 1107 may implement hybrid cores as described in embodiments herein.[0089] Interconnect 1110 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1130 to interface with a SIM card, a boot ROM 1135 to hold boot code for execution by cores 1106 and 1107 to initialize and boot SoC 1100, a SDRAM controller 1140 to interface with external memory (e.g. DRAM 1160), a flash controller 1145 to interface with non-volatile memory (e.g. Flash 1165), a peripheral control 1150 (e.g. Serial Peripheral Interface) to interface with peripherals, video codecs 1120 and Video interface 1125 to display and receive input (e.g. touch enabled input), GPU 1115 to perform graphics related computations, etc. Any of these interfaces may incorporate aspects of the disclosure described herein. In addition, the system 1100 illustrates peripherals for communication, such as a Bluetooth module 1170, 3G modem 1175, GPS 1180, and Wi-Fi 1185.[0090] Figure 12 illustrates a diagrammatic representation of a machine in the example form of a computer system 1200 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. 
In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. [0091] The computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1206 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1218, which communicate with each other via a bus 1230. [0092] Processing device 1202 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets.
Processing device 1202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. In one embodiment, processing device 1202 may include one or more processing cores. The processing device 1202 is configured to execute the processing logic 1226 for performing the operations and steps discussed herein. In one embodiment, processing device 1202 is the same as processor architecture 100 described with respect to Figure 1, in accordance with embodiments of the disclosure. [0093] The computer system 1200 may further include a network interface device 1208 communicably coupled to a network 1220. The computer system 1200 also may include a video display unit 1210 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1212 (e.g., a keyboard), a cursor control device 1214 (e.g., a mouse), and a signal generation device 1216 (e.g., a speaker). Furthermore, computer system 1200 may include a graphics processing unit 1222, a video processing unit 1228, and an audio processing unit 1232. [0094] The data storage device 1218 may include a machine-accessible storage medium 1224 on which is stored software 1226 implementing any one or more of the methodologies of functions described herein, such as implementing store address prediction for memory disambiguation as described above.
The software 1226 may also reside, completely or at least partially, within the main memory 1204 as instructions 1226 and/or within the processing device 1202 as processing logic 1226 during execution thereof by the computer system 1200; the main memory 1204 and the processing device 1202 also constituting machine-accessible storage media. [0095] The machine-readable storage medium 1224 may also be used to store instructions 1226 implementing store address prediction for hybrid cores such as described according to embodiments of the disclosure. While the machine-accessible storage medium 1224 is shown in an example embodiment to be a single medium, the term “machine-accessible storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-accessible storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-accessible storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. [0096] The following examples pertain to further embodiments. Example 1 is a processing system including a processor to construct an input message comprising a plurality of padding bits and a hardware accelerator, communicatively coupled to the processor, comprising a first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash based on the input message, wherein the hardware accelerator comprises a first data path coupled between a first reference node and a first input node of the first plurality of circuits to feed a first padding bit of the plurality of padding bits to the first input node.
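The padding bits of Example 1 are fixed by the SHA-256 specification once the message length is known, which is why a hardware implementation can feed them from fixed reference nodes rather than registers. As an illustration (not the disclosed circuit), the following computes the padding for an 80-byte input, the size of a Bitcoin block header:

```python
def sha256_padding(msg_len_bytes):
    """SHA-256 padding: a 0x80 byte, zeros to 56 mod 64, then the
    64-bit big-endian bit length of the message."""
    pad = b"\x80"
    pad += b"\x00" * ((56 - (msg_len_bytes + 1)) % 64)
    pad += (msg_len_bytes * 8).to_bytes(8, "big")
    return pad

pad = sha256_padding(80)
# An 80-byte message occupies two 64-byte blocks; the padding fills
# out the second block, so every padding bit is a known constant.
assert (80 + len(pad)) % 64 == 0
print(len(pad))  # 48
```

Because these 48 bytes are identical for every candidate header, they carry no information per message and can be hardwired, which is the premise of the fixed reference nodes in Examples 1 and 9.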
[0097] In Example 2, the subject matter of Example 1 further provides that the hardware accelerator comprises a second plurality of circuits to perform a stage-2 SHA hash, and a second data path coupled between a second reference node and a second input node of the second plurality of circuits to feed a second padding bit of the plurality of padding bits to the second input node. [0098] In Example 3, the subject matter of any of Examples 1 and 2 further provides that the first plurality of circuits is to perform a first plurality of rounds of compression on a first plurality of state data associated with the stage-1 SHA hash, and the second plurality of circuits is to perform a second plurality of rounds of compression on a second plurality of state data associated with the stage-2 SHA hash, wherein the hardware accelerator comprises a plurality of registers to store the second plurality of state data, and wherein the hardware accelerator comprises a third data path coupled between a third reference node supplying an initial value and at least one of the plurality of registers. [0099] In Example 4, the subject matter of Example 1 further comprises a clock gate circuit to convert a system clock to a gated clock and to supply the gated clock to the first plurality of circuits, wherein the gated clock is to enable rounds 0 through 2 of the first plurality of rounds of compression, and disable the rounds 0 through 2 of the first plurality of rounds of compression. [00100] In Example 5, the subject matter of Example 1 further provides that the input message comprises a nonce, and wherein the hardware accelerator comprises a plurality of data paths to feed bits of the nonce to circuits to perform a round 3 of the first plurality of rounds of compression. [00101] In Example 6, the subject matter of any of Examples 1 and 5 further provides that responsive to an increment of the nonce, the hardware accelerator is to increment by a same amount at least one state data associated
with the round 3 of the first plurality of rounds of compression. [00102] In Example 7, the subject matter of Example 6 further provides that the hardware accelerator is to subtract a constant value from the at least one state data in rounds 4 through 6 of the first plurality of rounds of compression. [00103] In Example 8, the subject matter of Example 7 further provides that the hardware accelerator is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of the nonce in Bitcoin mining, and wherein responsive to determining that the nonce is one of valid or invalid, the processor is to increment a value of the nonce to generate a new input message. [00104] In Example 9, the subject matter of Example 1 further provides that the first data path comprises a hardwire coupled between the first reference node and the first input node, and wherein the first reference node supplies a fixed reference value. [00105] Example 10 is an application specific integrated circuit (ASIC) comprising a first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash based on an input message comprising a plurality of padding bits, and a data path coupled between a first reference node and a first input node of the first plurality of circuits to feed a first padding bit of the plurality of padding bits to the first input node. [00106] In Example 11, the subject matter of Example 10 further provides that the ASIC comprises a second plurality of circuits to perform a stage-2 SHA hash; and a second data path coupled between a second reference node and a second input node of the second plurality of circuits to feed a second padding bit of the plurality of padding bits to the second input node. [00107] In Example 12, the subject matter of any of Examples 10 and 11 further provides that the first plurality of circuits is to perform a first plurality of rounds of compression on a first plurality of state data associated with the stage-1 SHA hash,
and the second plurality of circuits is to perform a second plurality of rounds of compression on a second plurality of state data associated with the stage-2 SHA hash, wherein the ASIC comprises a plurality of registers to store the second plurality of state data, and wherein the ASIC comprises a third data path coupled between a third reference node supplying an initial value and at least one of the plurality of registers. [00108] In Example 13, the subject matter of Example 12 further comprises a clock gate circuit to convert a system clock to a gated clock and to supply the gated clock to the first plurality of circuits, wherein the gated clock is to enable rounds 0 through 2 of the first plurality of rounds of compression, and disable the rounds 0 through 2 of the first plurality of rounds of compression. [00109] In Example 14, the subject matter of Example 10 further provides that the input message comprises a nonce, and wherein the ASIC comprises a plurality of data paths to feed bits of the nonce to circuits to perform a round 3 of the first plurality of rounds of compression. [00110] In Example 15, the subject matter of any of Examples 10 and 14 further provides that responsive to an increment of the nonce, the ASIC is to increment by a same amount at least one state data associated with the round 3 of the first plurality of rounds of compression.
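The sequential stage-1/stage-2 flow recited in Examples 8 and 14-15 can be sketched in software using Python's hashlib in place of the hardware datapaths. The 76-byte header prefix and the difficulty target below are invented placeholders, not values from the disclosure:

```python
import hashlib

def nonce_is_valid(header_prefix, nonce, target):
    """Stage-1 then stage-2 SHA-256 over header||nonce; valid if hash < target."""
    msg = header_prefix + nonce.to_bytes(4, "little")
    stage1 = hashlib.sha256(msg).digest()      # stage-1 SHA hash
    stage2 = hashlib.sha256(stage1).digest()   # stage-2 SHA hash
    return int.from_bytes(stage2, "big") < target

def search(header_prefix, target, max_tries):
    """Increment the nonce after each try, as the processor of Example 17 does."""
    for nonce in range(max_tries):
        if nonce_is_valid(header_prefix, nonce, target):
            return nonce
    return None

prefix = b"\x00" * 76        # placeholder 76-byte header prefix
easy_target = 1 << 255       # easy difficulty, so a valid nonce is found quickly
print(search(prefix, easy_target, 10000))
```

Each candidate nonce changes only a 4-byte field in an otherwise fixed message, which is why Examples 5, 6, and 14-15 can restrict the nonce-dependent updates to round 3 state and apply the same increment there rather than redoing the unchanged early rounds.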
[00111] In Example 16, the subject matter of Example 15 further provides that the ASIC is to subtract a constant value from the at least one state data in rounds 4 through 6 of the first plurality of rounds of compression.

[00112] In Example 17, the subject matter of Example 16 further provides that the ASIC is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of the nonce in Bitcoin mining, and wherein responsive to determining that the nonce is one of valid or invalid, the processor is to increment a value of the nonce to generate a new input message.

[00113] In Example 18, the subject matter of Example 17 further provides that the first data path comprises a hardwire coupled between the first reference node and the first input node, and wherein the first reference node supplies a fixed reference value.

[00114] Example 19 is a method comprising receiving, by a hardware accelerator, an input message comprising a first padding bit, feeding, using a first data path coupled between a first reference node and a first input node of a first plurality of circuits, the first padding bit to the first input node of the first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash, and performing, by the hardware accelerator, the stage-1 SHA hash based on the input message.

[00115] In Example 20, the subject matter of Example 19 further comprises providing, using a second data path coupled between a second reference node and a second input node of a second plurality of circuits, a second padding bit to the second input node of the second plurality of circuits to perform a stage-2 SHA hash, wherein the hardware accelerator is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of a nonce in Bitcoin mining.

[00116] Example 21 is an apparatus comprising: means for performing the method of any of Examples 19 and 20.

[00117] Example 22 is a machine-readable non-transitory medium having stored
thereon program code that, when executed by a processor, performs operations comprising receiving, by a hardware accelerator, an input message comprising a first padding bit, feeding, using a first data path coupled between a first reference node and a first input node of a first plurality of circuits, the first padding bit to the first input node of the first plurality of circuits to perform a stage-1 secure hash algorithm (SHA) hash, and performing, by the hardware accelerator, the stage-1 SHA hash based on the input message.

[00118] In Example 23, the subject matter of Example 22 further provides that the operations further comprise providing, using a second data path coupled between a second reference node and a second input node of a second plurality of circuits, a second padding bit to the second input node of the second plurality of circuits to perform a stage-2 SHA hash, wherein the hardware accelerator is to perform the stage-1 SHA hash and stage-2 SHA hash sequentially to determine a validity of a nonce in Bitcoin mining.

[00119] While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this disclosure.

[00120] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model.
In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

[00121] A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap.
For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

[00122] Use of the phrase ‘configured to,’ in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still ‘configured to’ perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate ‘configured to’ provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term ‘configured to’ does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

[00123] Furthermore, use of the phrases ‘to,’ ‘capable of/to,’ and/or ‘operable to,’ in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner.
Note as above that use of ‘to,’ ‘capable to,’ or ‘operable to,’ in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.

[00124] A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1’s and 0’s, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

[00125] Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set.
Note that any combination of values may be utilized to represent any number of states.

[00126] The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory mediums that may receive information therefrom.

[00127] Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media.
Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROM), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

[00128] Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[00129] In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims.
The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
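The sequential stage-1/stage-2 SHA hash over a nonce-bearing input message that the Examples above describe corresponds to Bitcoin's standard double SHA-256 of an 80-byte block header, where the 4-byte nonce occupies the final message words. A minimal software sketch of that flow (the function names and byte layout here are illustrative, not part of the claimed hardware):

```python
import hashlib
import struct

def check_nonce(header_base, nonce, target):
    """Stage-1 then stage-2 SHA-256 over an 80-byte block header.

    header_base is the 76-byte header prefix (version, previous block
    hash, Merkle root, timestamp, difficulty bits); the 4-byte nonce is
    appended last, which is why incrementing the nonce perturbs only
    the late message words fed to the stage-1 compression rounds.
    """
    header = header_base + struct.pack("<I", nonce)   # new input message
    stage1 = hashlib.sha256(header).digest()          # stage-1 SHA hash
    stage2 = hashlib.sha256(stage1).digest()          # stage-2 SHA hash
    # Bitcoin compares the final digest, read little-endian, to a target.
    return int.from_bytes(stage2, "little") < target

def mine(header_base, target, max_tries):
    # Responsive to an invalid nonce, increment it to form a new message.
    for nonce in range(max_tries):
        if check_nonce(header_base, nonce, target):
            return nonce
    return None
```

Because the padding bits appended to each stage's message are constants of the scheme, a hardware implementation can hardwire them to reference nodes, as Examples 10, 18, and 22 describe, rather than compute them per message.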
A processor can operate in three different modes. In an active mode, a first voltage is provided to the processor, where the first voltage is sufficient to allow the processor to execute instructions (402). In a low-power mode, a retention voltage is provided to the processor (408). The processor consumes less power in the retention mode than in the active mode. In addition, the processor can operate in a third mode, where a voltage is provided to the processor sufficient to allow the processor to process cache messages, such as coherency messages, but not to execute other normal operations, or to perform normal operations only at a very low speed relative to their performance in the active mode (412).
WHAT IS CLAIMED IS:

1. A method comprising: providing a first operating voltage to a processor for a first period (402); executing instructions at the processor for the first period (404); in response to receiving a mode change indicator during the first period (406), providing a retention voltage to the processor during a second period (408), wherein the processor is in a retention state during the second period, and wherein the retention voltage is lower than the first operating voltage; in response to receiving a first cache message during the second period (410), providing a second operating voltage to the processor during a third period (412), the second operating voltage lower than the first operating voltage and greater than the retention voltage; and processing the first cache message during the third period (414).

2. The method of claim 1, wherein the first cache message is a cache coherency message.

3. The method of claim 1, further comprising providing the retention voltage to the processor during a fourth period in response to completing processing of the first cache message (416).

4. The method of claim 3, further comprising: receiving a second cache message during the fourth period (206); and providing the second operating voltage during a fifth period in response to receiving the second cache message (208).

5. The method of claim 1, further comprising: providing a clock signal having a first frequency to the processor for the first period (160); and providing a clock signal having a second frequency to the processor for the third period (160).

6. The method of claim 5, further comprising determining the second frequency based on a number of cache messages received.

7. The method of claim 6, wherein determining the second frequency comprises determining the second frequency based on a number of cache messages received in a first period of time.

8.
A device, comprising: a processor (102) comprising a processor core and a cache; a mode control module (140) configured to control a mode of operation of the processor; and a voltage regulator (130) configured to: set an operating voltage of the processor to a first voltage in response to the mode control module indicating an active mode of the processor; set the operating voltage of the processor to a second voltage lower than the first voltage in response to the mode control module indicating a low processing mode of the processor, wherein the processor is enabled to process cache messages in the low processing mode; and set the operating voltage to a third voltage in response to the mode control module indicating the processor is in a retention mode, the third voltage lower than the second voltage.

9. The device of claim 8, wherein the mode control module is configured to set the mode of operation to the low processing mode in response to the processor receiving a cache message when the processor is in the retention mode.

10. The device of claim 9, wherein the mode control module is configured to set the mode of operation to the retention mode in response to the processor completing processing of the cache message.
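The three-mode scheme of claims 8 through 10 amounts to a small state machine mapping each mode to a regulator voltage. A sketch under assumed example levels (the ~1.1 V / ~0.78 V / ~0.7 V values follow the embodiment in the detailed description; the names and transition helpers are illustrative):

```python
from enum import Enum

class Mode(Enum):
    ACTIVE = "active"
    CACHE_TXN = "cache-transaction processing"   # "low processing mode" in claim 8
    RETENTION = "retention"

# Illustrative VDD levels per mode: ~1.1 V active, ~0.75-0.8 V
# cache-transaction processing, ~0.7 V retention.
VDD = {Mode.ACTIVE: 1.1, Mode.CACHE_TXN: 0.78, Mode.RETENTION: 0.7}

def on_cache_message(mode):
    """Claim 9: a cache message received in the retention mode moves
    the processor to the low processing (cache-transaction) mode."""
    return Mode.CACHE_TXN if mode is Mode.RETENTION else mode

def on_processing_complete(mode):
    """Claim 10: completing processing of the cache message returns
    the processor from the low processing mode to the retention mode."""
    return Mode.RETENTION if mode is Mode.CACHE_TXN else mode
```

The small voltage step between retention and cache-transaction processing is what makes the transition of claim 9 fast relative to a full wake into the active mode.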
DATA PROCESSING DEVICE WITH LOW-POWER CACHE ACCESS MODE

FIELD OF THE DISCLOSURE

The present disclosure relates to processors and more particularly to processors that process cache transactions in multiple modes.

BACKGROUND

Some processors can operate in multiple modes, such as an active mode and a low power or sleep mode. In an active mode, a voltage regulator provides a voltage to the processor that allows the processor to execute instructions and perform normal operations. In the low power mode, the voltage regulator provides a retention voltage to the processor that allows the processor to retain its internal state, but not execute instructions or other normal operations. The retention voltage is lower than the voltage provided in the active mode, thereby allowing the processor to conserve power. The processor can enter the low power mode to conserve power but retain its internal state so that when it returns to the active mode it is able to continue operations from the state it had prior to entering the low power mode.

Some processors can support a coherent memory space or allow other modules of a device to access the processor cache. In order to perform cache transactions to maintain coherency or to service access requests from other modules received while the processor is in low-power mode, conventional processors switch from the low-power mode to the active mode. However, due to physical characteristics of the voltage regulator, the processor cannot quickly change from the low power mode to the active mode. Thus, conventional processors typically enter the low-power mode less frequently as more cache transaction requests are received, and therefore are in the low power mode less often. This can result in an undesirable consumption of power by the processor.
Accordingly, there is a need for a new processing device and methods.

BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings.

FIG. 1 is a block diagram of a particular embodiment of a device incorporating a processor having multiple operating modes;

FIG. 2 is a diagram of a particular embodiment of operating voltages provided to the processor of FIG. 1;

FIG. 3 is a block diagram of a frequency control module of the device of FIG. 1;

FIG. 4 is a flow diagram of a particular embodiment of a method of configuring an operating mode of a processor; and

FIG. 5 is a flow diagram of an alternative embodiment of configuring an operating mode of a processor.

DETAILED DESCRIPTION

A processor that can operate in three different modes is disclosed. In an active mode, a first voltage (referred to herein as an "active voltage") is provided to the processor, where the first voltage is sufficient to allow the processor to execute instructions. In a low-power mode, a retention voltage is provided to the processor, where the retention voltage is insufficient for the processor to execute instructions, but is sufficient to allow the processor to retain state information stored prior to entering the low-power mode. The processor consumes less power in the retention mode than in the active mode. In addition, the processor can operate in a third mode, referred to herein as a cache-transaction processing mode, where a voltage (referred to herein as a "cache-transaction processing voltage") is provided to the processor, where the cache-transaction processing voltage is sufficient to allow the processor to process cache messages, such as coherency messages, but not execute other normal operations or perform normal operations at a very low speed relative to their performance in the active mode.
The voltage provided to the processor in the cache-transaction processing mode is lower than the voltage in the active mode and higher than the voltage in the retention mode. If the processor is to perform a cache transaction when it is in the low power mode, it enters the cache-transaction processing mode and processes the cache transaction. Once processing of the cache transaction is complete, the processor returns to the low-power mode. Because there is a relatively small voltage difference between the retention voltage provided in the low-power mode and the cache-transaction processing voltage provided to the processor in the cache-transaction processing mode (as compared to the voltage difference between the retention voltage and the active voltage), the processor is able to more rapidly transition between the low-power mode and the cache-transaction processing mode (as compared to the transition from the low power mode to the active mode), thereby allowing the processor to remain in the low-power mode for a longer period of time and reducing power consumption of the processor.

Referring to FIG. 1, a block diagram of a system 100 is disclosed. The system 100 includes a processor 102, a bus 103, a processor 105, a peripheral device 106, a peripheral device 107, and a voltage regulator 130. The processor 102, processor 105, and peripheral devices 106 and 107 are each connected to the bus 103. The processor 102 includes an output to provide a signal, labeled V_CTRL, to an input of the voltage regulator 130. In addition, the processor 102 includes an input to receive an adjustable voltage, labeled VDD, from an output of the voltage regulator 130. The processor 102 and the processor 105 can each be a microprocessor, microcontroller, an application specific integrated circuit (ASIC), and the like. The peripheral devices 106 and 107 can each be a memory controller, input/output controller, peripheral controller, and the like.
In addition, each of the illustrated portions of the system 100 can be integrated on a common semiconductor substrate, or be located on different substrates. For example, the processor 102 and the processor 105 can be integrated on a common semiconductor substrate, with the peripheral devices 106 and 107 located external to that semiconductor substrate. In the illustrated embodiment, the voltage regulator 130 is located external to the processor 102. In other embodiments, the voltage regulator 130 can be implemented internal to the processor 102.

During operation, the processor 102 can operate in an active mode, a low-power mode, and a cache-transaction processing mode. In the active mode, the processor 102 can execute instructions and perform other normal operations. In the low-power mode, the processor 102 is placed in a retention state, so that the state of the processor 102 is retained. In the low-power mode the processor 102 cannot execute instructions or perform other normal operations. In the cache-transaction processing mode, the processor 102 is able to process cache messages provided by the processor 105 and the peripheral devices 106 and 107 via the bus 103. The cache messages represent requests for the processor 102 to process cache transactions, such as coherency transactions or access transactions. The processor 102 processes the cache message by analyzing the cache message and, when appropriate, performing the requested cache transaction.

The voltage regulator 130 provides a different level of the operating voltage VDD for each of the three modes of the processor 102. In the active mode, the operating voltage VDD is set to the active voltage level to allow the processor 102 to execute instructions. In the cache-transaction processing mode, the operating voltage VDD is set to the cache-transaction processing voltage level, which is lower than the active voltage level.
This voltage level allows the processor 102 to process cache messages, but not to perform other normal operations, or to perform them only at a low speed relative to their performance in the active mode. In the low-power mode, the operating voltage VDD is set to a retention voltage, so that the processor 102 is able to retain state information but cannot execute instructions. The retention voltage is lower than the cache-transaction processing voltage. In a particular embodiment, the retention voltage is about 0.7 volts, the cache-transaction processing voltage is between about 0.75 and about 0.8 volts, and the active voltage is about 1.1 volts. The level of the voltage VDD is controlled by the signal V_CTRL. Accordingly, when the processor 102 enters a new mode, it configures the voltage regulator 130 to set the voltage VDD to the appropriate level for the new mode using the signal V_CTRL.

The processor 102 can change modes depending on different factors. For example, the processor 102 can change from the active mode to the low-power mode after a predetermined period of time where no user input to the system 100 has been received. In the low-power mode, the processor 102 can still receive cache messages from the processor 105 or the peripheral devices 106 and 107. Examples of cache messages that can cause the processor 102 to enter cache-transaction processing mode can include cache probe or cache read messages (e.g. messages to check if a cache location contains modified data), cache invalidate messages (e.g. messages indicating that a particular cache line should be invalidated because data associated with that cache line has been modified by one of the peripheral devices 106 and 107 or by the processor 105), and cache write messages (e.g. messages that allow the peripheral devices 106 and 107 and the processor 105 to write directly to the cache).
In response to receiving the cache message, the processor 102 can enter the cache-transaction processing mode, process the cache message, and return to the low-power mode upon completion of processing. Because the processor 102 does not have to enter the active mode to process the cache message, it is able to return to the low-power mode more quickly, thereby conserving power.

The processor 102 includes a processor core 110, a cache 120, a mode control module 140, a coherency agent 150, and a frequency control module 160. The processor core 110 includes a bi-directional connection to the cache 120. The processor core 110 also includes an input to receive a signal FRQ_CTRL and an input to receive a signal C_CTRL1. The cache 120 includes an input to receive a signal C_CTRL2. The coherency agent 150 includes outputs to provide control signals C_CTRL1, C_CTRL2, and C_CTRL3. The mode control module 140 includes an input to receive the signal C_CTRL3, an output to provide the signal V_CTRL, an output to provide the signal MODE_INDICATOR, and an output to provide the signal M_RCV. The frequency control module 160 includes an input to receive the signal M_RCV and an output to provide the signal FRQ_CTRL.

The processor core 110 is configured to execute instructions in the active mode, and to perform other operations, such as processing cache messages, in the active mode and the cache-transaction processing mode. The processor core 110 is also configured to provide access requests and coherency information to the cache 120. The cache 120 is configured to provide and store data in response to requests provided by the processor core 110 or information provided via the signal C_CTRL2. The cache 120 also maintains coherency information for its stored data, and can modify that coherency information based on requests from the processor core 110 or information provided via the signal C_CTRL2.
The coherency agent 150 is configured to receive cache messages, representing cache transaction requests, via the bus 103 from the processor 105 and the peripheral devices 106 and 107. The cache messages can represent coherency transactions or cache access requests from the processor 105 and the peripheral devices 106 and 107. The coherency agent 150 provides information about the received cache messages via the signals C_CTRL1, C_CTRL2, and C_CTRL3.

The mode control module 140 is configured to receive information about received cache messages and is configured to control the mode of operation of the processor 102. To control the mode of operation, the mode control module 140 provides information via the V_CTRL signal to set the operating voltage VDD, information via the MODE_INDICATOR signal to set the clock frequency for the processor core 110 in each mode of operation, and information via the M_RCV signal to indicate that a cache message has been received.

The frequency control module 160 is configured to receive information via the M_RCV signal indicating that a cache message has been received, and information via the MODE_INDICATOR signal indicating the mode of operation for the processor 102. The frequency control module 160 is configured to provide information via the FRQ_CTRL signal to set the clock frequency of the processor core 110 depending on the mode of operation for the processor 102. The frequency control module 160 is further configured to determine the number of cache messages received in a defined period of time and, based on this determination, provide information via the FRQ_CTRL signal to change the clock frequency of the processor core 110 in the cache-transaction processing mode.

During operation, in the active mode the processor core 110 executes instructions to perform tasks of the processor 102.
The coherency agent 150 ensures that the cache 120 remains coherent with other memory of the system 100, such as a cache of the processor 105 (not shown) or memory controlled by one of the peripheral devices 106 and 107. The coherency agent 150 receives cache messages, such as coherency messages, via the bus 103. Based on the received cache messages, the coherency agent 150 provides coherency information to the processor core 110 and the cache 120 via the signals C_CTRL1 and C_CTRL2, respectively. For example, in response to receiving a cache message indicating that data associated with a memory address has been modified by the processor 105, the coherency agent 150 notifies the processor core 110 and the cache 120 of the modification. In response, the processor core 110 and the cache 120 determine if the cache 120 stores data associated with that memory address and, if so, take appropriate action such as invalidating the cache line.

The mode control module 140 controls the operational mode of the processor 102 depending on the operating conditions of the system 100 and other factors. For example, the mode control module 140 can change the mode of operation from the active mode to the low-power mode if there has not been a user input to the system 100 in a defined amount of time, if there has been no bus activity for a defined amount of time, or if an operating system or other software executing at the processor 102 or the processor 105 directs the processor 102 to enter the low-power mode. The mode control module 140 can also change the mode of operation from the low-power mode to the active mode in response to a user input or interrupt being received. To change the mode of operation, the mode control module 140 provides the signal V_CTRL to the voltage regulator to change the operating voltage VDD for the processor 102.
In addition, the mode control module 140 indicates the mode of operation to the frequency control module 160 via the signal MODE_INDICATOR to set the clock frequency for the processor core 110 in each mode.

In response to receiving a coherency message, the coherency agent 150 notifies the mode control module 140 via the signal C_CTRL3. In response, if the processor 102 is in the low-power mode, the mode control module 140 changes the mode of operation to the cache-transaction processing mode. The mode control module 140 provides the signal V_CTRL to set the operating voltage VDD to the appropriate level so that the processor 102 can process the cache message. In addition, the mode control module 140 notifies the frequency control module 160 that a cache message has been received. Once the processor core 110 has completed processing the cache message, the mode control module 140 returns the processor 102 to the low-power mode, including changing the level of the operating voltage VDD, thereby conserving power. In an alternative embodiment, the mode control module 140 may change the mode of operation of the processor 102 only after a threshold number of cache messages have been received. In this case, the mode control module 140 returns the processor to the low-power mode once all pending cache messages have been processed.

The frequency control module 160 sets the clock frequency for the processor core 110 based on the MODE_INDICATOR signal. In a particular embodiment, the clock frequency is set to about zero in the low-power mode, and in the cache-transaction processing mode is set to a slower frequency than in the active mode. In addition, in the cache-transaction processing mode the frequency control module 160 measures the number of cache messages received in a certain period of time.
If the number of received cache messages exceeds a threshold, the frequency control module 160 provides information via the FRQ_CTRL signal to change the clock frequency for the processor core 110 in the cache-transaction processing mode. This causes the processor core 110 to consume more power but process the cache messages more quickly. Accordingly, by setting the threshold number of cache messages appropriately, the overall power consumption of the processor 102 can be reduced.

Referring to FIG. 2, a diagram depicting an example voltage output 202 for the voltage regulator 130 of FIG. 1 during operation of the system 100 is illustrated. The y-axis of the illustrated diagram indicates the level of the voltage VDD, while the x-axis indicates time. As illustrated, in the time period 204, the processor 102 is in an active mode and the operating voltage VDD is at the active voltage level. At time 205, a mode change indicator is received, indicating that the processor 102 should be placed in the low-power mode. This mode change indicator may be received in response to a user input, the lack of a user input in a predetermined period of time, or another factor. For example, software can cause the mode change indicator to be issued. In another embodiment, software can initiate issuance of the mode change indicator, but the indicator is not issued until an absence of bus activity has been detected for a period of time. In other embodiments, the mode change indicator can be issued in response to the absence of bus activity for a period of time, without software initiation. In response to the mode change indicator, the voltage VDD is changed to the retention voltage level and the processor 102 enters the low-power mode and remains in the low-power mode during time period 206. At time 207, the processor 102 receives a cache message.
In response, the processor 102 changes to the cache-transaction processing mode and the voltage level VDD is set to the cache-transaction processing voltage level. The processor 102 remains in the cache-transaction processing mode during the time period 208. In response to completion of processing of the cache message, at time 209, the processor 102 returns to the low-power mode and the voltage VDD provided by the voltage regulator 130 is set to the retention voltage. The duration of the time period 208 depends on the frequency of the clock of the processor core 110. If the number of cache messages received in a particular amount of time exceeds a threshold, the clock frequency can be adjusted to shorten the time period 208. This increases the amount of power consumed by the processor 102 during the time period 208, but allows the processor 102 to process cache messages more quickly and thus return to the low-power state, at time period 210, more quickly. Accordingly, the threshold number of cache messages can be set to reduce overall power consumption of the processor 102. At time 211, another cache message is received. In response, the voltage VDD is set to the cache-transaction processing voltage level and the processor 102 enters the cache-transaction processing mode for the time period 212. Upon completion of processing of the cache message at time 213, the processor 102 returns to the low-power mode and the voltage VDD is set to the retention voltage level for the time period 214. Thus, the processor 102 can enter the cache-transaction processing mode and return to the low-power mode each time a cache message is received. At time 215, a mode change indicator is received, indicating that the processor 102 should change to the active mode. The mode change indicator may be received in response to a user input or other factor. For example, a peripheral device may initiate an interrupt that causes the mode change indicator.
In another embodiment, the mode change indicator can be received in response to the expiration of a period of time. In response to the mode change indicator, the processor 102 changes to the active mode and the operating voltage VDD is again set to the highest level for the time period 216.

Referring to FIG. 3, a block diagram of a particular embodiment of a frequency control module 360, corresponding to the frequency control module 160 of FIG. 1, is illustrated. The frequency control module 360 includes a clock module 305, a time counter 306, a coherency message counter 310, and a frequency selection module 315. The clock module 305 includes an output to provide a clock signal CLK. The time counter 306 includes an input to receive the clock signal CLK and an output. The coherency message counter 310 includes an input, labeled RESET, connected to the output of the time counter 306. The coherency message counter 310 also includes an input to receive the signal M_RCV and an output. The frequency selection module 315 includes an input connected to the output of the coherency message counter 310, an input to receive the MODE_INDICATOR signal, and an output to provide the signal FRQ_CTRL. During operation, the frequency selection module 315 provides information via the FRQ_CTRL signal to set the clock frequency for the processor core 110 based on the mode of operation indicated by the MODE_INDICATOR signal. Further, when a coherency message is received, the coherency message counter 310 is notified via the signal M_RCV. In response, a value stored by the coherency message counter 310 is adjusted. In addition, the time counter 306 provides a signal to the RESET input to reset the coherency message counter 310 after a certain period of time, based on the clock signal CLK. In a particular embodiment, the counter 306 is a decrement counter that starts at an initial value and counts down to zero based on transitions of the clock signal CLK.
When the counter 306 reaches zero, the signal to reset the coherency message counter is provided. Thus, the value stored by the coherency message counter 310 represents the number of coherency messages received in the period of time. The period of time may be a fixed value or a programmable value. The programmable value may be set based on a BIOS value for the system 100, based on an instruction executed at the processor 102, or otherwise programmed by a user. If the value stored by the coherency message counter 310 exceeds a threshold before it is reset, indicating that the number of coherency messages received in the set period of time exceeded the threshold, the coherency message counter 310 notifies the frequency selection module 315. In response, the frequency selection module 315 provides information via the signal FRQ_CTRL to change the clock frequency for the processor core 110 when the processor 102 is in the cache-transaction processing mode. Thus, if the number of coherency messages received in a particular period of time exceeds a threshold, the frequency control module 360 adjusts the clock speed for the processor core 110 when the processor 102 is in the cache-transaction processing mode, ensuring that the coherency messages are processed more quickly, thereby allowing the processor 102 to rapidly return from the cache-transaction processing mode to the low-power mode and conserve power.

Referring to FIG. 4, a flow diagram of a particular embodiment of a method of providing voltages to a processor is illustrated. At block 402, a first operating voltage is provided to a processor during a first period, so that the processor is in an active mode. At block 404, instructions are executed at the processor during the first period. At block 406, a mode change indicator is received. In response to the mode change indicator, at block 408 a retention voltage is provided to the processor during a second period of time.
At block 410, a coherency message is received during the second period. In response, at block 412 a second operating voltage is provided to the processor during a third period. At block 414, the coherency message is processed at the processor. At block 416, in response to completion of processing the coherency message, the retention voltage is provided to the processor during a fourth period. Thus, the processor is able to process coherency messages without entering the active mode, thereby allowing the processor to return to the low-power mode more quickly, thus conserving power.

Referring to FIG. 5, a flow diagram of an alternative embodiment of a method of providing voltages to a processor is illustrated. At block 502, a processor is in a low-power mode during a first period, and therefore a retention voltage is provided to the processor during this period. At block 504, a coherency message is received during the first period, while the processor is in the low-power mode. In response, at block 506 the processor enters the cache-transaction processing mode and a first operating voltage is provided during a second period of time. At block 508, during the second period (i.e., while the processor is in the cache-transaction processing mode) the cache message is processed. At block 510, in response to completion of processing the cache message, the processor returns to the low-power mode and the retention voltage is provided during a third period of time. At block 512, a mode change indicator is received during the third period of time, while the processor is in the low-power mode. In response, at block 514 the processor enters an active mode and a second operating voltage is provided to the processor.

Other embodiments, uses, and advantages of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein.
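The mode and voltage transitions described for FIGs. 4 and 5 can be summarized in a short sketch. The class name, method names, and voltage values below are illustrative assumptions introduced for this example only; they do not appear in the disclosure.

```python
# Illustrative sketch of the three-mode voltage scheme described above.
# The specific voltage values are assumed for illustration only.
ACTIVE_V = 1.1       # active-mode operating voltage (assumed)
CACHE_TXN_V = 0.9    # cache-transaction processing voltage (assumed)
RETENTION_V = 0.7    # retention voltage in the low-power mode (assumed)

class ModeControl:
    """Toy model of the mode control module's transitions."""

    def __init__(self):
        self.mode = "active"
        self.vdd = ACTIVE_V

    def enter_low_power(self):
        # Mode change indicator received: drop to the retention voltage.
        self.mode = "low_power"
        self.vdd = RETENTION_V

    def on_cache_message(self):
        # A coherency message wakes the processor only far enough to
        # service the message, not all the way to the active mode.
        if self.mode == "low_power":
            self.mode = "cache_txn"
            self.vdd = CACHE_TXN_V

    def on_cache_message_done(self):
        # Completion of processing returns the processor to retention.
        if self.mode == "cache_txn":
            self.enter_low_power()

    def on_mode_change_indicator(self):
        # E.g., a user input or interrupt returns the processor to active.
        self.mode = "active"
        self.vdd = ACTIVE_V

mc = ModeControl()
mc.enter_low_power()
mc.on_cache_message()
assert mc.mode == "cache_txn" and mc.vdd == CACHE_TXN_V
mc.on_cache_message_done()
assert mc.mode == "low_power" and mc.vdd == RETENTION_V
```

Note that the sketch omits the message-counter refinement: in the disclosure, the frequency control module can also raise the core clock in the cache-transaction processing mode when the number of messages received in a period exceeds a threshold.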
It will further be appreciated that, although some circuit elements and modules are depicted and described as connected to other circuit elements, the illustrated elements may also be coupled via additional circuit elements, such as resistors, capacitors, transistors, and the like. The specification and drawings should be considered exemplary only, and the scope of the disclosure is accordingly intended to be limited only by the following claims and equivalents thereof.
Methods and apparatuses relating to a fusion manager to fuse instructions are described. In one embodiment, a hardware processor includes a hardware binary translator to translate an instruction stream into a translated instruction stream, a hardware fusion manager to fuse multiple instructions of the translated instruction stream into a single fused instruction, a hardware decode unit to decode the single fused instruction into a decoded, single fused instruction, and a hardware execution unit to execute the decoded, single fused instruction.
CLAIMS

What is claimed is:

1. A hardware processor comprising:
a hardware binary translator to translate an instruction stream into a translated instruction stream;
a hardware fusion manager to fuse multiple instructions of the translated instruction stream into a single fused instruction;
a hardware decode unit to decode the single fused instruction into a decoded, single fused instruction; and
a hardware execution unit to execute the decoded, single fused instruction.

2. The hardware processor of claim 1, wherein the hardware fusion manager is to:
detect a zero extending load instruction and an instruction that is to read a result of the zero extending load instruction in the translated instruction stream, and
fuse the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction.

3. The hardware processor of claim 2, wherein the hardware fusion manager is to not fuse the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction unless a later instruction that is to overwrite the result of the zero extending load instruction is detected.

4. The hardware processor of claim 2, wherein the hardware fusion manager is to not fuse the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction if the hardware fusion manager detects any additional instruction of the translated instruction stream between the zero extending load instruction and a later instruction that is to overwrite and not read the result of the zero extending load instruction, other than the instruction that is to read the result of the zero extending load instruction, that is also to read the result of the zero extending load instruction.

5. The hardware processor of claim 1, wherein the hardware fusion manager is to:
detect, in the translated instruction stream, an instruction that is to produce a result and a store instruction that is to read the result, and
fuse the instruction that is to produce the result and the store instruction that is to read the result into the single fused instruction.

6. The hardware processor of claim 5, wherein the hardware fusion manager is to not fuse the instruction that is to produce the result and the store instruction that is to read the result if the hardware fusion manager detects any instruction of the translated instruction stream between the instruction that is to produce the result and the store instruction that is to read the result that is also to read the result.

7. The hardware processor of claim 5, wherein the hardware fusion manager is to not fuse the instruction that is to produce the result and the store instruction that is to read the result if the hardware fusion manager detects:
any instruction of the translated instruction stream that is also to read the result between the instruction that is to produce the result and the store instruction that is to read the result, and
the single fused instruction is to overwrite the result.

8. The hardware processor of any one of claims 1-7, wherein the instruction stream is a stream of macro-instructions.

9. A method comprising:
translating an instruction stream into a translated instruction stream with a binary translator;
fusing multiple instructions of the translated instruction stream into a single fused instruction with a fusion manager;
decoding the single fused instruction into a decoded, single fused instruction with a hardware decode unit of a hardware processor; and
executing the decoded, single fused instruction with a hardware execution unit of the hardware processor.

10. The method of claim 9, wherein the fusing comprises:
detecting a zero extending load instruction and an instruction that is to read a result of the zero extending load instruction in the translated instruction stream, and
fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction.

11. The method of claim 10, further comprising not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction unless a later instruction that is to overwrite the result of the zero extending load instruction is detected.

12. The method of claim 10, further comprising not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction if the fusion manager detects any additional instruction of the translated instruction stream between the zero extending load instruction and a later instruction that is to overwrite and not read the result of the zero extending load instruction, other than the instruction that is to read the result of the zero extending load instruction, that is also to read the result of the zero extending load instruction.

13. The method of claim 9, wherein the fusing comprises:
detecting, in the translated instruction stream, an instruction that is to produce a result and a store instruction that is to read the result, and
fusing the instruction that is to produce the result and the store instruction that is to read the result into the single fused instruction.

14. The method of claim 13, further comprising not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects any instruction of the translated instruction stream between the instruction that is to produce the result and the store instruction that is to read the result that is also to read the result.

15. The method of claim 13, further comprising not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects:
any instruction of the translated instruction stream that is also to read the result between the instruction that is to produce the result and the store instruction that is to read the result, and
the single fused instruction is to overwrite the result.

16. The method of any one of claims 9-15, wherein the instruction stream is a stream of macro-instructions.

17. A non-transitory machine readable medium that stores code that when executed by a machine causes the machine to perform a method comprising:
translating an instruction stream into a translated instruction stream with a binary translator;
fusing multiple instructions of the translated instruction stream into a single fused instruction with a fusion manager;
decoding the single fused instruction into a decoded, single fused instruction; and
executing the decoded, single fused instruction.

18. The non-transitory machine readable medium of claim 17, wherein the fusing comprises:
detecting a zero extending load instruction and an instruction that is to read a result of the zero extending load instruction in the translated instruction stream, and
fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction.

19. The non-transitory machine readable medium of claim 18, wherein the method comprises:
not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction unless a later instruction that is to overwrite the result of the zero extending load instruction is detected.

20. The non-transitory machine readable medium of claim 18, wherein the method comprises:
not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction if the fusion manager detects any additional instruction of the translated instruction stream between the zero extending load instruction and a later instruction that is to overwrite and not read the result of the zero extending load instruction, other than the instruction that is to read the result of the zero extending load instruction, that is also to read the result of the zero extending load instruction.

21. The non-transitory machine readable medium of claim 17, wherein the fusing comprises:
detecting, in the translated instruction stream, an instruction that is to produce a result and a store instruction that is to read the result, and
fusing the instruction that is to produce the result and the store instruction that is to read the result into the single fused instruction.

22. The non-transitory machine readable medium of claim 21, wherein the method comprises:
not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects any instruction of the translated instruction stream between the instruction that is to produce the result and the store instruction that is to read the result that is also to read the result.

23. The non-transitory machine readable medium of claim 21, wherein the method comprises:
not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects:
any instruction of the translated instruction stream that is also to read the result between the instruction that is to produce the result and the store instruction that is to read the result, and
the single fused instruction is to overwrite the result.

24. The non-transitory machine readable medium of any one of claims 17-23, wherein the instruction stream is a stream of macro-instructions.

25. An apparatus comprising:
means to translate an instruction stream into a translated instruction stream with a binary translator;
means to fuse multiple instructions of the translated instruction stream into a single fused instruction with a fusion manager;
means to decode the single fused instruction into a decoded, single fused instruction; and
means to execute the decoded, single fused instruction.
HARDWARE APPARATUSES AND METHODS TO FUSE INSTRUCTIONS

TECHNICAL FIELD

[0001] The disclosure relates generally to electronics, and, more specifically, an embodiment of the disclosure relates to a hardware fusion manager to fuse instructions from a binary translator.

BACKGROUND

[0002] A processor, or set of processors, executes instructions from an instruction set, e.g., the instruction set architecture (ISA). The instruction set is the part of the computer architecture related to programming, and generally includes the native data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O). It should be noted that the term instruction herein may refer to a macro-instruction, e.g., an instruction that is provided to the processor for execution, or to a micro-instruction, e.g., an instruction that results from a processor's decoder decoding macro-instructions.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The present disclosure is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

[0004] Figure 1 illustrates a hardware processor including a hardware binary translator and a hardware fusion manager according to embodiments of the disclosure.

[0005] Figure 2 illustrates a hardware processor including a hardware fusion manager according to embodiments of the disclosure.

[0006] Figure 3 illustrates a hardware processor according to embodiments of the disclosure.

[0007] Figure 4 illustrates a flow diagram of a fusion operation according to embodiments of the disclosure.

[0008] Figure 5 illustrates pseudocode of a fusion operation according to embodiments of the disclosure.

[0009] Figure 6 illustrates an input instruction stream before a fusion operation and an output instruction stream after the fusion operation according to embodiments of the disclosure.

[0010] Figure 7 illustrates pseudocode of a fusion operation according to embodiments of the disclosure.

[0011] Figure 8 illustrates an input instruction stream before a fusion operation and an output instruction stream after the fusion operation according to embodiments of the disclosure.

[0012] Figure 9A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the disclosure.

[0013] Figure 9B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the disclosure.

[0014] Figure 10A is a block diagram illustrating fields for the generic vector friendly instruction formats in Figures 9A and 9B according to embodiments of the disclosure.

[0015] Figure 10B is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 10A that make up a full opcode field according to one embodiment of the disclosure.

[0016] Figure 10C is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 10A that make up a register index field according to one embodiment of the disclosure.

[0017] Figure 10D is a block diagram illustrating the fields of the specific vector friendly instruction format in Figure 10A that make up the augmentation operation field 950 according to one embodiment of the disclosure.

[0018] Figure 11 is a block diagram of a register architecture according to one embodiment of the disclosure.

[0019] Figure 12A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the disclosure.

[0020] Figure 12B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the disclosure.

[0021] Figure 13A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments of the disclosure.

[0022] Figure 13B is an expanded view of part of the processor core in Figure 13A according to embodiments of the disclosure.

[0023] Figure 14 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the disclosure.

[0024] Figure 15 is a block diagram of a system in accordance with one embodiment of the present disclosure.

[0025] Figure 16 is a block diagram of a more specific exemplary system in accordance with an embodiment of the present disclosure.

[0026] Figure 17 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present disclosure.

[0027] Figure 18 is a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present disclosure.

[0028] Figure 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the disclosure.

DETAILED DESCRIPTION

[0029] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details.
In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.[0030] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.[0031] A (e.g., hardware) processor (e.g., having one or more cores) may execute instructions to operate on data, for example, to perform arithmetic, logic, or other functions. Code (e.g., software) to be executed on a processor may be translated from one format to another format. A (e.g., dynamic) binary translator may be utilized to translate code (e.g., an instruction) from one format to another format. A (e.g., dynamic) binary translator may be utilized to translate code (e.g., an instruction) from multiple formats to a single format. A binary translator may translate code (e.g., an instruction) from a guest format to a host format. A binary translator may translate an instruction of a first ISA into an instruction of a second ISA. A binary translator may translate (e.g., an x86 format) macro-instruction(s) into micro-instruction(s). An instruction may translate into a plurality of translated instructions, e.g., a one-to-one correspondence is not required in one embodiment.
Multiple instructions may translate into one translated instruction or a number of translated instructions that is less than the number of multiple (e.g., untranslated) instructions, e.g., a one-to-one correspondence is not required in one embodiment. A binary translator may translate a software instruction (e.g., in binary code) into a hardware instruction (e.g., in binary code), for example, for execution on a hardware processor. A (e.g., dynamic) binary translator may include hardware, software, firmware, or any combination thereof. A (e.g., dynamic) binary translator may translate one instruction (e.g., in source binary code complying with the architecture of a source processor (source architecture)) into a translated instruction (e.g., into target binary code complying with the architecture of a target processor (target architecture)). A dynamic binary translation process may take place during execution of the source binary code (e.g., at run time).[0032] In certain embodiments, it may be desired to fuse a plurality of (e.g., two) instructions into a single instruction. In certain embodiments, the fusing discussed herein may increase the performance (e.g., by having fewer instructions to execute) of a binary translation based processor (e.g., processor system) versus a processor that executes a native ISA without binary translation. Certain embodiments herein may increase the code density and/or efficiency of the binary instruction stream for execution on a binary translation based processor. For example, embodiments herein may reduce the number of instructions (e.g., macro-instructions) from two to one. Certain embodiments may reduce the load (e.g., pressure) on instruction scheduling and/or book-keeping hardware, for example, a reorder buffer(s) and reservation station(s).
Certain embodiments may reduce the load (e.g., pressure) on register allocation, for example, with the translator and/or with a hardware allocator, e.g., by the fusing of a first and second instruction eliminating the write back of the result of the first instruction into an intermediate register to be used (e.g., read) by the second (e.g., subsequent) instruction. Certain embodiments of a binary translator based processor herein may reduce the (e.g., instruction) cache footprint and/or reduce the usage of instruction fetch and/or decoding bandwidth. Certain embodiments herein may improve code (e.g., instruction) density. Certain embodiments herein may utilize a processor's decode (e.g., cracking) unit, which may be (e.g., highly) efficient in breaking macro-instructions or macro-operations into native (e.g., to the hardware processor's ISA) micro-instructions and/or micro-operations (e.g., that the processor core is designed to handle). Certain embodiments may reduce the scope of any hardware changes in a binary translation based processor, e.g., while preserving the efficiency of the base design from which the binary translation enabled processor is derived.[0033] In certain embodiments, a fusion manager may be included to fuse a plurality of (e.g., two) instructions into a single instruction. In one embodiment, a fusion manager may be implemented in hardware, software, firmware, or any combination thereof. In one embodiment, a fusion manager may fuse a plurality of macro-instructions into a single macro-instruction, for example, before any of the macro-instructions are decoded and/or executed. The single, fused instruction may generate (e.g., from its execution) the same result or results (e.g., resultant or resultants) as the unfused plurality of instructions.
In one embodiment, a fusion manager is to detect a plurality (e.g., a pair) of instructions that may be fused, e.g., without destroying (e.g., overwriting) any data that a subsequent instruction (e.g., in execution order) is to access (e.g., read). For example, a fusion manager may detect an instruction to perform an arithmetic and/or bitwise logical operation, e.g., utilizing an arithmetic logic unit (ALU) of a processor. An ALU may operate on integer numbers and/or floating point numbers (e.g., which may be referred to as a floating point unit (FPU)). An ALU may not include a memory unit and/or a branch (e.g., prediction) unit. In one embodiment, an arithmetic and/or bitwise logical operation instruction is not (e.g., only) a load instruction and/or a store instruction.[0034] In one embodiment, a fusion manager may detect a zero extending load instruction (e.g., macro-instruction) and an instruction (e.g., macro-instruction) that is to read the result (for example, a same location where the result was stored) of the zero extending load instruction and fuse them into a single instruction (e.g., macro-instruction). For example, a fusion manager may detect a zero extending load instruction and an arithmetic and/or bitwise logical operation instruction (e.g., an add, subtract, multiply, or divide) that is to read the result of the zero extending load instruction. When executed, a zero extending load instruction may only perform a zero extending load operation or may also include other operations. When executed, a zero extending load instruction may load a value (e.g., of a certain number of bits) and zero extend that value (e.g., add zeros in addition to the certain number of bits to obtain a larger size). 
For example, a value may be loaded that does not utilize all of the bits of a register and the value may have zeros included in the other bit positions (e.g., in the higher significant bit positions when the value is in the lower significant bit positions) to fill up each bit position of the register. Additionally or alternatively, a fusion manager may detect an instruction (e.g., macro-instruction) that is to produce a result and a store (e.g., write) instruction (e.g., macro-instruction) that is to read the result and fuse them into a single instruction (e.g., macro-instruction). When executed, a store instruction may only perform a store operation or may also include other operations. In one embodiment, when executed, a store instruction performs a store operation but not any other (e.g., arithmetic and/or bitwise logical) operations. In one embodiment, when executed, a store instruction is a move instruction that performs a move operation but not any other (e.g., arithmetic and/or bitwise logical) operations. [0035] In one embodiment, an instruction (e.g., two or more instructions to be fused into a single instruction) operates on scalar data, e.g., not vector data. For example, an instruction may be a single instruction, single data (SISD) instruction, e.g., and not a single instruction, multiple data (SIMD) instruction. In certain embodiments, an instruction may operate on operands in a direct addressing mode and/or an indirect addressing mode. In certain embodiments, an instruction may operate on operands in a register or registers, e.g., where the registers are addressed via a register name (e.g., memory may not be accessed in direct register addressing or memory may be accessed in indirect register addressing). [0036] In one embodiment, a (e.g., dynamic) binary translator may translate an instruction stream (e.g., a section of an instruction stream) into a translated instruction stream.
A binary translator may assign the (e.g., hardware) resources that an instruction may use (e.g., a dynamic binary translator may assign resources during runtime of the processor). For example, a binary translator may assign the particular register or registers of a hardware processor that are to be utilized in executing a stream of instructions, for example, register(s) storing an input and/or register(s) to store a result (e.g., output). A stream of instructions may generally refer to a section of instructions (e.g., a thread). In one embodiment, a stream of instructions is a block of instructions, e.g., a block of less than about 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 35, 40, 45, 50, 75, 100, 200, 300, 400, 500, 1000, etc. instructions. For example, a stream of instructions may be a (e.g., consecutive) block of instructions that does not include each instruction in a software application. In one embodiment, during the translation stage of a binary translation, a fusion manager may produce a stream of binary translated instructions (e.g., chain of instructions) and then detect and fuse a plurality (e.g., two) of those instructions into a single instruction, for example, so a decode unit of a processor may then decode the single instruction. [0037] Figure 1 illustrates a system 100 including a hardware processor 102 having a hardware binary translator 106 and a hardware fusion manager 108 according to embodiments of the disclosure. Hardware binary translator may be a circuit to perform a binary translation. Hardware fusion manager may be a circuit to perform a fusion, e.g., as discussed herein. Code 104 (e.g., software) may be received by the processor 102, e.g., code from a compiler. Code 104 may be in a first format, for example, as discussed above. Code (e.g., a block of code) may be input into hardware (e.g., dynamic) binary translator 106 of processor 102.
Hardware (e.g., dynamic) binary translator 106 may perform a binary translation of the code (e.g., instruction stream) from one format to a second format, for example, as discussed above. The translated code (e.g., instruction stream) from the binary translator 106 may be output to the hardware fusion manager 108. In one embodiment, the processor 102 includes storage (e.g., a buffer) to store the translated code. The fusion manager may view (e.g., scan) the translated code (e.g., instruction stream) and detect multiple instructions of the translated code that may be fused into a single instruction, for example, as discussed further below in reference to Figures 5-6 and 7-8. In one embodiment, a fusion manager may detect certain instructions by viewing (e.g., scanning) the opcode of each instruction, for example, and comparing that opcode to a list of opcodes for instructions that may be combined. Fusion manager may include logic (e.g., a circuit) to detect two or more instructions that may be fused together into a single instruction. Fusion manager may include logic (e.g., a circuit) to fuse two or more instructions into a single instruction, for example, after detection. Example embodiments of detecting and fusing are discussed below in reference to Figures 5-8. In one embodiment, fusion manager may detect which instruction(s) are to overwrite (e.g., destroy) data (e.g., a result) of an instruction, for example, if a first instruction's result(s) (e.g., the result value or the address of the memory location storing the result, which may be used in indirect register addressing) is to be overwritten by a second (e.g., in program order or in the order of execution, which may be in an out-of-order processor) instruction. One or more single, fused instructions (e.g., and other non-fused instructions) may be decoded in the decode unit 110 of the processor 102. 
One or more decoded, single fused instructions (e.g., and other decoded, non-fused instructions) may be executed in the execution unit 112 of the processor 102. [0038] Figure 2 illustrates a system 200 including a hardware processor 202 having a hardware fusion manager 208 according to embodiments of the disclosure. Binary translator may be separate from the processor, e.g., in hardware, software, firmware, or any combination thereof. Binary translator may provide an output of translated code (e.g., a translated instruction stream) to the processor (e.g., the hardware fusion manager 208 of the processor 202). Hardware fusion manager may be a circuit to perform a fusion, e.g., as discussed herein. Code 204 (e.g., software) may be received by the binary translator 206, e.g., code from a compiler. Code 204 may be in a first format, for example, as discussed above. Code (e.g., a block of code) may be input into (e.g., dynamic) binary translator 206. Binary translator 206 may perform a binary translation of the code (e.g., instruction stream) from one format to a second format, for example, as discussed above. The translated code (e.g., instruction stream) from the binary translator 206 may be output to the hardware fusion manager 208 of the processor 202. In one embodiment, the processor 202 includes storage (e.g., a buffer) to store the translated code. The fusion manager may view (e.g., scan) the translated code (e.g., instruction stream) and detect multiple instructions of the translated code that may be fused into a single instruction, for example, as discussed further below in reference to Figures 5-6 and 7-8. In one embodiment, a fusion manager may detect certain instructions by viewing (e.g., scanning) the opcode of each instruction, for example, and comparing that opcode to a list of opcodes for instructions that may be combined.
Fusion manager may include logic (e.g., a circuit) to detect two or more instructions that may be fused together into a single instruction. Fusion manager may include logic (e.g., a circuit) to fuse two or more instructions into a single instruction. For example, detection and fusing embodiments are discussed below in reference to Figures 5-8. In one embodiment, fusion manager may detect which instruction(s) are to overwrite (e.g., destroy) data (e.g., a result) of an instruction, for example, if a first instruction's result(s) (e.g., the result value or the address of the memory location storing the result, which may be used in indirect register addressing) is to be overwritten by a second (e.g., in program order or in the order of execution, which may be in an out-of-order processor) instruction. One or more single, fused instructions (e.g., and other non-fused instructions) may be decoded in the decode unit 210 of the processor 202. One or more decoded, single fused instructions (e.g., and other decoded, non-fused instructions) may be executed in the execution unit 212 of the processor 202. [0039] Figure 3 illustrates a system 300 including a hardware processor 302 according to embodiments of the disclosure. Binary translator may be separate from the processor, e.g., in hardware, software, firmware, or any combination thereof. Binary translator may provide an output of translated code (e.g., a translated instruction stream) to the fusion manager 308. Fusion manager may be separate from the processor, e.g., in hardware, software, firmware, or any combination thereof. Fusion manager may operate according to any of the methods discussed herein. Code 304 (e.g., software) may be received by the binary translator 306, e.g., code from a compiler. Code 304 may be in a first format, for example, as discussed above. Code (e.g., a block of code) may be input into binary translator 306.
Binary translator 306 may perform a binary translation of the code (e.g., instruction stream) from one format to a second format, for example, as discussed above. The translated code (e.g., instruction stream) from the binary translator 306 may be output to the fusion manager 308. In one embodiment, the system 300 includes storage (e.g., a buffer) to store the translated code. The fusion manager may view (e.g., scan) the translated code (e.g., instruction stream) and detect multiple instructions of the translated code that may be fused into a single instruction, for example, as discussed further below in reference to Figures 5-6 and 7-8. In one embodiment, a fusion manager may detect certain instructions by viewing (e.g., scanning) the opcode of each instruction, for example, and comparing that opcode to a list of opcodes for instructions that may be combined. Fusion manager may include logic (e.g., a circuit) to detect two or more instructions that may be fused together into a single instruction. Fusion manager may include logic (e.g., a circuit) to fuse two or more instructions into a single instruction. For example, detection and fusing embodiments are discussed below in reference to Figures 5-8. In one embodiment, fusion manager may detect which instruction(s) are to overwrite (e.g., destroy) data (e.g., a result) of an instruction, for example, if a first instruction's result(s) (e.g., the result value or the address of the memory location storing the result, which may be used in indirect register addressing) is to be overwritten by a second (e.g., in program order or in the order of execution, which may be in an out-of-order processor) instruction. One or more single, fused instructions (e.g., and other non-fused instructions) may be decoded in the decode unit 310 of the processor 302. One or more decoded, single fused instructions (e.g., and other decoded, non-fused instructions) may be executed in the execution unit 312 of the processor 302. 
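The translate, fuse, decode, and execute flow common to Figures 1-3 can be sketched with a toy pipeline (the stage functions, opcode names, and the adjacency-only fusion rule are hypothetical simplifications; the safety checks a real fusion manager applies are discussed below in reference to Figures 5-8):

```python
# Toy sketch of the translate -> fuse flow shared by Figures 1-3.

def translate(code):
    # Stand-in for the binary translator: map source-format instructions
    # to target-format instructions one for one.
    return [("t_" + op, args) for op, args in code]

def fuse(stream):
    # Stand-in for the fusion manager: merge an adjacent load/add pair
    # into one fused instruction (real detection also checks readers,
    # writers, and control flow, as described later).
    out, i = [], 0
    while i < len(stream):
        if (i + 1 < len(stream)
                and stream[i][0] == "t_load"
                and stream[i + 1][0] == "t_add"):
            out.append(("t_load_add", stream[i][1] + stream[i + 1][1]))
            i += 2
        else:
            out.append(stream[i])
            i += 1
    return out

code = [("load", ["mem0"]), ("add", ["r1"]), ("store", ["mem1"])]
fused = fuse(translate(code))
assert len(fused) == 2              # three instructions became two
assert fused[0][0] == "t_load_add"  # the fused instruction
```

The fused stream would then be handed to the decode unit and execution unit as with any other instruction.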
In one embodiment, a fusion manager may perform a fusion operation before any decoding (e.g., by the decode unit) and/or execution (e.g., by the execution unit). In one embodiment, the (e.g., dynamic) fusion manager may be located after the decode unit. [0040] Figure 4 illustrates a flow diagram 400 of a fusion operation according to embodiments of the disclosure. Flow diagram 400 includes translating an (e.g., macro-instruction) instruction stream into a translated (e.g., macro-instruction) instruction stream with a binary translator 402, fusing multiple (e.g., macro-instruction) instructions of the translated (e.g., macro-instruction) instruction stream into a single fused (e.g., macro-instruction) instruction with a fusion manager 404, decoding the single fused (e.g., macro-instruction) instruction into a decoded, single fused instruction with a hardware decode unit of a hardware processor 406, and executing the decoded, single fused (e.g., macro-instruction) instruction with a hardware execution unit of the hardware processor 408. [0041] Figure 5 illustrates pseudocode 500 of a fusion operation according to embodiments of the disclosure. In one embodiment, each and/or all sections (514, 516, 518) of pseudocode may be a software routine or performed by a hardware circuit (e.g., logic). Pseudocode 500 illustrates one example of pseudocode that a fusion manager may utilize in the detection and/or fusing of instructions (e.g., macro-instructions). For example, data may be stored (e.g., in memory) at a different size (e.g., number of bits) than what is used by (e.g., ALU) operations on the data, for example, data may be stored as a single byte (8 bits), but then extended to multiple bytes, e.g., 4 bytes (32 bits) or 8 bytes (64 bits), before other operations are performed on the data.
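The size widening just described (e.g., a byte stored in memory, extended to 32 or 64 bits before ALU use) can be sketched as follows (a minimal model; the bit widths are illustrative, and sign extension is shown only for contrast):

```python
def zero_extend(value, src_bits):
    # Keep only the low src_bits; the higher bit positions of the wider
    # register are filled with zeros, so the unsigned value is unchanged.
    return value & ((1 << src_bits) - 1)

def sign_extend(value, src_bits):
    # For contrast: replicate the sign bit instead of inserting zeros,
    # which preserves the signed interpretation of the value.
    value &= (1 << src_bits) - 1
    sign_bit = 1 << (src_bits - 1)
    return value - (1 << src_bits) if value & sign_bit else value

assert zero_extend(0xFF, 8) == 0xFF   # 8-bit 0xFF widened with zeros
assert sign_extend(0xFF, 8) == -1     # the same bits sign extended
```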
Thus, code (e.g., an instruction stream) may include a zero extending load instruction to load a value of data (e.g., from a register or other memory) and zero-extend it, e.g., to the full register width where it is to be loaded. The zero extending may include filling the non-used (e.g., empty) bits (e.g., most significant bits) with zeros, e.g., so that the value of the extended data remains the same as the non-extended value. For example, a fusion manager may detect an instruction to perform an arithmetic and/or bitwise logical operation, e.g., utilizing an arithmetic logic unit (ALU) of a processor. In certain embodiments, an (for example, subsequent, e.g., subsequent in program order) instruction may perform an arithmetic and/or bitwise logical operation (e.g., utilizing an arithmetic logic unit (ALU) of a processor) on the zero extended result. This may allow the arithmetic and/or bitwise logical operation to use the full machine (e.g., processor) register width. In one embodiment, a zero extending load instruction includes a field (e.g., opcode or operand) that indicates the size of the value before and/or after extension. In one embodiment, a first type of zero extending load instruction extends to a first (e.g., fixed) size (e.g., to 32 bits). In one embodiment, a second type of zero extending load instruction extends to a second, different (e.g., fixed) size (e.g., to 64 bits). [0042] In one embodiment, a fusion manager may utilize pseudocode 500 to detect a zero extending load instruction and an instruction that is to read a result of the zero extending load instruction in the translated instruction stream (e.g., an instruction to perform an arithmetic and/or bitwise logical operation), and fuse the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into a single fused instruction.
For example, fusion manager may utilize (e.g., a circuit to implement) first routine 514, second routine 516, and/or third routine 518. Note that the term next writer may refer to the next writer in program order or in the order of execution (e.g., in an out-of-order processor). [0043] Figure 6 illustrates an input instruction stream 601 before a fusion operation and an output instruction stream 603 after the fusion operation according to embodiments of the disclosure. In Figures 6 and 8, the destination register appears as the rightmost field of each instruction (e.g., after each source field); however, other formats may be used (e.g., with the destination register being the leftmost operand field). In Figures 6 and 8, the % sign may indicate a register (e.g., and not a modulo operation) and parentheses around a register name may indicate an indirect mode of addressing, e.g., indirect register addressing where the address to be accessed is stored as the value in the named register. The register names used are examples and other register identifiers may be utilized. In Figures 6 and 8, the term between when referring to two instructions may generally refer to between in program order or in the order of execution (e.g., in an in order or out-of-order processor). In one embodiment, a fused instruction includes an additional (e.g., source) field compared to either of the unfused instructions.
In certain embodiments, the fused instruction specifies the two sources for the first half of the fused operation and a third source for the second half of the fused operation, for example, the second half of the fused operation obtains its other source (e.g., implicitly) from the result of the first half. [0044] Note that Figure 6 schematically illustrates five different fusion operations, thus reducing the input instruction stream 601 by 5 instructions relative to the output instruction stream 603 in the depicted embodiment.
Example for fusing instructions 01 and 11 in the input instruction stream 601 according to the pseudocode in Figure 5:
Fusion manager (e.g., routine 514) first may examine instruction 01 in input instruction stream 601 and detect that it is a load, and also a zero extending load. Fusion manager may go to (e.g., function) ALU_search 516 (from I.A.1.a) with I equal to instruction 01 in input instruction stream 601. Therefore at line II thereof, Load_Dest is equal to register %edi. At line III, K becomes instruction 11 in input instruction stream 601 since this is the first instruction after 01 that reads register %edi. In one embodiment, the fusion manager does not continue a fusion operation if it detects a control flow instruction (e.g., branch) therebetween, for example, between 01 and 11 in the input instruction stream 601 in this example. Instruction 11 is in the category of fuseable instructions of fusion manager, so fusion manager continues to line IV.A of routine 516 where the next writer (e.g., that is to overwrite the contents) of register %edi (X) identified is also instruction 11 in the input instruction stream 601. At line IV.C, fusion manager verifies that there are no other readers of %edi other than K between I (instruction 01 in input instruction stream 601) and X (instruction 11 in input instruction stream 601) and thus the fusion manager determines I and K may be fused, e.g., with the fused instruction being instruction 08 in output instruction stream 603. For example, in one embodiment, instruction 11 reads both %edi and %ebx and the other instruction to be fused (instruction 01) reads memory location 0x1(%ebp) and register %edi. Thus instead of explicitly requiring all four sources in this example, the two sources of the first half (0x1(%ebp) and %edi) are listed explicitly and the result of this operation is supplied implicitly to the second half of the fused operation along with %ebx.
Example for fusing instructions 05 and 06 in the input instruction stream 601 according to the pseudocode in Figure 5:
Fusion manager (e.g., routine 514) first may examine instruction 05 and detect that it is a load, and also a zero extending load. Fusion manager may go to (e.g., function) ALU_search 516 (from I.A.1.a) with I equal to instruction 05 in input instruction stream 601. Therefore at line II thereof, Load_Dest is equal to register %ecx. At line III, K becomes instruction 06 in input instruction stream 601 since this is the first instruction after 05 that reads register %ecx. In one embodiment, the fusion manager does not continue a fusion operation if it detects a control flow instruction (e.g., branch) therebetween, for example, between 05 and 06 in the input instruction stream 601 in this example. Instruction 06 is in the category of fuseable instructions of fusion manager, so fusion manager continues to line IV.A of routine 516 where the next writer (e.g., that is to overwrite the contents) of %ecx (X) identified is instruction 07 in the input instruction stream 601. At line IV.C, fusion manager verifies that there are no readers of register %ecx other than K between I (instruction 05 in the input instruction stream 601) and X (instruction 07 in the input instruction stream 601) and X also does not read %ecx (e.g., it only writes it) and thus the fusion manager determines I and K may be fused, e.g., with the fused instruction being instruction 04 in output instruction stream 603. In one embodiment, two instructions are considered in the category of fuseable instructions when there is a performance benefit to doing so, e.g., where the single, fused instruction will execute to completion faster than (e.g., sequential) execution of the two unfused instructions. [0045] Instructions 08 and 09 of input instruction stream 601 may similarly be fused into single instruction 06 of the output instruction stream 603, e.g., according to the pseudocode in Figure 5. In the embodiment in Figure 6, the next writer of register %esi (e.g., esi) after instruction 08 of input instruction stream 601 is instruction 22 in input instruction stream 601. [0046] Instructions 12 and 15 of input instruction stream 601 may similarly be fused into single instruction 09 of the output instruction stream 603, e.g., according to the pseudocode in Figure 5. In the embodiment in Figure 6, the next writer of register %ebx (e.g., ebx) after instruction 12 of input instruction stream 601 is instruction 19 in input instruction stream 601. [0047] Instructions 14 and 17 of input instruction stream 601 may similarly be fused into single instruction 12 of the output instruction stream 603, e.g., according to the pseudocode in Figure 5.
In the embodiment in Figure 6, the next writer of register %edx (e.g., edx) after instruction 14 of input instruction stream 601 is instruction 18 in input instruction stream 601.
Example for not fusing instructions 13 and 16 in the input instruction stream 601 according to the pseudocode in Figure 5:
Fusion manager (e.g., routine 514) first may examine instruction 13 and detect that it is a load, and also a zero extending load. Fusion manager may go to (e.g., function) ALU_search 516 (from I.A.1.a) with I equal to instruction 13 in input instruction stream 601. Therefore at line II thereof, Load_Dest is equal to register %ecx. At line III, K becomes instruction 16 in input instruction stream 601 since this is the first instruction after 13 that reads register %ecx. In one embodiment, the fusion manager does not continue a fusion operation if it detects a control flow instruction (e.g., branch) therebetween, for example, between 13 and 16 in the input instruction stream 601 in this example. Instruction 16 is in the category of fuseable instructions of fusion manager, so fusion manager continues to line IV.A of routine 516, but no next writer (e.g., that is to overwrite the contents) of %ecx (X) is found (e.g., in the input instruction stream). As such, the fusion manager determines that instructions 13 and 16 of the input instruction stream 601 are not fuseable in this embodiment, for example, because fusion manager may not detect (e.g., guarantee) some other instruction will not utilize (e.g., read) that value in %ecx. [0048] Figure 7 illustrates pseudocode 700 of a fusion operation according to embodiments of the disclosure. In one embodiment, part and/or all of section 714 of pseudocode may be a software routine or performed by a hardware circuit (e.g., logic). Pseudocode 700 illustrates one example of pseudocode that a fusion manager may utilize in the detection and/or fusing of instructions (e.g., macro-instructions).
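The load/ALU detection rule walked through above (find a zero extending load I, its first reader K, and a next writer X of the load destination, then verify no other instruction reads the destination before X) can be sketched as follows. This is a minimal model, not the pseudocode of Figure 5 itself: the three-tuple instruction encoding (op, registers read, registers written) and the op names are hypothetical:

```python
# Sketch of the detection rule: a zero extending load I and the first ALU
# reader K of its destination may fuse only if a later writer X of that
# destination exists and no instruction other than K reads it before X.

ALU_OPS = {"add", "sub", "and", "or", "xor"}

def can_fuse(stream, i):
    op, reads, writes = stream[i]
    if op != "movzx_load":                  # I must be a zero extending load
        return None
    dest = writes[0]
    k = next((j for j in range(i + 1, len(stream))
              if dest in stream[j][1]), None)    # first reader K of dest
    if k is None or stream[k][0] not in ALU_OPS:
        return None
    if any(stream[j][0] == "branch" for j in range(i + 1, k)):
        return None                         # no control flow between I and K
    x = next((j for j in range(i + 1, len(stream))
              if dest in stream[j][2]), None)    # next writer X of dest
    if x is None or x < k:
        return None                         # value may still be live: no fuse
    for j in range(i + 1, x + 1):
        if j != k and dest in stream[j][1]:
            return None                     # another reader would lose the value
    return k                                # I and K may be fused

# (op, registers read, registers written)
stream = [
    ("movzx_load", ["ebp"], ["ecx"]),   # I: zero extending load into %ecx
    ("add", ["ecx", "ebx"], ["ebx"]),   # K: first reader of %ecx
    ("movzx_load", ["ebp"], ["ecx"]),   # X: next writer of %ecx
]
assert can_fuse(stream, 0) == 1         # fuses: X overwrites, no other reader
assert can_fuse(stream[:2], 0) is None  # like 13/16 in Figure 6: no next writer
```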
In one embodiment, a fusion manager may use pseudocode 700 to detect (e.g., in the instruction stream) an (e.g., macro-instruction) instruction that is to produce a result and a (for example, subsequent, e.g., subsequent in program order) store (e.g., macro-instruction) instruction that is to read the result, and fuse the (e.g., macro-instruction) instruction that is to produce the result and the store (e.g., macro-instruction) instruction that is to read the result into the single fused (e.g., macro-instruction) instruction. For example, a fusion manager may detect an instruction to perform an arithmetic and/or bitwise logical operation, e.g., utilizing an arithmetic logic unit (ALU) of a processor, that is to produce a result. In certain embodiments, an instruction may perform an arithmetic and/or bitwise logical operation (e.g., utilizing an arithmetic logic unit (ALU) of a processor) to produce a result and the result may be used in a (for example, subsequent, e.g., subsequent in program order) store (e.g., move) instruction. [0049] Figure 8 illustrates an input instruction stream 801 before a fusion operation and an output instruction stream 803 after the fusion operation according to embodiments of the disclosure. Figure 8 schematically illustrates one fusion operation, thus reducing the input instruction stream 801 by one instruction relative to the output instruction stream 803 in the depicted embodiment.
Example for fusing instructions 01 and 03 in the input instruction stream 801 according to the pseudocode in Figure 7:
[0050] Fusion manager (e.g., routine 714) first may examine instruction 03 in input instruction stream 801 and detect that it is a store (K becomes instruction 03 in the input instruction stream 801). At line I.A.1, Store_Source (e.g., the value to be stored into memory) is located in register %ebx. At line I.A.2, I becomes instruction 01 in input instruction stream 801 since that instruction sets the value of Store_Source (e.g., in register %ebx in this example). In one embodiment, the fusion manager does not continue a fusion operation if it detects a control flow instruction (e.g., branch) therebetween, for example, between 01 and 03 in the input instruction stream 801 in this example. Instruction 02 reads Store_Source (%ebx), so the condition at I.A.3.a is false and the fusion manager then performs the additional check described in I.A.3.b. In this example, instruction 02 in the input instruction stream 801 is the only such J instruction and it may be relocated (e.g., rescheduled) to after K (instruction 03 in the input instruction stream 801) in the output instruction stream 803 as the relocating (e.g., rescheduling) will not affect the values read by instructions 02 and/or 03 in the input instruction stream 801. Thus, at line I.A.3.b.i, the fusion manager may fuse instructions 01 and 03 of the input instruction stream 801 into single, fused instruction 01 of the output instruction stream 803, e.g., which effectively moves instruction 02 of the input instruction stream 801 for execution after the operation in instruction 03 of the input instruction stream 801. [0051] As another example, the fusion manager would not fuse instructions 01 and 03 of the input instruction stream 801 if an instruction J (e.g., instruction 02 of the input instruction stream 801 in Figure 8) between instruction I (e.g., instruction 01 of the input instruction stream 801 in Figure 8) and instruction K (e.g., instruction 03 of the input instruction stream 801 in Figure 8) could not be relocated (e.g., rescheduled), for example, if the destination register (e.g., %edx here) of instruction 02 (J) of the input instruction stream 801 was to be read by instruction 03 (K). [0052] Note that although Figure 8 and Figure 6 are separate, the fusion operations (e.g., according to pseudocode 500 and
pseudocode 700) may occur on the same input instruction stream (e.g., simultaneously). [0053] Certain embodiments herein allow reduced instruction set computing (RISC) binary translator ISA instructions to achieve the brevity and density of complex instruction set computing (CISC) (e.g., x86) macro-instructions, for example, utilizing the existing decoding logic of a processor. The density improvement may be achieved via macro-instruction reduction, which also may improve allocate, rename, and/or retire bandwidth. Certain embodiments may include binary translator ISA instructions that support instructions with 3 operands, e.g., two sources and one destination. This may allow certain fusion operations (e.g., an ALU and store fusion) where the memory operand is not a source operand. [0054] In one embodiment, a hardware processor includes a hardware binary translator to translate an instruction stream into a translated instruction stream, a hardware fusion manager to fuse multiple instructions of the translated instruction stream into a single fused instruction, a hardware decode unit to decode the single fused instruction into a decoded, single fused instruction, and a hardware execution unit to execute the decoded, single fused instruction. The hardware fusion manager may detect a zero extending load instruction and an instruction that is to read a result of the zero extending load instruction in the translated instruction stream, and fuse the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction. The hardware fusion manager may not fuse the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction unless a later instruction that is to overwrite the result of the zero extending load instruction is detected.
The hardware fusion manager may not fuse the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction if the hardware fusion manager detects any additional instruction of the translated instruction stream between the zero extending load instruction and a later instruction that is to overwrite and not read the result of the zero extending load instruction, other than the instruction that is to read the result of the zero extending load instruction, that is also to read the result of the zero extending load instruction. The hardware fusion manager may detect, in the translated instruction stream, an instruction that is to produce a result and a store instruction that is to read the result, and fuse the instruction that is to produce the result and the store instruction that is to read the result into the single fused instruction. The hardware fusion manager may not fuse the instruction that is to produce the result and the store instruction that is to read the result if the hardware fusion manager detects any instruction of the translated instruction stream between the instruction that is to produce the result and the store instruction that is to read the result that is also to read the result. The hardware fusion manager may not fuse the instruction that is to produce the result and the store instruction that is to read the result if the hardware fusion manager detects: any instruction of the translated instruction stream that is also to read the result between the instruction that is to produce the result and the store instruction that is to read the result, and/or the single fused instruction is to overwrite the result. 
The instruction stream may be a stream of macro-instructions. [0055] In another embodiment, a method includes translating an instruction stream into a translated instruction stream with a binary translator, fusing multiple instructions of the translated instruction stream into a single fused instruction with a fusion manager, decoding the single fused instruction into a decoded, single fused instruction with a hardware decode unit of a hardware processor, and executing the decoded, single fused instruction with a hardware execution unit of the hardware processor. The method may include detecting a zero extending load instruction and an instruction that is to read a result of the zero extending load instruction in the translated instruction stream, and fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction. The method may include not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction unless a later instruction that is to overwrite the result of the zero extending load instruction is detected. The method may include not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction if the fusion manager detects any additional instruction of the translated instruction stream between the zero extending load instruction and a later instruction that is to overwrite and not read the result of the zero extending load instruction, other than the instruction that is to read the result of the zero extending load instruction, that is also to read the result of the zero extending load instruction.
The method may include detecting, in the translated instruction stream, an instruction that is to produce a result and a store instruction that is to read the result, and fusing the instruction that is to produce the result and the store instruction that is to read the result into the single fused instruction. The method may include not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects any instruction of the translated instruction stream between the instruction that is to produce the result and the store instruction that is to read the result that is also to read the result. The method may include not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects: any instruction of the translated instruction stream that is also to read the result between the instruction that is to produce the result and the store instruction that is to read the result, and/or the single fused instruction is to overwrite the result. The instruction stream may be a stream of macro-instructions.[0056] In yet another embodiment, a non-transitory machine readable medium that stores code that when executed by a machine causes the machine to perform a method including translating an instruction stream into a translated instruction stream with a binary translator, fusing multiple instructions of the translated instruction stream into a single fused instruction with a fusion manager, decoding the single fused instruction into a decoded, single fused instruction, and executing the decoded, single fused instruction. 
The method may include detecting a zero extending load instruction and an instruction that is to read a result of the zero extending load instruction in the translated instruction stream, and fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction. The method may include not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction into the single fused instruction unless a later instruction that is to overwrite the result of the zero extending load instruction is detected. The method may include not fusing the zero extending load instruction and the instruction that is to read the result of the zero extending load instruction if the fusion manager detects any additional instruction of the translated instruction stream between the zero extending load instruction and a later instruction that is to overwrite and not read the result of the zero extending load instruction, other than the instruction that is to read the result of the zero extending load instruction, that is also to read the result of the zero extending load instruction. The method may include detecting, in the translated instruction stream, an instruction that is to produce a result and a store instruction that is to read the result, and fusing the instruction that is to produce the result and the store instruction that is to read the result into the single fused instruction. The method may include not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects any instruction of the translated instruction stream between the instruction that is to produce the result and the store instruction that is to read the result that is also to read the result. 
The method may include not fusing the instruction that is to produce the result and the store instruction that is to read the result if the fusion manager detects: any instruction of the translated instruction stream that is also to read the result between the instruction that is to produce the result and the store instruction that is to read the result, and/or the single fused instruction is to overwrite the result. The instruction stream may be a stream of macro-instructions.[0057] In another embodiment, an apparatus includes means to translate an instruction stream into a translated instruction stream with a binary translator, means to fuse multiple instructions of the translated instruction stream into a single fused instruction with a fusion manager, means to decode the single fused instruction into a decoded, single fused instruction, and/or means to execute the decoded, single fused instruction. [0058] In yet another embodiment, an apparatus comprises a data storage device that stores code that when executed by a hardware processor causes the hardware processor to perform any method disclosed herein. An apparatus may be as described in the detailed description. A method may be as described in the detailed description.[0059] An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). 
For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2015; and see Intel® Architecture Instruction Set Extensions Programming Reference, August 2015).Exemplary Instruction Formats[0060] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.Generic Vector Friendly Instruction Format [0061] A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). 
While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.[0062] Figures 9A-9B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the disclosure. Figure 9A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the disclosure; while Figure 9B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the disclosure. Specifically, a generic vector friendly instruction format 900 is shown for which class A and class B instruction templates are defined, both of which include no memory access 905 instruction templates and memory access 920 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.[0063] While embodiments of the disclosure will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, less and/or different vector 
operand sizes (e.g., 256 byte vector operands) with more, less, or different data element widths (e.g., 128 bit (16 byte) data element widths).[0064] The class A instruction templates in Figure 9A include: 1) within the no memory access 905 instruction templates there is shown a no memory access, full round control type operation 910 instruction template and a no memory access, data transform type operation 915 instruction template; and 2) within the memory access 920 instruction templates there is shown a memory access, temporal 925 instruction template and a memory access, non-temporal 930 instruction template. The class B instruction templates in Figure 9B include: 1) within the no memory access 905 instruction templates there is shown a no memory access, write mask control, partial round control type operation 912 instruction template and a no memory access, write mask control, vsize type operation 917 instruction template; and 2) within the memory access 920 instruction templates there is shown a memory access, write mask control 927 instruction template.[0065] The generic vector friendly instruction format 900 includes the following fields listed below in the order illustrated in Figures 9A-9B.[0066] Format field 940 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.[0067] Base operation field 942 - its content distinguishes different base operations.[0068] Register index field 944 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g. 
32x512, 16x128, 32x1024, 64x1024) register file. While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or less sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).[0069] Modifier field 946 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 905 instruction templates and memory access 920 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non- memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, less, or different ways to perform memory address calculations. [0070] Augmentation operation field 950 - its content distinguishes which one of a variety of different operations to be performed in addition to the base operation. This field is context specific. In one embodiment of the disclosure, this field is divided into a class field 968, an alpha field 952, and a beta field 954. 
The augmentation operation field 950 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.[0071] Scale field 960 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).[0072] Displacement Field 962A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).[0073] Displacement Factor Field 962B (note that the juxtaposition of displacement field 962A directly over displacement factor field 962B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 974 (described later herein) and the data manipulation field 954C. The displacement field 962A and the displacement factor field 962B are optional in the sense that they are not used for the no memory access 905 instruction templates and/or different embodiments may implement only one or none of the two.[0074] Data element width field 964 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). 
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.[0075] Write mask field 970 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 970 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. 
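The merging and zeroing behaviors just described can be shown element-wise. In this sketch, plain Python lists stand in for the destination vector, the operation result, and the mask; it is an illustrative model of the semantics, not the disclosed hardware.

```python
def apply_write_mask(result, dest_old, mask, zeroing):
    """Per data element position masking: where the mask bit is 1 the
    operation result is written; where it is 0 the destination element
    either keeps its old value (merging) or is set to zero (zeroing)."""
    out = []
    for r, d, m in zip(result, dest_old, mask):
        if m:
            out.append(r)
        elif zeroing:
            out.append(0)       # zeroing-writemasking
        else:
            out.append(d)       # merging preserves the old element
    return out
```

Note that the unmasked elements need not be consecutive, matching the observation above that partial vector operations are not restricted to contiguous spans.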
While embodiments of the disclosure are described in which the write mask field's 970 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 970 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the mask write field's 970 content to directly specify the masking to be performed.[0076] Immediate field 972 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.[0077] Class field 968 - its content distinguishes between different classes of instructions. With reference to Figures 9A-B, the contents of this field select between class A and class B instructions. In Figures 9A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 968A and class B 968B for the class field 968 respectively in Figures 9A-B).Instruction Templates of Class A[0078] In the case of the non-memory access 905 instruction templates of class A, the alpha field 952 is interpreted as an RS field 952A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 952A.1 and data transform 952A.2 are respectively specified for the no memory access, round type operation 910 and the no memory access, data transform type operation 915 instruction templates), while the beta field 954 distinguishes which of the operations of the specified type is to be performed. 
In the no memory access 905 instruction templates, the scale field 960, the displacement field 962A, and the displacement scale field 962B are not present.No-Memory Access Instruction Templates - Full Round Control Type Operation[0079] In the no memory access full round control type operation 910 instruction template, the beta field 954 is interpreted as a round control field 954A, whose content(s) provide static rounding. While in the described embodiments of the disclosure the round control field 954A includes a suppress all floating point exceptions (SAE) field 956 and a round operation control field 958, alternative embodiments may encode both of these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 958).[0080] SAE field 956 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 956 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.[0081] Round operation control field 958 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 958 allows for the changing of the rounding mode on a per instruction basis. 
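The four rounding operations named above behave as follows on scalar values. This sketch assumes round-to-nearest means ties-to-even, as in the usual IEEE 754 default; the disclosure does not specify the tie-breaking rule here.

```python
import math

def round_with_mode(x, mode):
    """Illustrate the four per-instruction rounding modes named in the
    text: round-up, round-down, round-towards-zero, round-to-nearest."""
    if mode == "up":
        return math.ceil(x)
    if mode == "down":
        return math.floor(x)
    if mode == "toward-zero":
        return math.trunc(x)
    if mode == "nearest":
        return round(x)  # Python's round() breaks ties to even
    raise ValueError(mode)
```

The modes differ only on values that fall between representable results; for example, -1.5 rounds to -1, -2, -1, or -2 under the four modes respectively.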
In one embodiment of the disclosure where a processor includes a control register for specifying rounding modes, the round operation control field's 958 content overrides that register value.No Memory Access Instruction Templates - Data Transform Type Operation[0082] In the no memory access data transform type operation 915 instruction template, the beta field 954 is interpreted as a data transform field 954B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).[0083] In the case of a memory access 920 instruction template of class A, the alpha field 952 is interpreted as an eviction hint field 952B, whose content distinguishes which one of the eviction hints is to be used (in Figure 9A, temporal 952B.1 and non-temporal 952B.2 are respectively specified for the memory access, temporal 925 instruction template and the memory access, non-temporal 930 instruction template), while the beta field 954 is interpreted as a data manipulation field 954C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 920 instruction templates include the scale field 960, and optionally the displacement field 962A or the displacement scale field 962B.[0084] Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.Memory Access Instruction Templates - Temporal[0085] Temporal data is data likely to be reused soon enough to benefit from caching. 
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.Memory Access Instruction Templates - Non-Temporal[0086] Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.Instruction Templates of Class B[0087] In the case of the instruction templates of class B, the alpha field 952 is interpreted as a write mask control (Z) field 952C, whose content distinguishes whether the write masking controlled by the write mask field 970 should be a merging or a zeroing.[0088] In the case of the non-memory access 905 instruction templates of class B, part of the beta field 954 is interpreted as an RL field 957A, whose content distinguishes which one of the different augmentation operation types are to be performed (e.g., round 957A.1 and vector length (VSIZE) 957A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 912 instruction template and the no memory access, write mask control, VSIZE type operation 917 instruction template), while the rest of the beta field 954 distinguishes which of the operations of the specified type is to be performed. 
In the no memory access 905 instruction templates, the scale field 960, the displacement field 962A, and the displacement scale field 962B are not present.[0089] In the no memory access, write mask control, partial round control type operation 912 instruction template, the rest of the beta field 954 is interpreted as a round operation field 959A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).[0090] Round operation control field 959A - just as round operation control field 958, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 959A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the disclosure where a processor includes a control register for specifying rounding modes, the round operation control field's 959A content overrides that register value.[0091] In the no memory access, write mask control, VSIZE type operation 917 instruction template, the rest of the beta field 954 is interpreted as a vector length field 959B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).[0092] In the case of a memory access 920 instruction template of class B, part of the beta field 954 is interpreted as a broadcast field 957B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 954 is interpreted as the vector length field 959B. 
The memory access 920 instruction templates include the scale field 960, and optionally the displacement field 962A or the displacement scale field 962B.[0093] With regard to the generic vector friendly instruction format 900, a full opcode field 974 is shown including the format field 940, the base operation field 942, and the data element width field 964. While one embodiment is shown where the full opcode field 974 includes all of these fields, the full opcode field 974 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 974 provides the operation code (opcode). [0094] The augmentation operation field 950, the data element width field 964, and the write mask field 970 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.[0095] The combination of write mask field and data element width field create typed instructions in that they allow the mask to be applied based on different data element widths.[0096] The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the disclosure, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the disclosure). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support a different class. 
For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core, may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the disclosure. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.Exemplary Specific Vector Friendly Instruction Format[0097] Figure 10 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the disclosure. Figure 10 shows a specific vector friendly instruction format 1000 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 1000 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extension thereof (e.g., AVX). 
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 9 into which the fields from Figure 10 map are illustrated.[0098] It should be understood that, although embodiments of the disclosure are described with reference to the specific vector friendly instruction format 1000 in the context of the generic vector friendly instruction format 900 for illustrative purposes, the disclosure is not limited to the specific vector friendly instruction format 1000 except where claimed. For example, the generic vector friendly instruction format 900 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 1000 is shown as having fields of specific sizes. By way of specific example, while the data element width field 964 is illustrated as a one bit field in the specific vector friendly instruction format 1000, the disclosure is not so limited (that is, the generic vector friendly instruction format 900 contemplates other sizes of the data element width field 964).[0099] The specific vector friendly instruction format 1000 includes the following fields listed below in the order illustrated in Figure 10A.[0100] EVEX Prefix (Bytes 0-3) 1002 - is encoded in a four-byte form.[0101] Format Field 940 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 940 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the disclosure).[0102] The second-fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.[0103] REX field 1005 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX byte 1, bit [6] - X), and an EVEX.B bit field (EVEX byte 1, bit [5] - B). 
The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.[0104] REX' field 910 - this is the first part of the REX' field 910 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the disclosure, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the disclosure do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.[0105] Opcode map field 1015 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).[0106] Data element width field 964 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W. 
EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).[0107] EVEX.vvvv 1020 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 1020 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.[0108] EVEX.U 968 Class field (EVEX byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.[0109] Prefix encoding field 1025 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). 
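The runtime expansion of the 2-bit prefix encoding field into a legacy SIMD prefix can be sketched as a table lookup. The pp-to-prefix mapping used below (00 = none, 01 = 66H, 10 = F3H, 11 = F2H) is taken from the published EVEX definition, not stated in this passage, so treat it as an assumption of the sketch.

```python
def expand_simd_prefix(pp):
    """Expand the 2-bit EVEX prefix encoding field back into the legacy
    SIMD prefix byte it compacts (None when no prefix is implied).
    Mapping per the published EVEX definition (an assumption here)."""
    return {0b00: None, 0b01: 0x66, 0b10: 0xF3, 0b11: 0xF2}[pp]
```

This models the expansion step performed before the bits reach the decoder's PLA, letting one PLA serve both the legacy and EVEX encodings.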
Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.[0110] Alpha field 952 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.[0111] Beta field 954 (EVEX byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.[0112] REX' field 910 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.[0113] Write mask field 970 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the disclosure, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).[0114] Real Opcode Field 1030 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.[0115] MOD R/M Field 1040 (Byte 5) includes MOD field 1042, Reg field 1044, and R/M field 1046. As previously described, the MOD field's 1042 content distinguishes between memory access and non-memory access operations.
The role of Reg field 1044 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 1046 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.[0116] Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 950 content is used for memory address generation. SIB.xxx 1054 and SIB.bbb 1056 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.[0117] Displacement field 962A (Bytes 7-10) - when MOD field 1042 contains 10, bytes 7-10 are the displacement field 962A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.[0118] Displacement factor field 962B (Byte 7) - when MOD field 1042 contains 01, byte 7 is the displacement factor field 962B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 962B is a reinterpretation of disp8; when using displacement factor field 962B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range).
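The disp8*N reinterpretation just described can be sketched as follows. This is an illustrative Python model (helper names are hypothetical): the encoded byte is sign-extended exactly like a legacy disp8 and then scaled by the memory operand size N.

```python
# Illustrative sketch of disp8*N: the encoded 8-bit factor is
# sign-extended, then scaled by the memory operand access size N,
# so one byte covers N times the range of a plain disp8.

def disp8n_to_displacement(disp8: int, n: int) -> int:
    """Decode an encoded disp8 byte into the actual byte displacement."""
    if disp8 >= 0x80:          # sign-extend the 8-bit value
        disp8 -= 0x100
    return disp8 * n

def displacement_to_disp8n(displacement: int, n: int):
    """Encode a byte displacement, or return None if it does not fit."""
    if displacement % n != 0:
        return None            # not a multiple of the access granularity
    factor = displacement // n
    if not -128 <= factor <= 127:
        return None            # out of disp8 range even after scaling
    return factor & 0xFF

# With a 64-byte operand (N=64), one byte spans +/- 8 KiB of offsets:
assert disp8n_to_displacement(0x01, 64) == 64
assert disp8n_to_displacement(0xFF, 64) == -64
assert displacement_to_disp8n(4096, 64) == 64
assert displacement_to_disp8n(100, 64) is None  # not a multiple of N
```

The encoder half also makes the underlying assumption explicit: a displacement that is not a multiple of N cannot be expressed and must fall back to disp32.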
Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 962B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 962B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). Immediate field 972 operates as previously described. Full Opcode Field[0119] Figure 10B is a block diagram illustrating the fields of the specific vector friendly instruction format 1000 that make up the full opcode field 974 according to one embodiment of the disclosure. Specifically, the full opcode field 974 includes the format field 940, the base operation field 942, and the data element width (W) field 964. The base operation field 942 includes the prefix encoding field 1025, the opcode map field 1015, and the real opcode field 1030. Register Index Field[0120] Figure 10C is a block diagram illustrating the fields of the specific vector friendly instruction format 1000 that make up the register index field 944 according to one embodiment of the disclosure.
Specifically, the register index field 944 includes the REX field 1005, the REX' field 1010, the MODR/M.reg field 1044, the MODR/M.r/m field 1046, the VVVV field 1020, the xxx field 1054, and the bbb field 1056. Augmentation Operation Field[0121] Figure 10D is a block diagram illustrating the fields of the specific vector friendly instruction format 1000 that make up the augmentation operation field 950 according to one embodiment of the disclosure. When the class (U) field 968 contains 0, it signifies EVEX.U0 (class A 968A); when it contains 1, it signifies EVEX.U1 (class B 968B). When U=0 and the MOD field 1042 contains 11 (signifying a no memory access operation), the alpha field 952 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 952A. When the rs field 952A contains a 1 (round 952A.1), the beta field 954 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 954A. The round control field 954A includes a one bit SAE field 956 and a two bit round operation field 958. When the rs field 952A contains a 0 (data transform 952A.2), the beta field 954 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data transform field 954B. When U=0 and the MOD field 1042 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 952 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 952B and the beta field 954 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a three bit data manipulation field 954C.[0122] When U=1, the alpha field 952 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 952C. When U=1 and the MOD field 1042 contains 11 (signifying a no memory access operation), part of the beta field 954 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 957A; when it contains a 1 (round 957A.1) the rest of the beta field 954 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 959A, while when the RL field 957A contains a 0 (VSIZE 957.A2) the rest of the beta field 954 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 959B (EVEX byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 1042 contains 00, 01, or 10 (signifying a memory access operation), the beta field 954 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 959B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 957B (EVEX byte 3, bit [4] - B). Exemplary Register Architecture[0123] Figure 11 is a block diagram of a register architecture 1100 according to one embodiment of the disclosure. In the embodiment illustrated, there are 32 vector registers 1110 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 1000 operates on this overlaid register file as illustrated in the below tables.[0124] In other words, the vector length field 959B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 959B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 1000 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.[0125] Write mask registers 1115 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size.
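The per-element write masking these registers support can be sketched as follows. This is an assumption-level Python model, not the patent's implementation: each result element is written only where its mask bit is set, and masked-off elements either keep the old destination value (merging) or are zeroed, in the spirit of the write mask control (Z) field described above.

```python
# Illustrative sketch of per-element write masking: write a result
# element only where the mask bit is 1; otherwise merge (keep the old
# destination value) or zero, depending on the zeroing control.

def masked_vector_op(dst, result, mask_bits, zeroing=False):
    """Apply a write mask to a computed vector result."""
    out = []
    for i, (old, new) in enumerate(zip(dst, result)):
        if (mask_bits >> i) & 1:
            out.append(new)                    # mask bit set: write result
        else:
            out.append(0 if zeroing else old)  # masked off
    return out

dst = [10, 20, 30, 40]
res = [1, 2, 3, 4]
assert masked_vector_op(dst, res, 0b0101) == [1, 20, 3, 40]            # merging
assert masked_vector_op(dst, res, 0b0101, zeroing=True) == [1, 0, 3, 0]
# An all-ones mask (what a hardwired k0 encoding would select)
# effectively disables masking:
assert masked_vector_op(dst, res, 0b1111) == res
```

The all-ones case illustrates why hardwiring the k0 encoding to all ones, as described below, disables write masking for that instruction.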
In an alternate embodiment, the write mask registers 1115 are 16 bits in size. As previously described, in one embodiment of the disclosure, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.[0126] General-purpose registers 1125 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.[0127] Scalar floating point stack register file (x87 stack) 1145, on which is aliased the MMX packed integer flat register file 1150 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.[0128] Alternative embodiments of the disclosure may use wider or narrower registers. Additionally, alternative embodiments of the disclosure may use more, fewer, or different register files and registers. Exemplary Core Architectures, Processors, and Computer Architectures[0129] Processor cores may be implemented in different ways, for different purposes, and in different processors.
For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures. Exemplary Core Architectures In-order and out-of-order core block diagram[0130] Figure 12A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the disclosure.
Figure 12B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the disclosure. The solid lined boxes in Figures 12A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.[0131] In Figure 12A, a processor pipeline 1200 includes a fetch stage 1202, a length decode stage 1204, a decode stage 1206, an allocation stage 1208, a renaming stage 1210, a scheduling (also known as a dispatch or issue) stage 1212, a register read/memory read stage 1214, an execute stage 1216, a write back/memory write stage 1218, an exception handling stage 1222, and a commit stage 1224.[0132] Figure 12B shows processor core 1290 including a front end unit 1230 coupled to an execution engine unit 1250, and both are coupled to a memory unit 1270. The core 1290 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1290 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.[0133] The front end unit 1230 includes a branch prediction unit 1232 coupled to an instruction cache unit 1234, which is coupled to an instruction translation lookaside buffer (TLB) 1236, which is coupled to an instruction fetch unit 1238, which is coupled to a decode unit 1240.
The decode unit 1240 (or decoder or decoder unit) may decode instructions (e.g., macro-instructions), and generate as an output one or more micro-operations, micro-code entry points, micro-instructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1240 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1290 includes a microcode ROM or other medium that stores microcode for certain macro-instructions (e.g., in decode unit 1240 or otherwise within the front end unit 1230). The decode unit 1240 is coupled to a rename/allocator unit 1252 in the execution engine unit 1250.[0134] The execution engine unit 1250 includes the rename/allocator unit 1252 coupled to a retirement unit 1254 and a set of one or more scheduler unit(s) 1256. The scheduler unit(s) 1256 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1256 is coupled to the physical register file(s) unit(s) 1258. Each of the physical register file(s) units 1258 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1258 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers.
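The renaming step that a rename/allocator unit performs over such physical register files can be sketched as follows. This is a generic, hypothetical Python model (class and method names are not from the disclosure): each architectural destination is mapped to a fresh physical register so that independent writes to the same architectural register can proceed out of order.

```python
# Illustrative sketch of register renaming: map each architectural
# destination to a fresh physical register drawn from a free pool, so
# later writes do not overwrite values still needed by earlier readers.

class RenameTable:
    def __init__(self, num_physical: int):
        self.free = list(range(num_physical))  # free physical registers
        self.map = {}                          # architectural -> physical

    def rename(self, arch_dst: str) -> int:
        phys = self.free.pop(0)                # allocate a new physical reg
        self.map[arch_dst] = phys
        return phys

    def lookup(self, arch_src: str) -> int:
        return self.map[arch_src]              # current mapping for a source

rt = RenameTable(num_physical=8)
p0 = rt.rename("rax")          # first write to rax
p1 = rt.rename("rax")          # second write gets a different physical reg
assert p0 != p1
assert rt.lookup("rax") == p1  # later readers see the newest mapping
```

A real implementation also reclaims physical registers at retirement, which is one of the reorder-buffer/retirement-file variants discussed next.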
The physical register file(s) unit(s) 1258 is overlapped by the retirement unit 1254 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1254 and the physical register file(s) unit(s) 1258 are coupled to the execution cluster(s) 1260. The execution cluster(s) 1260 includes a set of one or more execution units 1262 and a set of one or more memory access units 1264. The execution units 1262 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1256, physical register file(s) unit(s) 1258, and execution cluster(s) 1260 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1264).
It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[0135] The set of memory access units 1264 is coupled to the memory unit 1270, which includes a data TLB unit 1272 coupled to a data cache unit 1274 coupled to a level 2 (L2) cache unit 1276. In one exemplary embodiment, the memory access units 1264 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1272 in the memory unit 1270. The instruction cache unit 1234 is further coupled to a level 2 (L2) cache unit 1276 in the memory unit 1270. The L2 cache unit 1276 is coupled to one or more other levels of cache and eventually to a main memory.[0136] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1200 as follows: 1) the instruction fetch 1238 performs the fetch and length decoding stages 1202 and 1204; 2) the decode unit 1240 performs the decode stage 1206; 3) the rename/allocator unit 1252 performs the allocation stage 1208 and renaming stage 1210; 4) the scheduler unit(s) 1256 performs the schedule stage 1212; 5) the physical register file(s) unit(s) 1258 and the memory unit 1270 perform the register read/memory read stage 1214; the execution cluster 1260 performs the execute stage 1216; 6) the memory unit 1270 and the physical register file(s) unit(s) 1258 perform the write back/memory write stage 1218; 7) various units may be involved in the exception handling stage 1222; and 8) the retirement unit 1254 and the physical register file(s) unit(s) 1258 perform the commit stage 1224.[0137] The core 1290 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON)
of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1290 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.[0138] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).[0139] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1234/1274 and a shared L2 cache unit 1276, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. Specific Exemplary In-Order Core Architecture[0140] Figures 13A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.[0141] Figure 13A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1302 and with its local subset of the Level 2 (L2) cache 1304, according to embodiments of the disclosure. In one embodiment, an instruction decode unit 1300 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1306 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1308 and a vector unit 1310 use separate register sets (respectively, scalar registers 1312 and vector registers 1314) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1306, alternative embodiments of the disclosure may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).[0142] The local subset of the L2 cache 1304 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1304. Data read by a processor core is stored in its L2 cache subset 1304 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1304 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip.
Each ring data-path is 1012-bits wide per direction.[0143] Figure 13B is an expanded view of part of the processor core in Figure 13A according to embodiments of the disclosure. Figure 13B includes an L1 data cache 1306A, part of the L1 cache 1304, as well as more detail regarding the vector unit 1310 and the vector registers 1314. Specifically, the vector unit 1310 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1328), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1320, numeric conversion with numeric convert units 1322A-B, and replication with replication unit 1324 on the memory input. Write mask registers 1326 allow predicating resulting vector writes.[0144] Figure 14 is a block diagram of a processor 1400 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the disclosure.
The solid lined boxes in Figure 14 illustrate a processor 1400 with a single core 1402A, a system agent 1410, and a set of one or more bus controller units 1416, while the optional addition of the dashed lined boxes illustrates an alternative processor 1400 with multiple cores 1402A-N, a set of one or more integrated memory controller unit(s) 1414 in the system agent unit 1410, and special purpose logic 1408.[0145] Thus, different implementations of the processor 1400 may include: 1) a CPU with the special purpose logic 1408 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1402A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1402A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 1402A-N being a large number of general purpose in-order cores. Thus, the processor 1400 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1400 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.[0146] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1406, and external memory (not shown) coupled to the set of integrated memory controller units 1414.
The set of shared cache units 1406 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1412 interconnects the integrated graphics logic 1408, the set of shared cache units 1406, and the system agent unit 1410/integrated memory controller unit(s) 1414, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1406 and cores 1402A-N.[0147] In some embodiments, one or more of the cores 1402A-N are capable of multithreading. The system agent 1410 includes those components coordinating and operating cores 1402A-N. The system agent unit 1410 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1402A-N and the integrated graphics logic 1408. The display unit is for driving one or more externally connected displays.[0148] The cores 1402A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1402A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. Exemplary Computer Architectures[0149] Figures 15-18 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable.
In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.[0150] Referring now to Figure 15, shown is a block diagram of a system 1500 in accordance with one embodiment of the present disclosure. The system 1500 may include one or more processors 1510, 1515, which are coupled to a controller hub 1520. In one embodiment the controller hub 1520 includes a graphics memory controller hub (GMCH) 1590 and an Input/Output Hub (IOH) 1550 (which may be on separate chips); the GMCH 1590 includes memory and graphics controllers to which are coupled memory 1540 and a coprocessor 1545; the IOH 1550 couples input/output (I/O) devices 1560 to the GMCH 1590. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1540 and the coprocessor 1545 are coupled directly to the processor 1510, and the controller hub 1520 is in a single chip with the IOH 1550. Memory 1540 may include a fusion manager module 1540A, for example, to store code that when executed causes a processor to perform any method of this disclosure.[0151] The optional nature of additional processors 1515 is denoted in Figure 15 with broken lines. Each processor 1510, 1515 may include one or more of the processing cores described herein and may be some version of the processor 1400.[0152] The memory 1540 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two.
For at least one embodiment, the controller hub 1520 communicates with the processor(s) 1510, 1515 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1595.[0153] In one embodiment, the coprocessor 1545 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 1520 may include an integrated graphics accelerator.[0154] There can be a variety of differences between the physical resources 1510, 1515 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.[0155] In one embodiment, the processor 1510 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1510 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1545. Accordingly, the processor 1510 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1545. Coprocessor(s) 1545 accept and execute the received coprocessor instructions.[0156] Referring now to Figure 16, shown is a block diagram of a first more specific exemplary system 1600 in accordance with an embodiment of the present disclosure. As shown in Figure 16, multiprocessor system 1600 is a point-to-point interconnect system, and includes a first processor 1670 and a second processor 1680 coupled via a point-to-point interconnect 1650. Each of processors 1670 and 1680 may be some version of the processor 1400. 
In one embodiment of the disclosure, processors 1670 and 1680 are respectively processors 1510 and 1515, while coprocessor 1638 is coprocessor 1545. In another embodiment, processors 1670 and 1680 are respectively processor 1510 and coprocessor 1545.[0157] Processors 1670 and 1680 are shown including integrated memory controller (IMC) units 1672 and 1682, respectively. Processor 1670 also includes as part of its bus controller units point-to-point (P-P) interfaces 1676 and 1678; similarly, second processor 1680 includes P-P interfaces 1686 and 1688. Processors 1670, 1680 may exchange information via a point-to-point (P-P) interface 1650 using P-P interface circuits 1678, 1688. As shown in Figure 16, IMCs 1672 and 1682 couple the processors to respective memories, namely a memory 1632 and a memory 1634, which may be portions of main memory locally attached to the respective processors.[0158] Processors 1670, 1680 may each exchange information with a chipset 1690 via individual P-P interfaces 1652, 1654 using point-to-point interface circuits 1676, 1694, 1686, 1698. Chipset 1690 may optionally exchange information with the coprocessor 1638 via a high-performance interface 1639. In one embodiment, the coprocessor 1638 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.[0159] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.[0160] Chipset 1690 may be coupled to a first bus 1616 via an interface 1696.
In one embodiment, first bus 1616 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.[0161] As shown in Figure 16, various I/O devices 1614 may be coupled to first bus 1616, along with a bus bridge 1618 which couples first bus 1616 to a second bus 1620. In one embodiment, one or more additional processor(s) 1615, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1616. In one embodiment, second bus 1620 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1620 including, for example, a keyboard and/or mouse 1622, communication devices 1627 and a storage unit 1628 such as a disk drive or other mass storage device which may include instructions/code and data 1630, in one embodiment. Further, an audio I/O 1624 may be coupled to the second bus 1620. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 16, a system may implement a multi-drop bus or other such architecture.[0162] Referring now to Figure 17, shown is a block diagram of a second more specific exemplary system 1700 in accordance with an embodiment of the present disclosure. Like elements in Figures 16 and 17 bear like reference numerals, and certain aspects of Figure 16 have been omitted from Figure 17 in order to avoid obscuring other aspects of Figure 17.[0163] Figure 17 illustrates that the processors 1670, 1680 may include integrated memory and I/O control logic ("CL") 1672 and 1682, respectively. Thus, the CL 1672, 1682 include integrated memory controller units and include I/O control logic.
Figure 17 illustrates that not only are the memories 1632, 1634 coupled to the CL 1672, 1682, but also that I/O devices 1714 are also coupled to the control logic 1672, 1682. Legacy I/O devices 1715 are coupled to the chipset 1690.[0164] Referring now to Figure 18, shown is a block diagram of a SoC 1800 in accordance with an embodiment of the present disclosure. Similar elements in Figure 14 bear like reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs. In Figure 18, an interconnect unit(s) 1802 is coupled to: an application processor 1810 which includes a set of one or more cores 202A-N and shared cache unit(s) 1406; a system agent unit 1410; a bus controller unit(s) 1416; an integrated memory controller unit(s) 1414; a set of one or more coprocessors 1820 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1830; a direct memory access (DMA) unit 1832; and a display unit 1840 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1820 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.[0165] Embodiments (e.g., of the mechanisms) disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.[0166] Program code, such as code 1630 illustrated in Figure 16, may be applied to input instructions to perform the functions described herein and generate output information.
The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.[0167] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.[0168] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores", may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.[0169] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[0170] Accordingly, embodiments of the disclosure also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

[0171] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core.
The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.[0172] Figure 19 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the disclosure. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 19 shows that a program in a high-level language 1902 may be compiled using an x86 compiler 1904 to generate x86 binary code 1906 that may be natively executed by a processor with at least one x86 instruction set core 1916. The processor with at least one x86 instruction set core 1916 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1904 represents a compiler that is operable to generate x86 binary code 1906 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1916.
Similarly, Figure 19 shows that the program in the high-level language 1902 may be compiled using an alternative instruction set compiler 1908 to generate alternative instruction set binary code 1910 that may be natively executed by a processor without at least one x86 instruction set core 1914 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1912 is used to convert the x86 binary code 1906 into code that may be natively executed by the processor without an x86 instruction set core 1914. This converted code is not likely to be the same as the alternative instruction set binary code 1910 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1912 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1906.
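The table-driven translation that an instruction converter such as 1912 might perform can be sketched in greatly simplified form. The mini source and target instruction sets below are invented for illustration only; they do not correspond to x86, MIPS, or ARM encodings, and a real binary translator must additionally handle register allocation, branches, and self-modifying code.

```python
# Toy static binary translator: maps each instruction of a made-up
# source ISA to one or more instructions of a made-up target ISA.
# All opcode names here are hypothetical, for illustration only.

TRANSLATION_TABLE = {
    # One-to-one mappings.
    "MOV": lambda ops: [("COPY", ops)],
    "ADD": lambda ops: [("ADD3", ops)],
    # One-to-many: the target ISA has no fused multiply-add, so
    # MULADD is expanded into two target instructions via a temporary.
    "MULADD": lambda ops: [("MUL", ops[:2] + ["tmp0"]),
                           ("ADD3", ["tmp0", ops[2], ops[2]])],
}

def translate(source_program):
    """Convert a list of (opcode, operands) source instructions into
    the corresponding list of target instructions."""
    target_program = []
    for opcode, operands in source_program:
        if opcode not in TRANSLATION_TABLE:
            raise ValueError(f"cannot translate opcode {opcode!r}")
        target_program.extend(TRANSLATION_TABLE[opcode](list(operands)))
    return target_program

source = [("MOV", ["r1", "r2"]), ("MULADD", ["r1", "r2", "r3"])]
print(translate(source))
```

As the paragraph above notes, converted code produced this way is generally not identical to what a native compiler for the target instruction set would emit; it merely accomplishes the same general operation.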
Various embodiments are generally directed to use of a keyboard as a biometric authentication device. In one embodiment, for example, an apparatus comprises a processor circuit executing a sequence of instructions causing the processor circuit to receive a signal indicative of a keypress of at least one key of a keyboard communicatively coupled to the apparatus, and indicative of at least one physical characteristic associated with the keypress; compare the at least one physical characteristic to at least one stored physical characteristic associated with at least one authorized user of the apparatus; and determine if the keypress is associated with at least one authorized user of the apparatus based on the comparison. Other embodiments are described and claimed herein.
CLAIMS

1. A computer-implemented method comprising: receiving a signal indicative of a keypress of at least one key of a keyboard communicatively coupled to a computing device, and indicative of at least one physical characteristic associated with the keypress; comparing the at least one physical characteristic to at least one stored physical characteristic associated with at least one authorized user of the computing device; and determining if the keypress is associated with at least one authorized user of the computing device based on the comparison.

2. The computer-implemented method of claim 1, the at least one physical characteristic selected from a group comprising a velocity at which the key is pressed, a pressure exerted to press the key, an amount of time during which the key is held in a fully pressed state, a pressure exerted to hold the key in the fully pressed state, a velocity at which the key is released from the fully pressed state, and an amount of time elapsing from pressing the key to pressing another key.

3. The computer-implemented method of claim 1, comprising presenting a visual prompt on a display of the computing device requesting a user of the computing device to enter text to enable authentication prior to the user being authenticated as an authorized user, the visual prompt comprising a preselected text to enter, the preselected text selected to cause a user to use a defined quantity of digits to operate a defined quantity of keys of the keyboard.

4. The computer-implemented method of claim 1, comprising placing the computing device in an unlocked mode allowing access to a first data in response to determining that the keypress is associated with at least one authorized user and in response to the computing device being in a locked mode denying access to the first data.

5.
The computer-implemented method of claim 4, comprising allowing access to a limited subset of available functionality of the computing device during the locked mode, the limited subset of available functionality comprising an opportunity to enter text.

6. The computer-implemented method of claim 5, comprising presenting a visual prompt on a display of the computing device requesting a user of the computing device to enter text in response to the user attempting to access the first data while the computing device is in the locked mode, the visual prompt comprising a preselected text to enter.

7. The computer-implemented method of claim 4, comprising placing the computing device in the locked mode in response to: determining that the at least one physical characteristic has changed since a last authentication of an authorized user to an extent consistent with a different user operating the keyboard in place of the authorized user; and determining that the different user is not an authorized user.

8. The computer-implemented method of claim 4, comprising: placing the computing device in the locked mode in response to a predetermined period of time having elapsed since the computing device was last interacted with by an authorized user; and refraining from placing the computing device in the locked mode in response to detecting the continuing presence of an authorized user in proximity to the computing device.

9. The computer-implemented method of claim 4, comprising refining the at least one stored physical characteristic in response to determining that the at least one physical characteristic has changed to an extent consistent with a physical change of an authorized user.

10.
An apparatus comprising: a first processor circuit; and a first storage communicatively coupled to the first processor circuit and storing a first sequence of instructions that when executed by the first processor circuit, causes the first processor circuit to: receive a signal indicative of a keypress of at least one key of a keyboard communicatively coupled to the apparatus, and indicative of at least one physical characteristic associated with the keypress; compare the at least one physical characteristic to at least one stored physical characteristic associated with at least one authorized user of the apparatus; and determine if the keypress is associated with at least one authorized user of the apparatus based on the comparison.

11. The apparatus of claim 10, the at least one physical characteristic selected from a group comprising a velocity at which the key is pressed, a pressure exerted to press the key, an amount of time during which the key is held in a fully pressed state, a pressure exerted to hold the key in the fully pressed state, a velocity at which the key is released from the fully pressed state, and an amount of time elapsing from pressing the key to pressing another key.

12. The apparatus of claim 10, the first processor circuit caused to present a visual prompt on a display requesting a user of the apparatus to enter text to enable authentication prior to the user being authenticated as an authorized user, the visual prompt comprising a preselected text to enter, the preselected text selected to cause a user to use a defined quantity of digits to operate a defined quantity of keys of the keyboard.

13. The apparatus of claim 10, the first processor circuit caused to place the apparatus in an unlocked mode allowing access to a first data in response to determining that the keypress is associated with at least one authorized user and in response to the apparatus being in a locked mode denying access to the first data.

14.
The apparatus of claim 13, the first processor circuit caused to allow access to a limited subset of available functionality of the apparatus during the locked mode, the limited subset of available functionality comprising an opportunity to enter text.

15. The apparatus of claim 14, the first processor circuit caused to present a visual prompt on a display requesting a user of the apparatus to enter text in response to the user attempting to access the first data while the apparatus is in the locked mode, the visual prompt comprising a preselected text to enter.

16. The apparatus of claim 15, comprising: the display; a second processor circuit; and a second storage communicatively coupled to the second processor circuit and storing a second sequence of instructions, the first processor circuit causing a visual prompt to be presented comprising the first processor circuit caused by executing the first sequence of instructions to signal the second processor circuit, and the second processor circuit caused by executing the second sequence of instructions to present the visual prompt on the display in response to the signal.

17. The apparatus of claim 13, the first processor circuit caused to place the apparatus in the locked mode in response to: the first processor circuit determining that the at least one physical characteristic has changed since a last authentication of an authorized user to an extent consistent with a different user operating the keyboard in place of the authorized user; and the first processor circuit determining that the different user is not an authorized user.

18.
The apparatus of claim 17, comprising: a second processor circuit; and a second storage communicatively coupled to the second processor circuit and storing a second sequence of instructions, the first processor circuit causing the apparatus to be placed in one of the locked mode and the unlocked mode comprises the first processor circuit caused by executing the first sequence of instructions to signal the second processor circuit, and the second processor circuit caused by executing the second sequence of instructions to place the apparatus in one of the locked mode and the unlocked mode in response to the signal.

19. The apparatus of claim 13, the first processor circuit caused to refine the at least one stored physical characteristic in response to determining that the at least one physical characteristic has changed to an extent consistent with a physical change of an authorized user.

20. At least one machine-readable storage medium comprising a plurality of instructions that when executed by a computing device, causes the computing device to perform the method of any of claims 1-9.
KEYBOARD AS BIOMETRIC AUTHENTICATION DEVICE

BACKGROUND

Authentication to separate authorized users of a computing device (e.g., a computer system, a data entry terminal, a smartphone, etc.) from unauthorized users has become increasingly important as individuals and organizations continue to store ever greater amounts of sensitive data on such devices, and as such devices become ever increasingly connected to still others of such devices through ever expanding arrays of wired and wireless networks. One long accepted approach to authentication is to require a would-be user of a computing device to enter a password that is intended to be known only to one or more authorized users, and perhaps also to one or more authorized system administrators having some degree of control over who is supposed to be an authorized user of that computing device. The use of passwords requires far fewer computational resources than various biometric-based approaches to authentication that have been previously proposed (e.g., reading fingerprints, voice analysis, etc.). Unfortunately, the use of passwords suffers numerous drawbacks. A password may become known to someone who is not an authorized user of a computing device, thereby enabling that unauthorized person to operate that computing device as if he/she were an authorized user. Passwords having characteristics in their combinations of characters or keystrokes that are deemed to make them "strong" passwords (e.g., passwords that are not easily guessed) tend to be difficult for authorized users to remember. Indeed, it is often the case that given an opportunity to create a password themselves, an authorized user will tend to create a "weak" password that may be a street name, a pet's name, an all-too-simple sequence of numbers (e.g., "1234"), etc. that is all too easily guessed by others. Further, once discovered, a password is quickly and easily spread to others (e.g., via the Internet, on slips of paper, etc.).
It is with respect to these and other considerations that the techniques described herein to utilize a keyboard as a biometric authentication device are needed.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates side-by-side operating environments of an embodiment of a computing device.
FIG. 2 illustrates a first portion of the embodiment of FIG. 1.
FIG. 3 illustrates a second portion of the embodiment of FIG. 1.
FIG. 4 illustrates an embodiment of a first logic flow.
FIG. 5 illustrates an embodiment of a second logic flow.
FIG. 6 illustrates an embodiment of a third logic flow.
FIG. 7 illustrates an embodiment of a first processing architecture.
FIG. 8 illustrates an embodiment of a second processing architecture.

DETAILED DESCRIPTION

Various embodiments are generally directed to authentication techniques. Some embodiments are particularly directed to use of an input device, such as a keyboard, as a biometric authentication device. More specifically, a controller of a computing device receives data from a keyboard indicative of various physical characteristics of the manner in which a person operates the keys of the keyboard and employs that data in determining whether that person is an authorized user of the computing device. In doing so, the controller compares the data received from the keyboard with pattern data comprising previously stored physical characteristics of the manner in which one or more authorized users of the computing device operate the keyboard. The controller then signals one or more other portions of the computing device with this determination, thereby enabling the computing device to either allow or deny access to an application and/or data. An advantage of authenticating users of the computing device based on such physical characteristics is that authorized users need not memorize a password or engage in various efforts to maintain the secrecy and security of a password, including regularly changing a password.
Instead, authorized users are given an opportunity to type text, which could be text of their choosing, or could be a preselected text that could even be allowed to be freely distributed and widely known, since it is the physical characteristics of the manner in which the text is typed, and not the content of the text itself, that is used in authentication. In one embodiment, for example, an apparatus comprises a first processor circuit, and a first storage communicatively coupled to the first processor circuit and storing a first sequence of instructions that when executed by the first processor circuit, causes the first processor circuit to: receive a signal indicative of a keypress of at least one key of a keyboard communicatively coupled to the apparatus, and indicative of at least one physical characteristic associated with the keypress; compare the at least one physical characteristic to at least one stored physical characteristic associated with at least one authorized user of the apparatus; and determine if the keypress is associated with at least one authorized user of the apparatus based on the comparison. Other embodiments are described and claimed herein. With general reference to notations and nomenclature used herein, portions of the detailed description which follows may be presented in terms of program procedures executed on a computer or network of computers. These procedural descriptions and representations are used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A procedure is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. These operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. 
It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities. Further, these manipulations are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. However, no such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein that form part of one or more embodiments. Rather, these operations are machine operations. Useful machines for performing operations of various embodiments include general purpose digital computers as selectively activated or configured by a computer program stored within that is written in accordance with the teachings herein, and/or include apparatus specially constructed for the required purpose. Various embodiments also relate to apparatus or systems for performing these operations. These apparatus may be specially constructed for the required purpose or may comprise a general purpose computer. The required structure for a variety of these machines will appear from the description given. Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives within the scope of the claims. FIG. 
1 illustrates a block diagram of a computing device 1000 in which a keyboard 120 of the computing device 1000 is employed in biometric authentication of a user. The computing device 1000 may be any of a wide variety of computing devices, including without limitation, a desktop computer system, a data entry terminal, a laptop computer, a netbook computer, a tablet computer, a handheld personal data assistant, a smartphone, etc. The keyboard 120 may be any of a variety of input devices comprising a plurality of keys by which text (e.g., numbers, textual characters, mathematical symbols, phonetic characters, etc.), musical notes, codes (e.g., telephone numbers, dates, etc.), and/or other forms of data may be input by a person through pressing multiple ones of the plurality of keys, including without limitation, "QWERTY" keyboards, Dvorak keyboards, telephone keypads, digital piano keyboards, calculator keypads, cash register keypads, keypads of smartphones or personal data assistants, touch screen keyboards (e.g., keyboards drawn on a display with a touch sensor overlay), virtual keyboards projected onto a surface, etc. In various embodiments, the keyboard 120 may be coupled (either permanently or separably) to other components of the computing device 1000 via a wired or wireless connection capable of conveying data indicating physical characteristics of the operation of its plurality of keys 130, or may be physically incorporated into a casing of the computing device 1000 into which others of its components are also incorporated. Thus, especially where the computing device 1000 is meant to be useable in a highly portable mode, the keyboard 120 may not always be coupled to the rest of what the computing device 1000 comprises. Alternatively, the computing device 1000 may permanently comprise the keyboard 120.
In various embodiments, the computing device 1000 comprises a controller 200 that itself comprises a processor circuit 250 and a storage 240 accessible to the processor circuit 250 in which is stored at least a control routine 245 and a pattern data 242. As will be explained in greater detail, the processor circuit 250 executes a sequence of instructions of the control routine 245 to receive data from the keyboard 120 indicative of various physical characteristics of the keystrokes of a person operating the keys 130 of the keyboard 120 and to employ that data in determining whether that person is an authorized user. In making this determination, the processor circuit 250 compares the data received from the keyboard 120 with the pattern data 242 comprising previously stored physical characteristics of the manner in which one or more authorized users of the computing device 1000 operate the keyboard 120. The computing device 1000 further comprises another processor circuit 550 and another storage 540 accessible to the processor circuit 550 in which is stored at least a security routine 542. The processor circuit 550 executes a sequence of instructions of at least the security routine 542 to receive an indication from the controller 200 indicating the results of the determination of whether that person operating the keyboard 120 is an authorized user, and to employ that determination in either allowing or denying that person access to an application and/or data (e.g., a local application 546 and/or local data 548 that may also be stored within the storage 540). 
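The comparison the processor circuit 250 performs against the pattern data 242 can be sketched, in greatly simplified form, by reducing each typing sample to numeric features and testing each feature against a stored per-user tolerance. The feature names, values, and the all-features-within-tolerance matching rule below are illustrative assumptions only; the disclosure does not prescribe a specific matching algorithm.

```python
# Hypothetical sketch: compare measured keypress characteristics against
# a stored profile for an authorized user. Feature names, tolerances, and
# the matching rule are illustrative assumptions, not a prescribed method.

STORED_PROFILE = {            # stands in for pattern data for one authorized user
    "press_velocity": 0.82,   # arbitrary normalized units
    "hold_time_ms": 95.0,     # time the key is held fully pressed
    "inter_key_ms": 180.0,    # time elapsing between successive keypresses
}
TOLERANCE = {                 # allowed absolute deviation per feature
    "press_velocity": 0.15,
    "hold_time_ms": 25.0,
    "inter_key_ms": 60.0,
}

def is_authorized(measured):
    """Return True if every measured feature is within tolerance of the profile."""
    return all(
        abs(measured[name] - STORED_PROFILE[name]) <= TOLERANCE[name]
        for name in STORED_PROFILE
    )

print(is_authorized({"press_velocity": 0.78, "hold_time_ms": 101.0, "inter_key_ms": 170.0}))
print(is_authorized({"press_velocity": 0.40, "hold_time_ms": 180.0, "inter_key_ms": 400.0}))
```

A practical implementation would aggregate many keypresses and combine several such features statistically, which is part of what makes the resulting profile difficult to mimic.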
An advantage of authenticating users of the computing device 1000 through a comparison of the physical characteristics of the manner in which each uses the keyboard 120 to previously stored data indicative of such physical characteristics for one or more authorized users is that authorized users need not memorize a password or engage in various efforts to maintain the secrecy of a password, including regularly changing a password. As will be explained in greater detail, persons operating the keyboard 120 may be authenticated as being either authorized users, or not, while typing whatever they wish. Alternatively, as will also be explained in greater detail, persons may be presented with specific preselected text that they must type into the keyboard 120 to be authenticated. Given that authentication is based on physical characteristics of the manner in which they type that preselected text, and not on their memory of the preselected text itself, the preselected text (unlike a password) may be allowed to be freely known even by those who are not authorized users of the computing device 1000, thereby affording another advantage inasmuch as efforts need not be made to keep the preselected text secret. Furthermore, given a range and combination of physical characteristics that may be associated with a specific authorized user, it becomes very difficult to mimic or spoof such physical characteristics, thereby leading to a more secure form of authentication and access control that cannot be transferred between persons. Other advantages exist as well. However, it should be noted that, although much of the discussion herein focuses on the use of the characteristics of the manner in which a person operates a keyboard as a basis of authentication, such authentication may be used in conjunction with other forms of authentication, including without limitation, passwords, fingerprints, voiceprint, handwriting recognition, asymmetric security keys, eye scan, etc. 
Such other forms of authentication may be offered as an alternative to the characteristics with which someone operates a keyboard (especially where data concerning a particular person's characteristics for using a keyboard has not yet been stored for use in authentication, as will be discussed), or may be employed as additional "factors" in an authentication scheme in which more than one form of authentication must be provided for someone to be authenticated as an authorized user (e.g., a combination of the characteristics of the manner in which a person operates the keyboard 120 and their fingerprint). In their separate execution of separately stored sequences of instructions, the processor circuits 250 and 550 operate within operating environments that are kept largely separate from each other, namely, a controller environment 1250 in which the processor circuit 250 operates and a system environment 1550 in which the processor circuit 550 operates. The controller environment 1250 is the operating environment for biometric authentication operations for an input device (e.g., the keyboard 120). The system environment 1550 is the operating environment of the computing device 1000 that is meant to be accessible to an authorized user of the computing device 1000 for the purposes of running applications (e.g., the local application 546) and working with data (e.g., the local data 548). Thus, it is expected that an operating system 545 providing aspects of a user interface, a file system, and access control to input/output devices and/or storage devices will also be stored within the storage 540 to enable an authorized user to make such use of the computing device for such purposes to run applications (e.g., a word processor, image generating software, etc.) to view or otherwise work with data (e.g., documents, digitized photographs or audio, etc.). 
However, different operating systems incorporate different ranges of capability for implementing or supporting various forms of authentication by which a would-be user is allowed to make use of an operating environment that includes a given operating system. It may be that the security routine 542 comprises a sequence of instructions that is separate from the operating system 545, but is specifically selected for use with the operating system 545 to enable the operating system 545 to make use of the authentication features implemented within the separate operating environment 1250, as will be described, including allowance or denial of access to specific applications and/or data. Alternatively, it may be that the operating system 545 incorporates the security routine 542 as a component of the operating system, such that the operating system 545 is able to make use of the authentication features implemented within the separate operating environment 1250 without augmentation by other sequences of instructions. Separating the controller environment 1250 from the system environment 1550 provides additional security benefits. User access to the system environment 1550 enables infiltration with applications and/or data containing malicious sequences of instructions (e.g., so-called "viruses", "worms", etc.), including sequences of instructions meant to subvert or defeat user authentication mechanisms. As will be familiar to those skilled in securing computing devices, it is all too common for users to be induced into loading such forms of malicious software onto a computing device through either physical storage media (e.g., compact flash cards, USB "thumb" drives, etc.) or access to other computing devices through a network (e.g., the Internet). The separation of the controller environment 1250 from the system environment 1550 prevents whatever infiltration of the system environment 1550 that may occur from also bringing about an infiltration of the controller environment 1250.
More specifically, this ensures that the pattern data 242 remains accessible only to the processor circuit 250 and only for purposes of user authentication, as has been described. Thus, procedures carried out by the processor circuit 250 within the controller environment 1250 serve to support authorized user operation of the computing device 1000 as enabled in the system environment 1550. It should be noted that although much of the discussion of the operating environment 1250 herein is focused on the role played by the controller 200 in authentication of a person operating a keyboard for determining whether or not they are allowed access to the operating environment 1550 (and to what degree), the controller 200 may serve one or more other roles in the operation of the computing device 1000. By way of example, the processor circuit 250, in executing the control routine 245, may monitor voltages and/or temperatures of various components of the computing device 1000, may oversee and/or implement data redundancy calculations employed in storing data, may monitor one or more components for indications of the operating environment 1550 becoming unstable (e.g., a "system hang"), may oversee and/or implement firewall features of a network interface, etc. Despite the improved security offered by such a separation of operating environments as just described, various alternate embodiments may employ only a single operating environment, namely, the system environment in which the processor circuit 550 performs the functions described herein as being separately performed by the processor circuit 250. Such alternate embodiments may be deemed desirable where the data and/or applications to which access may or may not be allowed as a result of a person being authenticated as an authorized user, or not, are not so sensitive that great harm will be deemed to have been done if someone who is not an authorized person is able to gain access to one or the other of them.
Such alternate embodiments may be deemed to be at least acceptable where the computing device 1000 is implemented as a sufficiently closed system that no mechanism is provided for an authorized user to load software, or where the mechanism by which software may be loaded is in some way highly restricted in its use. In various embodiments, each of the processor circuits 250 and 550 may comprise any of a wide variety of commercially available processors, including without limitation, an AMD® Athlon®, Duron® or Opteron® processor; an ARM® application, embedded or secure processor; an IBM® and/or Motorola® DragonBall® or PowerPC® processor; an IBM and/or Sony® Cell processor; or an Intel® Celeron®, Core (2) Duo®, Core (2) Quad®, Core i3®, Core i5®, Core i7®, Atom®, Itanium®, Pentium®, Xeon® or XScale® processor. Further, each of the processor circuits 250 and 550 may comprise a multi-core processor (whether the multiple cores coexist on the same or separate dies), and/or a multi-processor architecture of some other variety. In various embodiments, each of the storages 240 and 540 may comprise any of a wide variety of types of storage device, including without limitation, read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDR-DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (e.g., ferroelectric polymer memory), ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, or a plurality of storage devices organized into one or more arrays (e.g., multiple ferromagnetic disk drives organized into a Redundant Array of Independent Disks array, or RAID array). It should be noted that although each of the storages 240 and 540 are depicted as a single block in FIG.
1, either or both may comprise more than one distinct storage device that may be based on differing storage technologies. In various embodiments, the operating system 545 may be any of a variety of available operating systems appropriate for whatever the processor circuit 550 comprises, including without limitation, Windows™, OS X™, Linux®, or Android OS™. Further, in various embodiments, applications and/or data that a person operating the keyboard 120 may or may not be allowed to access (e.g., the local application 546 and/or the local data 548) may include any of a wide variety of types of sequences of instructions or forms of data, including without limitation, software computer files, including application files (e.g., document files, word processing files, spreadsheet files, presentation files, etc.), system files (e.g., operating system files, library files, utility files, etc.), multimedia content files (e.g., audio files, video files, audio/video files, picture files, image files, etc.), user interface elements, a web page, a uniform resource locator (URL) from a web browser, clipboard data, screenshots, device resource data (e.g., sensor data), and so forth. As will be discussed later in detail, the computing device 1000 may be further coupled to (or perhaps, further comprise) a display 180 on which prompts to enter text may be displayed. Alternatively or additionally, the computing device 1000 may be further coupled to a remote server 900, components of which may provide both digital processing and an executable sequence of instructions of a remote environment 1950 in which an application and/or data may be remotely stored. FIG. 2 illustrates a block diagram that is partially a subset of the block diagram of FIG. 1 and that also depicts further details of the keyboard 120 and its interaction with the controller environment 1250 and the system environment 1550.
As will become apparent, more data concerning a person's operation of the keyboard 120 is provided by the keyboard 120 to the controller 200 than is relayed by the controller 200 to other portions of the computing device 1000. As previously discussed, the keyboard 120 comprises a plurality of keys 130. In various embodiments, each of the keys 130 comprises one or more of a pressure detector 131, a keypress detector 132 and a velocity detector 133. Alternatively, a single pressure detector 131, a single keypress detector 132 and/or a single velocity detector 133 may be implemented for more than one key 130. The keys 130 of the keyboard 120 may be based on any of a variety of possible technologies, including without limitation, mechanical key switches, capacitive sensing key switches, strain gauges, resistive or capacitive touch, etc. The choice of technologies at least partly dictates which ones of the pressure detector 131, the keypress detector 132 and the velocity detector 133 may actually be present in any given implementation of the keyboard 120, and accordingly, dictates which of at least some possible physical characteristics of the manner in which a given person operates the keyboard 120 can be detected and utilized for authentication. Further, depending on the manner in which one of these detectors 131-133 is implemented in each of the keys 130, one or more of the others of these detectors 131-133 may be rendered redundant. In embodiments of the keyboard 120 in which each key 130 is implemented with a movable key cap that is able to move when pressed with a digit (e.g., a finger or thumb), the keys are said to have a range of "travel" as one of their characteristics. In other words, operation of such keys involves a mechanical movement. Where there is such mechanical movement, there is both a measurable velocity of the key cap when pressed and when released. 
Further, there is a measurable amount of pressure applied in pressing the key cap and a measurable amount of pressure in holding the key cap in its fully pressed position after the extent of available travel in pressing the key has been reached. It is commonplace to design keys of keyboards such that only reaching the state of being fully pressed will be detected as successfully making a "keypress" of a given key such that the key is considered to have been pressed; in other words, depressing a key cap only part of the extent of its available range of travel is typically not enough to cause a keypress to be detected. Shape, size and physical condition of the digits of a person's hands, as well as the condition of other physical and mental attributes of a person, cause some degree of uniqueness in the physical characteristics of the manner in which each person among a plurality of people would operate a given keyboard. Different people have different ones of their digits (e.g., fingers and thumbs) that they are able to move more quickly and nimbly than others, and different people are able to press harder to differing degrees with different ones of their digits. Also, such traits of a digit change as a person moves that digit to reach a key that is closer to the base of that digit versus a key that is further away. This is but a subset of the attributes of a person making each person's set of digits unique that could be employed (through measurement of one or more of the resulting unique physical characteristics of manner of operation of the keyboard 120) to distinguish one person from another. In embodiments of the keyboard 120 in which each key 130 is implemented as a touch surface such that there is little physical movement of any component of the keys 130 as they are operated, there is said to be no range of travel, and it is commonly said that only keypresses are detected. However, depending on the technology used, this may not be entirely accurate.
Where a touch surface keyboard is implemented with an array of beams of light that are interrupted as each finger touches a location of a touch surface that corresponds to a key, it may be accurately said that only keypresses are detected. However, where resistive touch sensor technology is employed, a resistance level is changed at a given location corresponding to a key when a person presses the tip of a digit against it, and thus, in truth, such resistive touch sensor technology actually measures pressure. Where such technology is employed, it is typical to specify a threshold of resistance (which serves as a proxy for a threshold of pressure) beyond which is deemed to be an indication of a keypress. A threshold of pressure is also employed in determining whether a keypress has occurred where a touch surface keyboard is implemented with a plurality of strain gauges. Further, where capacitive touch sensor technology is employed, a capacitance level is changed as a tip of a digit approaches a given location of a surface that corresponds to a key and continues to change as that tip makes contact with that surface at that location and begins to flatten (since the tip of a typical human digit is typically somewhat resilient). Where such technology is employed, it is typical to specify a threshold of capacitance change which is deemed to be an indication of a keypress. Yet further, in a variant of touch-sensitive keyboard that is actually a "virtual" keyboard in which the keys 130 are projected onto a surface where a camera or other scanning-type optical sensor is employed to monitor the movements of the tips of a person's digits towards and away from the projected keys 130, velocity of tips of digits towards and away from individual ones of the projected keys 130 may be optically measured, in addition to occurrences of keypresses and key releases. 
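The threshold-based keypress detection described above for resistive and capacitive touch sensors can be sketched as follows. The function name, sample format and threshold value are assumptions introduced for illustration only.

```python
# Illustrative sketch: deriving keypress and key release events from raw
# touch-sensor readings (resistance or capacitance change serving as a
# proxy for pressure) by applying a specified threshold.

def detect_keypresses(samples, threshold):
    """Given (time_ms, reading) samples for one key location, emit
    (event, time_ms) pairs when the reading crosses the threshold."""
    events = []
    pressed = False
    for t, reading in samples:
        if not pressed and reading >= threshold:
            events.append(("press", t))
            pressed = True
        elif pressed and reading < threshold:
            events.append(("release", t))
            pressed = False
    return events

# Simulated normalized readings for one key as a digit presses and lifts.
samples = [(0, 0.1), (10, 0.6), (20, 0.9), (30, 0.7), (40, 0.2)]
print(detect_keypresses(samples, threshold=0.5))
# [('press', 10), ('release', 40)]
```

Note that partial contact at time 0 never reaches the threshold and so, as the text describes, is not deemed a keypress.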
In keyboards designed primarily for text entry, it is commonplace to incorporate only a keypress detector in each of the keys, since typically, the function of such keyboards is only to indicate which of the keys corresponding to different text characters and/or functions has been pressed. This need to detect little more than keypresses makes such keyboards highly amenable to being implemented with technologies that either do or do not provide a range of travel for each key. However, in keyboards designed primarily for the playing of music, it is commonplace to incorporate the ability to detect the velocity with which each key is pressed, the velocity with which each key is subsequently released, and the amount of pressure applied to each key to hold it in its fully pressed state, in addition to the ability to detect each keypress. As those familiar with electronic music keyboards will readily recognize, such information as velocity and pressure is typically employed in modifying various characteristics of the notes that are played with each keypress. As a result, keyboards designed primarily for the playing of music tend to require the use of keyboard technologies that do provide some range of travel so that there will be velocities that can be detected. Therefore, embodiments of the keyboard 120 implemented using a technology that provides a range of travel arguably afford the opportunity to detect more in the way of physical characteristics of the manner in which a given person operates the keyboard 120 than embodiments of the keyboard 120 using a technology that does not provide a range of travel. However, there are other physical characteristics of the manner in which a given person operates a keyboard that are largely independent of such issues of technology, including without limitation, the amount of time a key is pressed and the amount of time that elapses between the pressing of keys.
Variations in the shape, size, and physical condition of a person's hands, as well as other physical and mental attributes of a person, often cause such timings to be unique between people, just as they cause physical characteristics such as velocity and pressure to be unique. Further, aspects of languages used by each person and habits formed over time by each person in operating a keyboard can also cause such timings to be unique. Those skilled in the area of psychology will be familiar with what is called "chunking" in which recurring use of a given combination or sequence of things done by a person repeatedly over time causes that combination to cease to be treated in their mind as a combination of things and to become treated in their mind as a single thing. The effect of chunking on a physical characteristic of timing in a person's use of a keyboard employed in text entry is exemplified by the typing of the word "the" by persons who frequently type English text. The word "the" occurs more frequently in English than most other words. As a result, it is not uncommon for those trained in touch typing to develop a particular pattern of faster timings in typing the word "the" than many other words they may typically type. Over time, what has happened in these individuals is that their brains have become "hard wired" to move their digits in a relatively consistent recurring pattern to more quickly type the word "the" on a text keyboard; in other words, the typing of the word "the" has been "chunked" in their minds into a single unitary activity in which they no longer actively think about the typing of each individual character of that word. While the word "the" is an example of chunking that is highly commonplace for many who type English text, many individuals (regardless of language) have occupations or interests that involve the use of a vocabulary with words that are not in commonplace use outside that occupation or interest.
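By way of illustration, the timing signature of a chunked word such as "the" could be made observable as inter-key latencies, i.e., the elapsed time between consecutive keypresses. The event format and all values below are assumptions introduced for illustration only.

```python
# Illustrative sketch: computing press-to-press latencies from a sequence
# of keypress events, the kind of timing characteristic in which chunked
# words exhibit short, consistent values.

def digram_latencies(events):
    """events: list of (key, press_time_ms) in chronological order.
    Returns list of ((key1, key2), latency_ms) for consecutive keypresses."""
    out = []
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        out.append(((k1, k2), t2 - t1))
    return out

# A practiced typist "chunking" the word "the": short latencies within
# the word, longer ones at its boundaries (values illustrative).
events = [("t", 0), ("h", 80), ("e", 150), (" ", 400), ("z", 900)]
print(digram_latencies(events))
```

The consistently short "t"-to-"h" and "h"-to-"e" latencies, relative to the latencies surrounding the word, are the sort of recurring pattern that could be stored as part of the pattern data 242.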
For such individuals, the typing of such less commonplace words can be subjected to chunking such that they type those less commonplace words with faster timings than most other persons. The chunking of such less commonplace words can, therefore, bring about timings in operating a keyboard that are unique to those persons. Unfortunately, the measuring of such timings can be adversely affected, depending on the manner in which the keyboard 120 is coupled to the controller 200. As previously discussed, differing embodiments are possible in which the keyboard 120 is or is not incorporated into a casing into which other components of the computing device 1000 are also incorporated. In embodiments in which the keyboard 120 is so incorporated, the opportunity may be provided to more directly couple detectors associated with each of the keys 130 (e.g., one or more of the detectors 131-133) to the controller 200 such that the controller 200 is able to directly monitor the amount of time each of the keys 130 is pressed and/or the amount of time that elapses between the pressing of keys, thereby allowing the controller 200 to do so more accurately. However, in embodiments in which the keyboard 120 is not so incorporated, the wired or wireless coupling of the keyboard 120 to the controller 200 may entail the use of a signaling technology that impairs accurate measurement of such timings by the controller 200. It is commonplace to use some form of digital serial communications mechanism (either wired or wireless) in which sets of binary bits are grouped into what are frequently called "messages" that are serially transmitted and that indicate that a particular key has been pressed or has been released. 
In the case of keyboards employed in playing music, such messages also typically include a value indicative of at least the velocity with which the particular key was pressed or released, and the messages indicating the pressing of a key may be followed by one or more messages providing a value indicative of the amount of pressure with which the particular key is being held in its fully pressed state. Such frequent use of digital serial communications is an outgrowth of the manner in which circuitry co-located with the keys of a keyboard is typically designed to monitor the state of those keys. Over time, it has become commonplace to employ a substantially two-dimensional matrix scanning system to monitor the keys of many keyboards in which rows (or other defined subsets or groups of keys) are sequentially scanned. The positions of keys within such matrices have often resulted in each key being given a "scan code" that usually corresponds to its position, and have often resulted in the keys being identified in the digitally serially transmitted messages by their scan codes. Unfortunately, some of the forms of digital serial communication that are typically used have data transfer rates that are sufficiently slow and/or protocols that are sufficiently time consuming to carry out that the timing with which messages indicating a keypress or key release are transmitted and received may be substantially unconnected to the amount of time a key is held in its fully pressed state or to the amount of time that elapses between the pressing of keys. In embodiments in which the controller 200 is more directly coupled to detectors associated with each of the keys 130 (e.g., one or more of the detectors 131-133), no such digital serial communications is interposed between those detectors and the controller 200, and therefore, no opportunity for such digital serial communications to impair the measurement of timings by the controller 200 exists.
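The two-dimensional matrix scanning described above, with scan codes derived from key positions, can be sketched as follows. The function names and the position-to-scan-code formula are assumptions introduced for illustration only.

```python
# Illustrative sketch: scanning a key matrix row by row and reporting the
# scan codes of pressed keys, where each scan code corresponds to the
# key's (row, column) position in the matrix.

def scan_matrix(read_column_states, rows, cols):
    """Scan the key matrix once; return scan codes of keys currently pressed.
    read_column_states(row) returns a list of booleans, one per column."""
    pressed = []
    for row in range(rows):
        states = read_column_states(row)
        for col in range(cols):
            if states[col]:
                pressed.append(row * cols + col)  # scan code from position
    return pressed

# Simulated 2x4 matrix with the keys at (0, 2) and (1, 0) held down.
state = {0: [False, False, True, False], 1: [True, False, False, False]}
print(scan_matrix(lambda r: state.get(r, [False] * 4), rows=2, cols=4))
# Expect scan codes [2, 4]
```

Repeating such a scan at a fixed rate, and comparing consecutive results, is what allows keypress and key release messages to be generated for serial transmission.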
However, in embodiments in which the controller 200 is not so directly coupled, the keyboard 120 may further comprise a timer 137 employed by circuitry co-located with the detectors associated with each of the keys 130 to enable such circuitry to more accurately measure such timings, and to enable such timings to be indicated in digitally serially transmitted messages. Thus, in such embodiments, the controller 200 may receive messages indicating keypresses and/or key releases (along with scan codes identifying particular ones of the keys 130), velocities, pressures and/or timings via wired or wireless digital serial communications from the keyboard 120. Exactly which of these messages would be received by the controller 200 would at least partly depend on the technology on which the keyboard 120 is based, and accordingly, what types of detectors are associated with each of the keys 130. In embodiments in which the controller 200 is less directly coupled to detectors associated with each of the keys 130 (e.g., one or more of the detectors 131-133), the keyboard 120 and the controller 200 may be coupled by any of a variety of electrically and/or optically conductive cabling by which signals indicating at least keypresses along with one or more physical characteristics of the manner in which a person operates the keyboard 120 may be conveyed. Further, such cabling may employ signaling and/or protocols conforming to any of a variety of industry standards, including without limitation, Ethernet (IEEE-802.3) or IEEE-1394 (more commonly called "Firewire") promulgated by the Institute of Electrical and Electronics Engineers (IEEE) of Washington, DC; Universal Serial Bus (USB) promulgated by the USB Implementers Forum, Inc.
of Portland, OR; RS-422 or RS-232-C promulgated by the Electronic Industries Alliance (EIA) of Arlington, VA; or RC-5720C (more commonly called "Toslink") maintained by the Japan Electronics and Information Technology Industries Association (JEITA) of Tokyo, Japan. Alternatively, the keyboard 120 and the controller 200 may be coupled via a wireless link employing any of a variety of wireless technologies, including without limitation, infrared light, radio frequencies, ultrasound, etc. Further, such a wireless link may employ signaling and/or protocols conforming to any of a variety of industry standards, including without limitation, IEEE 802.11a, 802.11b or 802.11g promulgated by the IEEE; Bluetooth promulgated by the Bluetooth Special Interest Group of Bellevue, WA; or ZigBee promulgated by the ZigBee Alliance of San Ramon, CA. Regardless of the exact manner in which the controller 200 receives indications of at least keypresses along with indications of one or more physical characteristics of the manner in which a person operates the keyboard 120, as previously discussed, the processor circuit 250 is caused by execution of the control routine 245 to compare these indications of physical characteristics to the pattern data 242. The pattern data 242 comprises stored physical characteristics of the manner in which an authorized user of the computing device 1000 operates the keyboard 120. In various embodiments, the controller 200 is operable in a learning mode in which the processor circuit 250 is caused by execution of the control routine 245 to store data indicating physical characteristics of the manner in which an authorized user of the computing device 1000 operates the keyboard 120 based on indications of those physical characteristics received from the keyboard 120 being operated by that authorized user.
In some of these embodiments, the learning mode may be triggered, either under manual control or automatically, by signals caused to be conveyed to the controller 200 by the processor circuit 550 in executing at least the security routine 542. By way of example, a new authorized user may first be allowed access to make use of the system environment 1550 through a password-based authentication process or still another authentication process, perhaps provided for by the operating system 545. Then, the processor circuit 550 is caused by the security routine 542 to signal the controller 200 to enter the learning mode. In some variations, the new authorized user may be allowed to manually indicate that the learning mode should be entered so as to enable the physical characteristics of their operation of the keyboard 120 to be included in the pattern data 242. In other variations, the security routine 542 may cause the processor circuit 550 to await an indication from the controller 200 as to whether or not the new authorized user is recognized as an authorized user by the controller 200 while the controller 200 is not in the learning mode. Upon receiving no such indication of the new authorized user being recognized by the controller 200, the security routine 542 may then cause the processor circuit 550 to signal the controller 200 to enter the learning mode, automatically, in order to enable the new authorized user to be recognized as an authorized user by the controller 200 in the future. 
Regardless of the exact manner in which the learning mode is triggered, in some embodiments, operation of the controller 200 in the learning mode may be completely transparent to a new authorized user (again, a user who has been authenticated by some other mechanism, and is not yet recognizable to the controller 200) such that the new authorized user is not required to type any particular word, phrase or other text, and instead, is able to simply proceed with using the computing device 1000. In such embodiments, the new authorized user is able to use the keyboard 120 to type whatever text they wish while the controller 200 remains in learning mode until sufficient text has been typed by the new authorized user that sufficient data indicative of physical characteristics of the manner in which they use the keyboard 120 is stored as part of the pattern data 242 to enable recognition of that new authorized user by the controller 200 in the future. Alternatively, and as will be explained in greater detail, the new authorized user may be presented with specific text to type as part of obtaining such sufficient data for storage as part of the pattern data 242. In various embodiments, at times during which the controller 200 is not in a learning mode, the controller 200 may be in an authentication mode, either continuously, or perhaps when triggered. In some embodiments, the authentication mode may be triggered by a signal caused to be conveyed to the controller 200 by the processor circuit 550 as a result of executing at least the security routine 542. 
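A minimal sketch of the transparent learning mode described above is given below: samples of per-key characteristics accumulate as the new authorized user types whatever they wish, until there is sufficient data to store as pattern data. The class, its thresholds, and the use of a simple average are all assumptions introduced for illustration only.

```python
# Illustrative sketch: accumulating keystroke characteristics in a learning
# mode until enough samples exist to serve as stored pattern data.

class LearningMode:
    def __init__(self, samples_needed=3):
        self.samples_needed = samples_needed
        self.samples = {}  # key -> list of observed hold times (ms)

    def observe(self, key, hold_ms):
        """Record one observed characteristic for one key."""
        self.samples.setdefault(key, []).append(hold_ms)

    def sufficient(self):
        """Enough data once every observed key has the required samples."""
        return bool(self.samples) and all(
            len(v) >= self.samples_needed for v in self.samples.values())

    def pattern_data(self):
        """Average the samples into data suitable for storage."""
        return {k: sum(v) / len(v) for k, v in self.samples.items()}

lm = LearningMode(samples_needed=2)
for key, hold in [("t", 90), ("h", 100), ("t", 110), ("h", 120)]:
    lm.observe(key, hold)
print(lm.sufficient(), lm.pattern_data())
```

In an actual embodiment the stored characteristics could include velocities, pressures and inter-key timings as well, but the shape of the process, observe until sufficient, then store, is the same.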
Such triggering may be the result of the processor circuit 550 receiving some indication of a person attempting to operate the computing device 1000 at a time when the computing device 1000 has been "locked" (i.e., is in a "locked mode") such that authentication must take place before the computing device 1000 is able to be operated (although some limited ability to operate the computing device 1000 is still provided to enable authentication, e.g., the keyboard 120 is still operable). In other embodiments where the controller 200 is continuously in the authentication mode (at least when not in the learning mode), the controller 200 may continuously compare physical characteristics of all operation of the keyboard 120 to determine whether a person operating the keyboard 120 is an authorized user on a continuing basis. An advantage of the controller 200 being continuously in the authentication mode is that the controller 200 is caused to continuously watch for a change between the keyboard 120 being operated by an authorized user and being operated by a person who is not an authorized user. By way of example, it may be that a person already recognized by the controller 200 as an authorized user (such that access to an application and/or data has been allowed) has stepped away from the location of the computing device 1000 or has misplaced it, thereby tempting someone who is not authorized to attempt to make use of the computing device 1000. As the unauthorized person begins operating the keyboard 120, the controller 200, as part of continuously comparing the physical characteristics of all operation of the keyboard 120 to the pattern data 242, determines that the keyboard 120 is now being operated by someone who is not an authorized user. 
The controller 200 then signals the processor circuit 550 that the person who is now using the computing device 1000 is not an authorized user, enabling the processor circuit 550 to be caused by at least the security routine 542 to cease allowing access to at least an application and/or data (perhaps causing the computing device 1000 to enter into a locked mode in which little in the way of the functionality of the computing device 1000 remains accessible beyond what is needed to support authentication). As previously discussed, the controller 200 receives more data concerning operation of the keyboard 120 than it relays onward to other portions of the computing device 1000. Specifically, in various embodiments, the controller 200 does not relay data concerning physical characteristics of the manner in which a person operates the keyboard 120 to the processor circuit 550 as part of maintaining the pattern data 242 as isolated from the system environment 1550 to preserve security. Instead, the controller 200 makes available to the system environment 1550 the indications of keypresses and key releases needed to enable the keyboard 120 to be employed as part of a user interface to the system environment 1550. Thus, the communication between the controller environment 1250 and the system environment 1550 is largely limited to the controller 200 relaying indications of keypresses and key releases, along with indications of determinations made as to whether a person operating the keyboard 120 is an authorized user, while the processor circuit 550 may be caused to signal the controller 200 to trigger entry into one or both of the learning and authentication modes. In various embodiments, the controller 200 further comprises a keyboard interface 520 accessible to the processor circuit 550 as the mechanism to convey signals indicating at least keypresses (and perhaps, also key releases) to the system environment 1550. 
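The continuous authentication behavior described above, watching for a change from an authorized user to someone else and then signaling for a locked mode, can be sketched as follows. The rolling-window approach, the score scale, and all thresholds are assumptions introduced for illustration only.

```python
# Illustrative sketch: continuous authentication as a rolling average of
# match scores; a sustained mismatch with the stored pattern produces a
# "locked" signal for the system environment.

from collections import deque

class ContinuousAuthenticator:
    def __init__(self, window=5, min_avg_score=0.6):
        self.scores = deque(maxlen=window)  # only the most recent scores
        self.min_avg_score = min_avg_score

    def observe(self, match_score):
        """match_score in [0, 1]: how well recent keystrokes fit the
        stored pattern. Returns 'locked' when the rolling average falls
        below the threshold, else 'allowed'."""
        self.scores.append(match_score)
        avg = sum(self.scores) / len(self.scores)
        return "locked" if avg < self.min_avg_score else "allowed"

# An authorized user types, steps away, and someone else takes over.
auth = ContinuousAuthenticator(window=3)
results = [auth.observe(s) for s in [0.9, 0.8, 0.2, 0.1, 0.1]]
print(results)
```

The window keeps a single anomalous keystroke from triggering a lock, while a sustained run of mismatched keystrokes, as from an unauthorized person, does.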
As those familiar with longstanding architectures commonly employed in computer systems will readily recognize, the operating system 545 may be created with one or more presumptions that a particular form of keyboard interface accessible to the operating system 545 with particular control and/or data bits at particular input/output (I/O) and/or memory addresses will be provided by the computing device 1000. Such presumptions may arise where the particular form of keyboard interface has been in longstanding use across a great many computing devices over an extended period of years such that it is looked upon as a "standard" feature of such computing devices. Therefore, the controller 200 may further comprise a form of the keyboard interface 520 that comprises circuitry implementing registers and/or memory buffer locations accessible to the processor circuit 550 at such specific addresses to mimic the behavior of such an expected keyboard interface. Alternatively, the control routine 245 may cause the processor circuit 250 to simulate the presence of the keyboard interface 520 by causing the processor circuit 250 to access memory locations in a storage device accessible to the processor circuit 550 (perhaps memory locations within the storage 540) in a manner conforming to such presumptions. In various embodiments, the controller 200 may signal its determinations of whether a person operating the keyboard 120 is an authorized user, or not, through a second level authentication carried out between the controller environment 1250 and the system environment 1550. 
More precisely, after the processor 250 has been caused by the control routine 245 to determine that a person operating the keyboard 120 is an authorized user, the processor 250 may be caused to operate the keyboard interface 520 in a manner that mimics the behavior of a person providing a password, an encryption key (perhaps as part of an asymmetric key encryption system between the two environments), or other input to the system environment 1550. In essence, the controller 200 must, itself, demonstrate that it is an "authorized user" from the perspective of the system environment 1550, as an additional layer of security in controlling access to the system environment 1550. This may also be deemed desirable in enhancing security by isolating the system environment 1550 from having access to aspects of that person's user account on the computing device 1000, such as an account identifier. Since, from the perspective of the system environment 1550, it is the controller 200 that has an account with the system environment 1550, and not the individual person operating the keyboard 120, aspects of that person's user account remain known only in the controller environment 1250, and not in the system environment 1550 where, possibly, malicious software may pass such information onward. In various embodiments, and as previously discussed, the indications of keypresses and/or key releases may identify the ones of the keys 130 that have been pressed or released via scan codes. In alternative embodiments, the keys 130 may be identified by a binary code that identifies the text characters and control functions that have been entered through operation of the keys 130 of the keyboard 120, such as the American Standard Code for Information Interchange (ASCII). In embodiments in which scan codes are employed, the storage 540 may further store mapping data 543 comprising data matching scan codes to text characters and control functions.
The mapping data 543 is employed by the processor circuit 550 in executing a sequence of instructions of the operating system 545 to derive the text characters and control functions indicated by operation of the keyboard 120 as part of implementing a user interface by which an authorized user may interact with the system environment 1550. Returning to FIG. 1, it should be noted that the control of access to an application and/or to data may not be limited to or may not involve either the local application 546 or the local data 548 (if either is present). More specifically, such access control may alternatively or additionally be applied to an application or data stored within a different storage device incorporated into another, remotely located computing device, such as a remote server 900. In various embodiments, the computing device 1000 may further comprise a network interface 590 enabling access by the computing device 1000 to other computing devices (e.g., the remote server 900) through a network (e.g., the Internet). In a remote environment 1950 made accessible to the computing device 1000 through the network interface 590, the remote server 900 may comprise a processor circuit 950, a storage 940 accessible to the processor circuit 950 and storing an operating system 945 and/or a security routine 942 executed by the processor circuit 950, and a remote storage 965 storing an application and/or data meant to be made available to an authorized user of the computing device 1000. In some of these embodiments, the processor circuit 550 is caused by at least the security routine 542 to, itself, either allow or deny access to an application and/or data stored within the remote storage 965 in a manner substantially similar to what has been described in allowing or denying access to the local application 546 and/or the local data 548.
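A minimal sketch of how mapping data such as the mapping data 543 might match scan codes to text characters is shown below. The few entries used here are drawn from PS/2 scan code set 2 for concreteness, but the actual code set and table contents are not specified by the source, and a real table would also cover control functions and key-release codes.

```python
# Illustrative subset of a scan-code-to-character table (PS/2 set 2 values).
SCAN_TO_CHAR = {0x1C: "a", 0x32: "b", 0x21: "c", 0x29: " ", 0x5A: "\n"}

def decode_scan_codes(codes):
    """Translate a sequence of key-press scan codes into text, as the
    operating system would using the mapping data; unknown codes are
    rendered as '?' in this sketch."""
    return "".join(SCAN_TO_CHAR.get(c, "?") for c in codes)
```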
In others of these embodiments, the processor circuit 550 is caused by at least the security routine 542 to relay the indications received from the controller 200 of its determinations of whether a person operating the keyboard 120 is an authorized user of the computing device 1000, or not, thereby enabling the processor circuit 950 to independently respond to those determinations in acting to allow or deny access to an application and/or data stored within the remote storage 965. Alternatively, in various embodiments, the controller 200 may directly signal the network interface 590 with determinations of whether a person operating the keyboard 120 is an authorized user of the computing device 1000, or not, thereby enabling the network interface 590 to independently respond to those determinations in acting to allow or deny access to a network to which the computing device 1000 is coupled through the network interface 590. As will be explained in greater detail, this more direct interaction between the controller 200 and the network interface 590 may be implemented as part of embodiments in which the controller 200 is provided with more direct access to various other portions of the electronic system 1000 such that operation of the system environment 1550 is more directly controlled from the controller environment 1250. The network interface 590 comprises circuitry providing at least some of the requisite functionality to enable access to a network based on either a wired or wireless technology (e.g., incorporating one or more transceiver components). The network interface 590 may also be at least partially implemented with sequences of instructions of the operating system 545 executed by the processor circuit 550 (e.g., to implement one or more aspects of a protocol stack).
Where the network to which the computing device 1000 is coupled through the network interface 590 entails the use of electrically and/or optically conductive cabling, such cabling may employ signaling and/or protocols conforming to any of a variety of industry standards, including without limitation, Ethernet (IEEE-802.3) or IEEE-1394. Alternatively, where the network to which the computing device 1000 is coupled through the network interface 590 entails the use of a wireless link, such a wireless link may employ signaling and/or protocols conforming to any of a variety of industry standards, including without limitation, IEEE 802.11a, 802.11b, 802.11g, 802.16, 802.20 (commonly referred to as "Mobile Broadband Wireless Access"); Bluetooth; ZigBee; or a cellular radiotelephone service such as GSM with General Packet Radio Service (GSM/GPRS), CDMA/1xRTT, Enhanced Data Rates for Global Evolution (EDGE), Evolution Data Only/Optimized (EV-DO), Evolution For Data and Voice (EV-DV), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), 4G LTE, etc. FIG. 3 illustrates a block diagram that is partially a subset of the block diagram of FIG. 1, and that also depicts details of the use of a display 180 along with the keyboard 120 and their interaction with both the controller environment 1250 and the system environment 1550. In various embodiments, at least the keyboard 120 and the display 180, together, provide a user interface in which the display 180 is also employed in authentication. The display 180 may be based on any of a variety of display technologies, including without limitation, a liquid crystal display (LCD), including touch-sensitive, color, and thin-film transistor (TFT) LCD; a plasma display; a light emitting diode (LED) display; an organic light emitting diode (OLED) display; a cathode ray tube (CRT) display, etc.
As previously discussed, in various embodiments, at a time when the controller 200 is operating in a learning mode, data concerning physical characteristics of a person operating the keyboard 120 may be received and stored by the controller as part of the pattern data 242 regardless of what that person types. In such embodiments, the controller 200 remains in the learning mode until sufficient text has been typed that data indicative of the physical characteristics of the manner in which that person uses the keyboard 120, sufficient to enable recognition by the controller 200 in the future, is able to be stored as part of the pattern data 242. Alternatively, and as will be explained in greater detail, the new authorized user may be presented with specific text to type as part of obtaining such sufficient data for storage as part of the pattern data 242. In various embodiments where a person is allowed to type whatever they wish during the learning mode, one or more thresholds may be set to determine whether sufficient data has been received from the keyboard 120, either to be stored as part of the pattern data 242 so that that person can later be authenticated, or to be compared to the pattern data 242 during authentication. Having insufficient data stored as part of the pattern data 242 will likely impair any future effort at authentication, regardless of what an authorized user types during authentication, and having insufficient data received for comparison to the pattern data 242 during authentication will impair efforts at authentication at that time. Such a threshold may include, without limitation, a predetermined quantity of text characters typed, a predetermined quantity of words, a predetermined quantity of complete sentences, a predetermined percentage of the keys 130 of the keyboard 120 being used, etc.
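The kinds of sufficiency thresholds just listed can be combined into a single check along the following lines. All of the specific threshold values, the 101-key total, and the function name are assumptions for illustration; the source names the categories of thresholds but not their values.

```python
def sufficient_sample(text, keys_used, total_keys=101,
                      min_chars=200, min_words=40, min_key_fraction=0.5):
    """Decide whether a typed sample is large and varied enough: enough
    characters, enough words, and a large enough fraction of the keyboard's
    distinct keys exercised. Threshold values are illustrative only."""
    return (len(text) >= min_chars and
            len(text.split()) >= min_words and
            len(keys_used) / total_keys >= min_key_fraction)
```

The controller could apply the same check both in the learning mode (before committing data to the pattern data 242) and in the authentication mode (before trusting a comparison against it).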
It is advantageous to have such data include physical characteristics of an authorized user's use of a larger quantity of different ones of the keys 130 of the keyboard 120, rather than a smaller quantity, to enhance the accuracy with which the controller 200 is able to distinguish that authorized user from others. It is also advantageous to have such data based on the use of numerous ones (if not all) of an authorized user's digits (or at least all of their fingers), and on the use of each of those digits in reaching ones of the keys 130 that are at different locations so as to cause each of those digits to be employed in reaching in different directions. Employing a greater variety of movement of a greater variety of an authorized user's digits draws out more of the uniqueness in the physical characteristics of the manner in which that authorized user operates the keyboard 120, thereby enhancing authentication accuracy. Thus, in various embodiments, during at least some instances, it may be deemed desirable to provide a person operating the keyboard 120 with preselected text for them to type, instead of relying upon whatever that person chooses to type. Such preselected text is chosen to cause that person to make a wider variety of movements with more of their digits to reach a greater variety of the keys 130 in order to ensure that sufficient data indicative of the physical characteristics with which they operate the keyboard 120 is obtained. More specifically, the storage 240 may also store a testing data 248 that the processor circuit 250 is caused to convey by the control routine 245 from the controller environment 1250 to the system environment 1550 to be caused to be displayed on the display 180 by the processor circuit 550 under the control of at least the security routine 542.
The testing data 248 comprises text that a person operating the keyboard 120 is prompted to type in order to cause that person to operate the keyboard in a manner that provides data that is sufficient for use in either learning the physical characteristics with which that person operates the keyboard 120 or to authenticate that person as either an authorized user, or not (i.e., causes that person to use at least a defined quantity of digits of their hands to operate at least a defined quantity of the keys 130 of the keyboard 120). In some of these embodiments in which the testing data 248 is employed, the storage 240 may additionally store a mapping data 243 to enable the processor circuit 250 to match scan codes of the ones of the keys 130 that are pressed by a person being prompted with the testing data 248 to ensure that all of the testing data 248 is typed, and correctly, as part of ensuring that sufficient data is received. Alternatively, the testing data 248 may further comprise a listing of the scan codes of the ones of the keys 130 that are expected to be typed by someone being prompted with the text of the testing data 248 in order to ensure that all of that text is typed, and correctly, without requiring the processor circuit 250 to match the received scan codes to that text. In various embodiments, at least a portion of the pattern data 242 indicating physical characteristics of the manner in which an authorized user operates the keyboard 120 is refined over time, rather than allowed to remain static since being stored during an instance of the controller 200 being in the learning mode. 
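The check described above, in which the controller verifies that all of the prompted text was typed correctly, can be sketched by comparing received press scan codes against the expected listing stored with the testing data. The function name and the (complete, error_index) return convention are illustrative assumptions.

```python
def verify_prompted_entry(expected_codes, received_codes):
    """Compare received press scan codes against the listing expected for
    the preselected text; return (complete, index_of_first_error), where
    the index is None when everything matched."""
    for i, (exp, got) in enumerate(zip(expected_codes, received_codes)):
        if exp != got:
            return False, i  # wrong key pressed at position i
    if len(received_codes) < len(expected_codes):
        return False, len(received_codes)  # entry not yet complete
    return True, None
```

Storing the expected scan codes directly with the testing data 248, as the alternative in the text suggests, lets this comparison run without consulting a scan-code-to-character table at all.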
Such recurring refinement of such data may be carried out in recognition of the fact that various physical characteristics of the manner in which a person is able to move each of the digits of their hands changes as they age and in response to other events that may occur during their lifetimes, and such changes will result in changes in the physical characteristics of the manner in which they operate a keyboard. In some of such embodiments, such refinement of such data may be carried out at a predetermined interval of time or other form of recurring interval that may be associated with frequency or extent of use of the computing device 1000 over time. At such an interval, the processor circuit 250, in executing the control routine 245, may place the controller 200 in a refinement mode in which, following authentication of an authorized user, data indicative of physical characteristics of the manner in which that authorized user currently operates the keyboard 120 may be employed to either adjust or replace data previously stored as part of the pattern data 242 concerning those physical characteristics. In some variants of such embodiments, the authorized user may be prompted to type preselected text of the testing data 248 in order to ensure sufficiency of the data received from the keyboard 120, as previously discussed. In others of such embodiments, various thresholds may be employed for rates of change of one or more physical characteristics, with those thresholds being selected to trigger entry into a refinement mode when physical characteristics of the manner in which an authorized user operates the keyboard 120 have changed enough to meet a defined threshold, but to avoid triggering entry into a refinement mode when those physical characteristics appear to have changed to such an extent that it is more likely that the apparent change is due to someone other than that authorized user operating the keyboard 120.
In other words, one or more thresholds may be employed that are selected to enable detection of instances in which such physical characteristics of an authorized user have indeed changed to the extent that refinement is needed, while avoiding impairment in the accuracy with which an authorized user is distinguished from others. As an alternative to the use of such thresholds, in other embodiments in which such refinement is performed, sequences of instructions of the control routine 245 may cause the processor circuit 250 to employ one or more forms of statistical and/or predictive analysis in performing such refinement of at least a portion of the pattern data 242, including without limitation, causing the processor circuit 250 to implement a Bayesian inference engine. Indeed, in some of these embodiments, a Bayesian inference engine may be caused to be implemented by the processor circuit 250 to analyze data concerning physical characteristics of the manner in which a person operates the keyboard 120 during the learning mode to identify significant unique aspects of those physical characteristics that may be later employed in authenticating that person as an authorized user. As has been discussed, one or more of the learning mode, the authentication mode and the refinement mode may be entered into under differing circumstances and/or triggered in different ways in various possible embodiments. As has also been discussed, the manner in which each of these modes is presented to a person desiring to use the computing device 1000 may differ among various possible embodiments. In some embodiments, the learning, authentication and/or refinement modes may be presented in a manner similar to the manner in which each of the setting, using and changing of passwords is typically presented.
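The two-sided threshold scheme described above, where moderate drift triggers refinement but large drift is treated as a possible change of operator, can be sketched as follows. The threshold values are assumptions, and the exponential blend stands in for whatever statistical or Bayesian refinement an implementation actually uses.

```python
def refinement_decision(stored, observed, refine_low=0.10, impostor_high=0.40):
    """Classify relative drift in one physical characteristic: small drift
    keeps the stored pattern, moderate drift triggers refinement, and
    large drift is suspected to be a different operator. Threshold values
    are illustrative only."""
    drift = abs(observed - stored) / stored
    if drift < refine_low:
        return "keep"
    if drift < impostor_high:
        return "refine"
    return "suspect"

def refine(stored, observed, weight=0.2):
    """Blend the new observation into the stored value; a simple
    exponential update standing in for statistical refinement."""
    return (1 - weight) * stored + weight * observed
```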
In other words, in such embodiments, there may be a distinct "login screen" that the processor circuit 550 causes to be presented on the display 180 in which a person desiring to make use of the electronic system 1000 is prompted to type text. That text may be preselected text of the testing data 248, as previously discussed. The login screen is presented at times when the computing device 1000 is in a locked mode in which much of the functionality of the computing device 1000, including access to applications and/or data (e.g., the local application 546, the local data 548, and/or applications or data stored within the remote storage 965), is not allowed (except for some limited amount of functionality still being provided to enable authentication, e.g., the keyboard 120). Such a login screen may also present that person one or more alternative ways to be authenticated as an authorized user of the computing device 1000, such as the use of a password, especially where a person who is an authorized user has not yet entered text during the learning mode such that the controller 200 would recognize them as an authorized user, or where the keyboard 120 is currently not coupled such that typing of text is not possible or is made more difficult (in embodiments in which the keyboard 120 is not integrated into the computing device 1000, as has been discussed). In these embodiments, upon successful authentication of an authorized user (whether through typing text presented in the login screen, entering a password, etc.), the computing device 1000 enters an "unlocked" mode in which the authorized user is able to make use of its functionality. As is typical in implementations of security on many computing devices, the computing device 1000 may revert to the earlier locked mode if a predetermined period of time elapses since the authorized user last interacted with at least some portion of the computing device 1000.
However, while the computing device 1000 is in the unlocked mode, the user interface provided to the authorized user via at least the display 180 may present a selectable icon, a command line or other mechanism by which an authorized user who has not yet been made recognizable to the controller 200 may indicate that they wish to be made so recognizable such that the processor circuit 550 is caused by the security routine 542 to respond to that indication by signaling the controller 200 to enter the learning mode. Upon entry of the controller 200 into the learning mode, the security routine 542 may prompt that authorized user through the display 180 to begin entering text, perhaps text of that authorized user's choosing or preselected text (e.g., the previously discussed preselected text of the testing data 248). Also, while the computing device 1000 is in the unlocked mode, the controller 200 may continue to be in the authentication mode, despite the authorized user having already been authenticated through use of the login screen, to continuously monitor data received from the keyboard 120 for a change in the physical characteristics of the manner in which the keyboard 120 is being operated that indicates that someone other than the authorized user is operating the keyboard 120. In response to such an indication of such a change in persons operating the keyboard 120, the controller 200 determines whether the person now operating the keyboard 120 is also an authorized user, or not, and provides a signal to the processor circuit 550 indicating the results of that determination. If the controller 200 determines that the person now operating the keyboard 120 is another authorized user, then the processor circuit 550 is caused by at least the security routine 542 to maintain the computing device 1000 in the unlocked mode. 
However, if the controller 200 determines that the person now operating the keyboard 120 is not an authorized user, then the processor circuit 550 is caused by at least the security routine 542 to place the computing device 1000 in the locked mode, perhaps once again causing the login screen to be presented on the display 180. Further, while the computing device 1000 is in the unlocked mode, the controller 200 may signal the processor circuit 550 with an indication that one or more physical characteristics of the manner in which the authorized user is operating the keyboard 120 has changed, either in response to a change in those physical characteristics observed as the authorized user entered the text they were prompted to enter to be authenticated or in response to a change in those physical characteristics observed as the authorized user enters text as part of making use of the computing device 1000 while it is in the unlocked mode. In response to this indication from the controller 200, the processor circuit 550 may be caused by at least the security routine 542 to prompt the authorized user (perhaps through a "pop-up" window caused to be presented on the display 180) to enter preselected text (e.g., the preselected text of the testing data 248) or some other text to enable the controller 200 to refine at least a portion of the pattern data 242 pertaining to physical characteristics of the manner in which that authorized user operates the keyboard 120. In other embodiments, the learning, authentication and/or refinement modes may be entered into in a manner more transparent to an authorized user of the computing device 1000, such that explicit prompts to the authorized user to take some particular action as part of maintaining security in the use of the computing device 1000 are minimized. 
In such other embodiments, what the processor circuit 550 is caused by at least the security routine 542 to present on the display 180 may change little in appearance between times when the computing device 1000 is in locked mode and is in unlocked mode. More specifically, in these other embodiments, while in locked mode, the security routine 542 cooperates with at least the operating system 545 and/or one or more applications (e.g., the local application 546) to provide a person desiring to make use of the computing device 1000 a limited degree of access to the functionality of the computing device 1000. This limited degree of access includes access to some form of functionality provided by the operating system 545 and/or one or more applications that includes an opportunity during the locked mode for that person to begin entering text. Also while in locked mode, the security routine 542 signals the controller 200 to trigger its entry into the authentication mode, so as to be ready to authenticate that person as being either an authorized user, or not, if they should make use of the opportunity to enter text. As long as that person makes use of only the limited degree of access, and perhaps enters text such that authentication may be carried out, then the user experience provided to that person may remain transparent. However, should that person attempt to access an application or data that is not allowed to be accessed during the locked mode, then the processor circuit 550 may be caused by at least the security routine 542 to present a more explicit prompt for that person to enter text (perhaps a preselected text, such as the preselected text of the testing data 248) to more quickly enable authentication of that person as either an authorized user, or not. In some variants, that more explicit prompt may also offer one or more other ways by which that person may be authenticated, including without limitation, the use of a password.
Again, the provision of an alternate way to be authenticated may be desirable to provide where that person has not yet entered text during the learning mode such that the controller 200 would recognize them, or where the keyboard 120 is currently not coupled. Whether, in these other embodiments, authentication of that person as an authorized user is ultimately accomplished through the entry of text using the opportunity provided during the locked mode, through the entry of text in response to being explicitly prompted, or through some other way, such successful authentication as an authorized user results in the computing device 1000 being caused by the processor circuit 550 to enter the unlocked mode. Again, the unlocked mode is meant to appear little different from the locked mode, except that the processor circuit 550 is now caused by at least the security routine 542 to provide greater access to applications and/or data to that person, now authenticated as an authorized user. By way of example, whereas that now authorized user was not able to access an email account through an email application of the computing device 1000 during the locked mode despite possibly being able to open the email application, the email account is allowed to be accessed during the unlocked mode. Also by way of example, while that now authorized user was not able to access content on a network to which the computing device 1000 may be coupled via the network interface 590 during the locked mode while yet being able to access publicly available content on the Internet, that content on that network may become accessible during the unlocked mode. Further by way of example, while data comprising addresses and phone numbers remained encrypted (and thus, unreadable) during the locked mode, that data may become unencrypted during the unlocked mode.
Again, while the computing device 1000 is in the unlocked mode, the user interface provided to that now authorized user may present a selectable icon, a command line or other mechanism by which an authorized user who has not yet been made recognizable to the controller 200 may indicate that they wish to be made so recognizable such that the processor circuit 550 is caused by the security routine 542 to respond to that indication by signaling the controller 200 to enter the learning mode. Again, such an authorized user may be prompted to begin entering text, perhaps the preselected text of the testing data 248. Also, again, while the computing device 1000 is in the unlocked mode, the controller 200 may continue to be in the authentication mode to continuously monitor data received from the keyboard 120 for a change in the physical characteristics of the manner in which the keyboard 120 is being operated that indicates that someone other than that now authorized user is operating the keyboard 120. Again, such a change in persons operating the keyboard 120 may be responded to with a continuance of being in the unlocked mode if it is determined that the person now operating the keyboard 120 is another authorized user, or such a change in persons may be responded to with return to being in the locked mode if it is determined that the person now operating the keyboard 120 is not an authorized user. In some variants where the locked mode has been so returned to, there may continue to be little change in appearance in what is presented on the display 180, while in other variants the fact that the locked mode has been returned to as a result of operation of the keyboard by someone who is not an authorized user may be accompanied with an explicit notice being presented on the display 180 to the effect that the computing device 1000 is now in the locked mode (perhaps with no applications or data being allowed to be accessed, at all).
Further, again, while the computing device 1000 is in the unlocked mode, the controller 200 may signal the processor circuit 550 with an indication that one or more physical characteristics of the manner in which that now authorized user is operating the keyboard 120 has changed, either in response to a change in those physical characteristics observed as that now authorized user entered the text they were prompted to enter to be authenticated or in response to a change in those physical characteristics observed as that now authorized user enters text as part of making use of the computing device 1000 while it is in the unlocked mode. Again, in response to this indication from the controller 200, the processor circuit 550 may be caused by at least the security routine 542 to prompt that now authorized user to enter text, perhaps the preselected text of the testing data 248, to enable the controller 200 to refine at least a portion of the pattern data 242 pertaining to physical characteristics of the manner in which that now authorized user operates the keyboard 120. Still again, while the computing device 1000 is in the unlocked mode, the computing device 1000 may revert to the earlier locked mode if a predetermined period of time elapses since the authorized user last interacted with at least some portion of the computing device 1000. However, the computing device 1000 may further comprise one or more proximity sensors, perhaps as a component of the keyboard 120, that sense the presence or absence of a person in relatively close proximity to the computing device 1000 and which may signal the controller 200 (or other portion of the computing device 1000) to the effect that the person last authenticated as an authorized user never left the proximity of the computing device 1000 such that the computing device 1000 should refrain from reverting to the earlier locked mode, despite the lack of interaction. 
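The interaction between the inactivity timeout and the proximity-sensor override described above reduces to a simple predicate; the function name and parameters are illustrative assumptions.

```python
def should_lock(idle_seconds, timeout_seconds, person_present):
    """Revert to the locked mode once the inactivity timeout elapses,
    unless a proximity sensor reports that the last-authenticated
    authorized user never left the vicinity of the device."""
    return idle_seconds >= timeout_seconds and not person_present
```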
It should be noted that although much of the discussion herein focuses on the use of two operating environments in which one (e.g., the controller environment 1250) serves to control access by would-be users to the other (e.g., the system environment 1550), in various embodiments, the processor 250 may be caused (by execution of the control routine 245) to employ the characteristics with which a person operates the keyboard 120 to authenticate a person as being authorized to interact more directly with the controller environment 1250, instead of (or in addition to) the system environment 1550. As those familiar with setting up typical computing devices will readily recognize, many computing devices have a "setup" or "configuration" mode in which access is given to various menus or other mechanisms to configure various aspects of the manner in which a computing device functions. Such settings are often independent of any operating system (e.g., the operating system 545), and indeed are often accessed and used in situations where a computing device is being readied for use at a stage when no operating system has yet been installed such that what there is to the system environment 1550 is not yet operable. Alternatively or additionally, such a configuration or setup mode may provide access to a set of manually-selectable diagnostics routines and/or utilities (e.g., a routine to format a storage device for use). FIG. 4 illustrates one embodiment of a logic flow 2100. The logic flow 2100 may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow 2100 may illustrate operations performed by components of the computing device 1000, possibly including one or both of the processor circuits 250 and 550. At 2110, a computing device (e.g., the computing device 1000) is in a locked mode in which a controller of the computing device (e.g., the controller 200) is in an authentication mode.
In the locked mode, a prompt for a person to enter text is presented (perhaps visually on a display, such as the display 180), the text possibly being a preselected text (e.g., the preselected text of the testing data 248). In the authentication mode, the controller awaits entry of text from a keyboard (e.g., the keyboard 120) to enable the controller to authenticate a person operating the keyboard as either an authorized user of the computing device, or not, through comparison of physical characteristics of the manner in which that person operates the keyboard to stored physical characteristics of the manner in which one or more authorized users use the keyboard. At 2120, in response to a person having entered text (or possibly made use of an alternate authentication mechanism, such as a password), that person is authenticated as being either an authorized user of the computing device, or not. If, at 2122, that person is determined to not be an authorized user, then the computing device remains in the locked mode and the controller of the computing device remains in the authentication mode at 2110. However, if at 2122, that person is determined to be an authorized user, then the computing device is placed in an unlocked mode at 2130 in which access is allowed to one or more applications and/or data to which access was not allowed during the locked mode. At 2140, the controller determines whether or not one or more physical characteristics of the manner in which the person (now determined to be an authorized user) operates the keyboard has changed to an extent consistent with the degree of change expected to occur in such physical characteristics over time (perhaps simply as a result of aging) such that refinement of physical characteristics stored for that person is needed. 
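One simple, illustrative way such refinement of stored physical characteristics could be implemented is an exponential moving average that gradually blends new observations into the stored pattern data; the disclosure does not specify an update rule, so the function and weighting below are assumptions:

```python
def refine_pattern(stored, observed, alpha=0.1):
    """Blend newly observed per-key physical characteristics (e.g., dwell
    times, velocities, pressures) into the stored pattern data.

    stored, observed: dicts mapping a key (e.g., a scan code) to a
                      measured value
    alpha: weight given to the new observation (0 < alpha <= 1)
    """
    refined = dict(stored)
    for key, value in observed.items():
        if key in refined:
            refined[key] = (1.0 - alpha) * refined[key] + alpha * value
        else:
            refined[key] = value  # first observation for this key
    return refined
```
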
If it is determined that such refinement is needed, then the controller enters a refinement mode at 2142 in which the person may be presented with a prompt to type text, perhaps a preselected text, to provide input indicative of the current physical characteristics with which the person now operates the keyboard. Regardless of whether refinement was determined to be necessary, or not, a determination is made at 2150 by the controller of whether or not one or more of the physical characteristics of the manner in which the keyboard is being operated has changed to an extent indicating that a change has occurred in who is operating the keyboard. If it is determined that such a change has occurred, then the person now operating the keyboard is authenticated as being either an authorized user, or not, by the controller at 2120. However, if it is determined that no such change in who is operating the keyboard has occurred, then a check is made at 2160 for whether or not the person has operated the keyboard (or some other component of a user interface of the computing device) to explicitly log out of using the computing device, thereby explicitly indicating that the computing device is to return to being in the locked mode at 2110. However, if it is determined that the person has not logged out, then a check is made at 2170 for whether or not a predetermined period of time has elapsed since the computing device last detected activity indicative of the person continuing to use the computing device. If that period of time has elapsed, then it is taken as an indication that the person is no longer making use of the computing device, and the computing device returns to being in the locked mode at 2110. 
However, if that period of time has not elapsed, then it is taken as an indication that the person is likely still making some degree of use of the computing device, and a check is again made at 2140 by the controller for an indication of a change in the manner in which the person operates the keyboard that is to an extent requiring entry into the refinement mode. Despite the depiction of a specific order in which the determinations at 2140, 2150, 2160 and 2170 are made, it should be noted that these determinations are made repeatedly throughout the time that the computing device is in the unlocked mode, and that these determinations need not be made in any specific order. Furthermore, one or more of these determinations may be made substantially simultaneously. FIG. 5 illustrates one embodiment of a logic flow 2200. The logic flow 2200 may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow 2200 may illustrate operations performed by components of the computing device 1000, possibly including one or both of the processor circuits 250 and 550. At 2210, a computing device (e.g., the computing device 1000) is in a locked mode in which a controller of the computing device (e.g., the controller 200) is in an authentication mode. In the locked mode, a person is allowed access to a limited subset of otherwise available functionality of the computing device, perhaps being allowed to make some limited use of one application while being denied access to another application, and/or perhaps being allowed to access some data that may be deemed to already be in the public domain while being denied access to other data that isn't. Among the limited subset of functionality to which a person is allowed access in the locked mode is an application (e.g., a text editor or word processing application) that affords a person an opportunity to enter text.
In the authentication mode, the controller awaits entry of text from a keyboard (e.g., the keyboard 120) by a person making use of that opportunity to enter text to enable the controller to authenticate that person as either an authorized user of the computing device, or not, through comparison of physical characteristics of the manner in which that person operates the keyboard to stored physical characteristics of the manner in which one or more authorized users use the keyboard. At 2220, if the person has made sufficient use of the opportunity to enter text such that they have entered sufficient text to enable authentication before that person attempts to access an application or data to which access is not allowed, then authentication is performed by the controller. However, if that person does not enter sufficient text before attempting such access (perhaps attempting such access before entering any text), then that person is explicitly prompted (perhaps visually on a display of the computing device, e.g., the display 180) to enter text (or possibly to make use of an alternate mechanism of authentication) at 2224. In that prompting the person may be presented with a preselected text to enter (e.g., the preselected text of the testing data 248). With or without such prompting, the person is authenticated as being either an authorized user of the computing device, or not, at 2222. If that person is determined to not be an authorized user, then the computing device remains in the locked mode and the controller of the computing device remains in the authentication mode at 2210. However, if that person is determined to be an authorized user, then the computing device is placed in an unlocked mode at 2230 in which access is allowed to one or more applications and/or data to which access was not allowed during the locked mode. 
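The comparison underlying such authentication — matching observed physical characteristics of keyboard operation against the stored characteristics of an authorized user — could be sketched as follows; the feature representation, scoring rule, and tolerance are illustrative assumptions, not taken from the disclosure:

```python
def matches_authorized_user(observed, stored, tolerance=0.25):
    """Return True if observed per-key characteristics are close enough
    to the stored characteristics of an authorized user.

    observed, stored: dicts mapping a key (e.g., a scan code) to a
                      measured value such as a dwell time, velocity,
                      or pressure
    tolerance: maximum allowed mean relative deviation
    """
    common = [k for k in observed if k in stored]
    if not common:
        return False  # insufficient text entered to enable comparison
    deviation = sum(
        abs(observed[k] - stored[k]) / max(abs(stored[k]), 1e-9)
        for k in common
    ) / len(common)
    return deviation <= tolerance
```
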
At 2240, the controller determines whether or not one or more physical characteristics of the manner in which the person (now determined to be an authorized user) operates the keyboard has changed to an extent consistent with the degree of change expected to occur in such physical characteristics over time (perhaps simply as a result of aging) such that refinement of such physical characteristics stored for that person is needed. If it is determined that such refinement is needed, then the controller enters a refinement mode at 2242 in which the person may be presented with a prompt to type text, perhaps a preselected text, to provide input indicative of the current physical characteristics with which the person now operates the keyboard. Regardless of whether refinement was determined to be necessary, or not, a determination is made at 2250 by the controller of whether or not one or more of the physical characteristics of the manner in which the keyboard is being operated has changed to an extent indicating that a change has occurred in who is operating the keyboard. If it is determined that such a change has occurred, then the computing device reverts to being in the locked mode at 2210. However, if it is determined that no such change in who is operating the keyboard has occurred, then a check is made at 2260 for whether or not the person has operated the keyboard (or some other component of a user interface of the computing device) to explicitly log out of using the computing device, thereby explicitly indicating that the computing device is to return to being in the locked mode at 2210. However, if it is determined that the person has not logged out, then a check is made at 2270 for whether or not a predetermined period of time has elapsed since the computing device last detected activity indicative of the person continuing to use the computing device. 
If that period of time has elapsed, then a check is made at 2272 as to whether one or more sensors has detected the continued presence of the person during the predetermined period of time. If both the predetermined period of time has elapsed, and the person is not in the proximity of the computing device, then it is taken as an indication that the person is no longer making use of the computing device, and the computing device returns to being in the locked mode at 2210. However, if that period of time has not elapsed, or if the one or more sensors has detected the continued presence of the person during the predetermined period of time, then it is taken as an indication that the person is likely still making some degree of use of the computing device, and a check is again made at 2240 by the controller for an indication of a change in the manner that the person operates the keyboard that is to an extent requiring entry into the refinement mode. Despite the depiction of a specific order in which the determinations at 2240, 2250, 2260 and 2270 are made, it should be noted that these determinations are made repeatedly throughout the time that the computing device is in the unlocked mode, and that these determinations need not be made in any specific order. Furthermore, one or more of these determinations may be made substantially simultaneously. FIG. 6 illustrates one embodiment of a logic flow 2300. The logic flow 2300 may be representative of some or all of the operations executed by one or more embodiments described herein. More specifically, the logic flow 2300 may illustrate operations performed by components of the computing device 1000, possibly including one or both of the processor circuits 250 and 550. 
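The repeated determinations made while unlocked in logic flows 2100 and 2200 can be summarized as a small state machine; in this illustrative sketch the individual determinations are passed in as booleans rather than measured, and the ordering is one of the many valid orderings noted above:

```python
LOCKED, UNLOCKED, REFINE = 'locked', 'unlocked', 'refine'

def next_state(authenticated, drift_detected, operator_changed,
               logged_out, timed_out, person_present):
    """One pass through the determinations made while the computing
    device is unlocked (blocks 2240, 2250, 2260 and 2270/2272 of logic
    flow 2200, and their counterparts in logic flow 2100)."""
    if not authenticated:
        return LOCKED        # remain in the locked mode
    if operator_changed:
        return LOCKED        # a change in who operates the keyboard
    if logged_out:
        return LOCKED        # explicit log out
    if timed_out and not person_present:
        return LOCKED        # inactivity with no sensed presence
    if drift_detected:
        return REFINE        # enter the refinement mode
    return UNLOCKED
```
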
At 2310, a component of a computing device (e.g., the processor circuit 250 of the computing device 1000, amidst executing the control routine 245) is in an authentication mode, awaiting a signal from a keyboard communicatively coupled to that computing device (e.g., the keyboard 120) indicating that at least one key of that keyboard (e.g., one of the keys 130) has been pressed, and indicating at least one physical characteristic of the manner in which that at least one key was pressed. As previously discussed, the manner in which the signal is conveyed from that keyboard may be a digital serial transmission of a message comprising a series of bits identifying the at least one key with a scan code and conveying the at least one physical characteristic as a binary value specifying a range (e.g., a value for a velocity, a pressure, etc.). At 2320, the component of the computing device compares the at least one physical characteristic to at least one stored physical characteristic associated with one or more users authorized to use the computing device (the at least one stored physical characteristic possibly being stored as part of the pattern data 242 stored within the storage 240). As previously discussed, the at least one stored physical characteristic may be stored in an operating environment that is maintained separately from another operating environment meant to be made accessible for use by an authorized user of the computing device. At 2330, the component of the computing device determines if the keypress is associated with at least one authorized user of the computing device based on that comparison. As previously discussed, the determination may be conveyed by the component from within one operating environment to a processor circuit within another operating environment that, again, is maintained separately from the operating environment in which the at least one stored physical characteristic is maintained. FIG.
7 illustrates an embodiment of an exemplary processing architecture 3100 suitable for implementing various embodiments as previously described. More specifically, in one embodiment, the processing architecture 3100 may comprise or be implemented as part of one or more of the various aforedescribed embodiments of the computing device 1000. The processing architecture 3100 includes various elements commonly employed in digital processing, including without limitation, one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, etc. As used in this application, the terms "system" and "component" are intended to refer to an entity of a computing device in which digital processing is carried out, that entity being hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by this depicted exemplary processing architecture. For example, a component can be, but is not limited to being, a process running on a processor circuit, the processor circuit itself, a storage device (e.g., a hard disk drive, multiple storage drives in an array, etc.) that may employ an optical and/or magnetic storage medium, a software object, an executable sequence of instructions, a thread of execution, a program, and/or an entire computing device (e.g., an entire computer). By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computing device and/or distributed between two or more computing devices. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations.
The coordination may involve the uni-directional or bidirectional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to one or more signal lines. Each message may be a signal or a plurality of signals transmitted either serially or substantially in parallel. As depicted, in implementing the processing architecture 3100, the computing device 1000 comprises at least the processor circuit 550, the storage 540, the controller 200 and coupling 555. As will be discussed, the computing device may further comprise an input/output (I/O) interface 510, a storage controller 560, a storage 565, a visual interface 580 and/or the network interface 590. In turn, the controller 200 comprises at least the processor circuit 250, the storage 240, a keyboard interface 220 and coupling 255. As has been discussed, the controller 200 may further comprise the keyboard interface 520. Within the storage 540 is stored at least the security routine 542, and as has been previously discussed, the storage 540 may also store the operating system 545, the mapping data 543, the local application 546 and/or the local data 548. Within the storage 240 is stored at least the control routine 245 and the pattern data 242, and as has been previously discussed, the storage 240 may also store the testing data 248 and/or the mapping data 243. Coupling 555 is comprised of one or more buses, transceivers, buffers, crosspoint switches, and/or other conductors and/or logic that couples at least the processor circuit 550 to the storage 540 and the controller 200. Coupling 555 may further couple the processor circuit 550 to one or more of the I/O interface 510, the storage controller 560, the storage 565, the visual interface 580, and the network interface 590 (depending on which of these are also present).
Similarly, coupling 255 is comprised of one or more buses, transceivers, buffers, crosspoint switches, and/or other conductors and/or logic that couples at least the processor circuit 250 to the storage 240 and the keyboard interface 220. Coupling 255 may further couple the processor circuit 250 to the keyboard interface 520 in embodiments in which the keyboard interface is present and is at least partially implemented with digital circuitry. With each of the processor circuits 250 and 550 being so coupled by couplings 255 and 555, respectively, the processor circuits 250 and 550 are able to perform the various tasks in response to executing sequences of instructions of at least the control routine 245 and the security routine 542, respectively, as has been described at length, above. Each of the couplings 255 and 555 may be implemented with any of a variety of technologies or combinations of technology by which signals are optically and/or electrically conveyed. Further, at least portions of one or both of couplings 255 and 555 may employ timings and/or protocols conforming to any of a wide variety of industry standards, including without limitation, Accelerated Graphics Port (AGP), CardBus, Extended Industry Standard Architecture (E-ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI-X), PCI Express (PCI-E), Personal Computer Memory Card International Association (PCMCIA) bus, HyperTransport™, QuickPath, and the like. As previously discussed, each of the processor circuits 250 and 550 may comprise any of a wide variety of commercially available processors, and each of the storages 240 and 540 may comprise any of a wide variety of types of storage device. Again, it should be noted that although each of the storages 240 and 540 is depicted as a single block in FIG. 7, either or both may comprise more than one distinct storage device that may be based on differing storage technologies.
However, as depicted, a distinction is made between the storage 540 and the storage 565, the storage 565 being explicitly coupled to coupling 555 through the storage controller 560. This depiction is in recognition of the commonplace use of one type of storage device providing relatively rapid reading and writing capabilities, enabling more rapid manipulation of data by the processor circuit 550 (but possibly using a "volatile" technology such that what is stored therein is lost with a loss of power), alongside use of another type of storage device providing relatively high density of non-volatile storage (but likely using a technology that provides somewhat slower reading and writing capabilities). Thus, in embodiments in which the storage controller 560 and the storage 565 are present, one or more of the security routine 542, the mapping data 543, the operating system 545, the local application 546 and/or the local data 548 may initially be stored within the storage 565, and then copied into the storage 540 for more rapid access by the processor circuit 550. Any of a variety of interface technologies may be employed in coupling the storage controller 560 to the storage 565 in which timings and/or protocols may be employed that conform to one or more industry standards, including without limitation, USB and IEEE 1394. Further, one or both of the storages 540 and 565 may comprise articles of manufacture in the form of a computer-readable storage medium to store logic, such as a media 564 of which the storage 565 may be comprised. Examples of a computer-readable storage medium (e.g., the media 564) may include any tangible media capable of storing electronic data, including without limitation, volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth.
More specifically, where such a distinction as has been described above is made between the storages 540 and 565, the storage 565 may comprise a hard disk drive (HDD), a magnetic floppy drive (FDD), or an optical disk drive (e.g., a CD-ROM or DVD drive); and correspondingly, the media 564 may comprise one or more hard disk platters, a floppy diskette, or an optical disk, respectively. Examples of logic may include executable computer program instructions implemented using any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. Embodiments may also be at least partly implemented as instructions contained in or on a non-transitory computer-readable medium, which may be read and executed by one or more processors to enable performance of the operations described herein. As previously discussed, and where present, the network interface 590 comprises circuitry providing at least some of the requisite functionality to enable access to a network based on either a wired or wireless technology, and may also be at least partially implemented with sequences of instructions of the operating system 545 executed by the processor circuit 550. As also previously discussed, the network to which the computing device 1000 is coupled through the network interface 590 may entail the use of any of a variety of types of conductive cabling or wireless linkage employing signaling and/or protocols that may conform to any of a wide variety of industry standards. Where present, the I/O interface 510 provides any of a variety of types of input/output interface employing any of a variety of forms of wired or wireless signaling to enable the coupling of various input and/or output devices to the computing device 1000 such that the processor circuit 550 is able to interact with those input and/or output devices through the I/O interface 510.
As depicted, the I/O interface 510 couples an additional user input device 110 (i.e., a user input device other than the keyboard 120) to the computing device 1000, which as depicted, is a touchpad. Other examples of input and/or output devices that may be coupled to the computing device 1000 through the I/O interface 510 include, without limitation, microphones, remote controls, stylus pens, card readers, finger print readers, virtual reality interaction gloves, graphical input tablets, joysticks, other keyboards, retina scanners, the touch input component of touch screens, trackballs, various sensors, laser printers, inkjet printers, mechanical robots, etc. The wired and/or wireless signaling technologies employed by the I/O interface 510 in coupling such devices to the computing device 1000 may employ signaling and/or protocols that conform to any of a variety of industry standards, including without limitation, IEEE 1284 (more commonly referred to as a "parallel port interface"), RS-232C, RS-422, IEEE 1394, USB, AccessBus, etc. Where present, the visual interface 580 provides any of a variety of types of interface employing any of a variety of forms of signaling to couple the display 180 to the computing device 1000 to convey images to the display 180 to be visually presented thereon, including the various prompts and/or preselected text previously discussed. The wired and/or wireless signaling technologies employed by the visual interface 580 in coupling the display 180 to the computing device 1000 may employ signaling and/or protocols that conform to any of a variety of industry standards, including without limitation, any of a variety of analog video interfaces, Digital Video Interface (DVI), DisplayPort, etc.
Turning to the controller 200, the keyboard interface 220 employs a multitude of conductors in its coupling to the keyboard 120, thereby forming a relatively direct coupling to the keys 130 of the keyboard 120, along with whichever ones of the detectors 131-133 that may be present. This enables the controller 200 to more directly receive indications of keypresses and key releases in a manner enabling the controller 200 to more accurately determine the timings of these events, along with more directly receiving indications of velocities and/or pressures employed in a person operating the keys 130, as has been previously discussed in detail. Further, in embodiments in which the keyboard interface 520 is implemented by the controller 200 in circuitry and/or via execution of the control routine 245, the controller 200 presents data indicating keypresses (and perhaps, also key releases) to the processor circuit 550 in a manner that mimics a long used form of keyboard interface that the operating system 545 may have been written with the assumption of being provided. As previously discussed, while the controller 200 relays data indicating keypresses (and perhaps, also key releases) to the processor circuit 550, the controller does not relay data indicating physical characteristics of the manner in which the keyboard 120 is operated (e.g., velocities and pressures). FIG. 8 illustrates an embodiment of an exemplary processing architecture 3200 suitable for implementing various embodiments as previously described. More specifically, in one embodiment, the processing architecture 3200 may comprise or be implemented as part of one or more of the various aforedescribed embodiments of the computing device 1000. The processing architecture 3200 is similar to the processing architecture 3100 of FIG. 7 in many ways, and as has been previously stated, like reference numerals are used to refer to like elements throughout.
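The relaying behavior described above — the controller receiving full keypress data, including velocities and pressures, while forwarding only the keypress itself to the processor circuit 550 — can be sketched as follows; the three-byte message layout and field names are illustrative assumptions, as the disclosure does not fix a particular encoding:

```python
def parse_keyboard_message(message):
    """Parse an assumed 3-byte keyboard message: a scan code identifying
    the key, then binary values specifying ranges for the velocity and
    pressure with which the key was pressed."""
    if len(message) != 3:
        raise ValueError('malformed keyboard message')
    return {'scan_code': message[0],
            'velocity': message[1],
            'pressure': message[2]}

def relay_to_system(event):
    """Return only what the controller forwards to the system
    environment: the keypress itself, with the physical characteristics
    (velocity, pressure) deliberately withheld."""
    return {'scan_code': event['scan_code']}
```
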
As depicted, in implementing the processing architecture 3200, the computing device 1000 comprises at least the processor circuit 550, the storage 540, the controller 200 and coupling 555. Also as depicted, the computing device may further comprise a storage controller 560, a storage 565, a visual interface 580, the network interface 590 and/or sideband couplings 556. In turn, as depicted, the controller 200 comprises at least the processor circuit 250, the storage 240, an I/O interface 210 and coupling 255. Also as depicted, the controller 200 may further comprise the I/O interface 510. Within the storage 540 is stored at least the security routine 542, and as has been previously discussed, the storage 540 may also store the operating system 545, the mapping data 543, the local application 546 and/or the local data 548. Within the storage 240 is stored at least the control routine 245 and the pattern data 242, and as has been previously discussed, the storage 240 may also store the testing data 248 and/or the mapping data 243. One difference in the processing architecture 3200 from the processing architecture 3100 is that the controller 200 in the processing architecture 3200 may comprise the I/O interface 510 in place of the keyboard interface 520 such that the controller 200 emulates or simulates the presence of the I/O interface 510 in a manner similar to what has been described with regard to the keyboard interface 520. Correspondingly, another difference is that the controller 200 in the processing architecture 3200 comprises the I/O interface 210 in place of the keyboard interface 220.
These specific differences reflect a larger difference in the processing architecture 3200 from the processing architecture 3100 in which the keyboard 120 is not as directly coupled to the controller 200, and is instead, coupled to the controller 200 through either a wired or wireless digital serial interface in which indications of keypresses, key releases, velocities and/or pressures are received by the controller 200 as messages comprised of signals representing bits that are digitally serially conveyed to the controller 200 by the keyboard 120. Further, as suggested by FIG. 8, the additional user input device (which, as depicted, is a mouse) may employ the same type of wired or wireless digital serial interface as the keyboard 120. As previously discussed, such a digital serial interface may be based on any of a variety of cabling-based or wireless technologies, and may employ signaling and/or protocols conforming to any of a wide variety of industry standards. Where the digital serial interface is (to at least some degree) an implementation of USB, then mimicking the I/O interface 510 in a manner that presents what appears to be a longstanding implementation of a USB interface controller to accommodate assumptions made in the creation of the operating system 545 may entail hiding aspects of the identity or range of capabilities of the keyboard 120 from the processor circuit 550. More precisely, such mimicry may include presenting the keyboard 120 in a manner such that it is perceived in the system environment 1550 as being an ordinary text entry keyboard coupled to the computing device 1000 via a USB cable and identifying itself solely as a "human interface device" for text entry, rather than being a keyboard capable of providing data concerning physical characteristics of the manner in which it is operated.
Thus, presenting such a false indication of the nature and capabilities of the keyboard 120 may be part of what is required to maintain sufficient separation between the operating environments 1250 and 1550 as to maintain security, especially of data concerning the manner in which one or more persons operate the keyboard 120. Still another difference in the processing architecture 3200 from the processing architecture 3100 is that the controller 200 may be further coupled to one or both of the storage controller 560 and the network interface 590 via sideband couplings 556, enabling an exchange of signals between the controller 200 and one or both of the storage controller 560 and the network interface 590 without employing coupling 555. This specific difference reflects another larger difference in the processing architecture 3200 from the processing architecture 3100 in which the controller 200 may perform a greater range of functions in the processing architecture 3200. Such a greater range of functions may include more directly controlling more aspects of securing applications and/or data either stored within the computing device 1000 or to which the computing device 1000 has access. Through such more direct access, the controller 200, in response to determining that a person operating the keyboard is not an authorized user, may more directly signal the storage controller 560 to deny access to applications and/or data stored within the storage 565 and/or may more directly signal the network interface 590 to deny access to whatever network the computing device 1000 may be coupled to through the network interface 590. Further, it may be that one or more of the control routine 245, the testing data 248 and the mapping data 243 are obtained by the controller 200 from one or the other of the storage 565 or the remote storage 965, and then stored in the storage 240.
More specifically, the processor circuit 250 may be caused by the control routine 245 to directly signal one or both of the storage controller 560 and the network interface 590 to retrieve an updated version of the control routine 245, the testing data 248 and/or the mapping data 243 from a portion of the media 564 or 964 that is not accessible to the processor circuit 550. Yet further, having such a form of more direct access to the network interface 590 enables other additional security features to be implemented. As previously discussed, a second layer of security, perhaps entailing the use of encryption and/or the mimicry of seeking authentication as a person by the controller 200, may be employed between the operating environments 1250 and 1550 in some embodiments. With the network interface 590 being made more directly accessible to the controller 200, such a second layer of security could be implemented between the controller environment 1250 and the remote environment 1950 of the remote server 900 in which, after authenticating a person operating the keyboard 120 as an authorized user, the processor 250 may be caused to more directly operate the network interface 590 to present itself to the remote server 900 as a would-be user seeking to be authenticated as an authorized user. Alternatively or additionally, where it is deemed that the security of a network between the computing device 1000 and the server 900 is sufficient that the transmission of data representing patterns with which people operate keyboards is able to be done with a sufficient degree of security, it may be that the pattern data 242 is stored in the storage 940 of the remote server 900, instead of in the storage 240 of the controller 200.
In this arrangement, either data representing characteristics with which a person operates the keyboard 120 is transmitted by the controller 200 through the network interface 590 to the remote server 900 to be compared to the pattern data 242 by the processor circuit 950, or the pattern data 242 may be received by the controller 200 from the remote server 900 through the network interface 590. Either way, the ability of the controller 200 to more directly operate the network interface 590 in either of such exchanges of data is relied upon to keep such data isolated from the system environment 1550. The various elements of the computing device 1000 may comprise various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. 
However, determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation. Some embodiments may be described using the expression "one embodiment" or "an embodiment" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Further, some embodiments may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. 
Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects. What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. The detailed disclosure now turns to providing examples that pertain to further embodiments. The examples provided below are not intended to be limiting. An example computer-implemented method comprises receiving a signal indicative of a keypress of at least one key of a keyboard communicatively coupled to a computing device, and indicative of at least one physical characteristic associated with the keypress; comparing the at least one physical characteristic to at least one stored physical characteristic associated with at least one authorized user of the computing device; and determining if the keypress is associated with at least one authorized user of the computing device based on the comparison. 
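The comparison and determination steps of the example computer-implemented method above can be sketched as follows. This is an illustrative sketch only, not part of the disclosure: the characteristic names, stored values, and tolerance threshold are all assumptions introduced for the example.

```python
# Illustrative sketch (not from the disclosure): comparing measured keypress
# physical characteristics against a stored profile of an authorized user.
# All names and numeric values below are assumed for illustration.

STORED_PROFILE = {
    "press_velocity": 0.42,   # velocity at which the key is pressed (assumed units)
    "hold_time": 0.11,        # time the key is held fully pressed, seconds (assumed)
    "inter_key_time": 0.18,   # time elapsing to the next keypress, seconds (assumed)
}

TOLERANCE = 0.25  # allowed fractional deviation per characteristic (assumed)

def is_authorized(measured: dict) -> bool:
    """Return True only if every measured characteristic is within
    TOLERANCE of the corresponding stored characteristic."""
    for name, stored_value in STORED_PROFILE.items():
        if name not in measured:
            return False
        deviation = abs(measured[name] - stored_value) / stored_value
        if deviation > TOLERANCE:
            return False
    return True

# A typing sample close to the stored profile, and one far from it:
print(is_authorized({"press_velocity": 0.40,
                     "hold_time": 0.12,
                     "inter_key_time": 0.19}))
print(is_authorized({"press_velocity": 0.90,
                     "hold_time": 0.30,
                     "inter_key_time": 0.50}))
```

In practice the comparison would likely be statistical rather than a fixed per-characteristic threshold, but the structure — receive characteristics, compare to stored values, decide — mirrors the three steps of the example method.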
The above example computer-implemented method, the at least one physical characteristic being selected from a group comprising a velocity at which the key is pressed, a pressure exerted to press the key, an amount of time during which the key is held in a fully pressed state, a pressure exerted to hold the key in the fully pressed state, a velocity at which the key is released from the fully pressed state, and an amount of time elapsing from pressing the key to pressing another key. Either of the above examples of computer-implemented methods, the receiving of the signal from the keyboard comprising receiving a digitally serially transmitted message from the keyboard. Any of the above examples of computer-implemented methods, comprising presenting a visual prompt on a display of the computing device requesting a user of the computing device to enter text to enable authentication prior to the user being authenticated as an authorized user. Any of the above examples of computer-implemented methods, the visual prompt comprising a preselected text to enter. Any of the above examples of computer-implemented methods, the preselected text being selected to cause a user to use a predetermined quantity of digits to operate a predetermined quantity of keys of the keyboard. Any of the above examples of computer-implemented methods, comprising placing the computing device in an unlocked mode allowing access to a first data in response to determining that the keypress is associated with at least one authorized user and in response to the computing device being in a locked mode denying access to the first data. Any of the above examples of computer-implemented methods, comprising allowing access to a limited subset of available functionality of the computing device during the locked mode, the limited subset of available functionality comprising an opportunity to enter text. 
Any of the above examples of computer-implemented methods, comprising presenting a visual prompt on a display of the computing device requesting a user of the computing device to enter text in response to the user attempting to access the first data while the computing device is in the locked mode, the visual prompt comprising a preselected text to enter. Any of the above examples of computer-implemented methods, comprising placing the computing device in the locked mode in response to determining that the at least one physical characteristic has changed since a last authentication of an authorized user to an extent consistent with a different user operating the keyboard in place of the authorized user; and determining that the different user is not an authorized user. Any of the above examples of computer-implemented methods, comprising placing the computing device in the locked mode in response to a predetermined period of time having elapsed since the computing device was last interacted with by an authorized user. Any of the above examples of computer-implemented methods, comprising refining the at least one stored physical characteristic in response to determining that the at least one physical characteristic has changed to an extent consistent with a physical change of an authorized user. An example machine-readable medium comprising a sequence of instructions that when executed by a computing device, causes the computing device to carry out any of the above examples of computer-implemented methods. 
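The two outcomes described above — refining the stored characteristic when a change is consistent with gradual physical change of the authorized user, versus entering the locked mode when the change is consistent with a different user — can be sketched as follows. The drift limit, smoothing factor, and the use of an exponential moving average are assumptions for illustration; the disclosure does not specify a refinement method.

```python
# Illustrative sketch (assumptions, not from the disclosure): deciding whether
# a changed keypress characteristic should refine the stored profile or
# trigger the locked mode.

DRIFT_LIMIT = 0.15   # deviation small enough to treat as gradual change (assumed)
ALPHA = 0.1          # smoothing factor for refining the stored value (assumed)

def update_or_lock(stored: float, measured: float):
    """Return (new_stored_value, locked). A small deviation refines the
    stored characteristic via an exponential moving average; a large
    deviation leaves it unchanged and signals the locked mode."""
    deviation = abs(measured - stored) / stored
    if deviation <= DRIFT_LIMIT:
        return (1 - ALPHA) * stored + ALPHA * measured, False
    return stored, True

print(update_or_lock(0.11, 0.12))  # gradual drift: stored value is refined
print(update_or_lock(0.11, 0.30))  # abrupt change: lock, keep stored value
```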
An example apparatus comprises a first processor circuit; and a first storage communicatively coupled to the first processor circuit and storing a first sequence of instructions that when executed by the first processor circuit, causes the first processor circuit to: receive a signal indicative of a keypress of at least one key of a keyboard communicatively coupled to the apparatus, and indicative of at least one physical characteristic associated with the keypress; compare the at least one physical characteristic to at least one stored physical characteristic associated with at least one authorized user of the apparatus; and determine if the keypress is associated with at least one authorized user of the apparatus based on the comparison. The above example apparatus, the at least one physical characteristic being selected from a group comprising a velocity at which the key is pressed, a pressure exerted to press the key, an amount of time during which the key is held in a fully pressed state, a pressure exerted to hold the key in the fully pressed state, a velocity at which the key is released from the fully pressed state, and an amount of time elapsing from pressing the key to pressing another key. Either of the above examples of apparatus, the keyboard being communicatively coupled to the apparatus through an interface through which the apparatus receives the signal from the keyboard as a digitally serially transmitted message from the keyboard. Any of the above examples of apparatus, comprising the keyboard, a plurality of keys of the keyboard being directly scanned by the apparatus. Any of the above examples of apparatus, the first processor circuit being caused by executing the first sequence of instructions to cause a visual prompt to be presented on a display requesting a user of the apparatus to enter text to enable authentication prior to the user being authenticated as an authorized user. 
Any of the above examples of apparatus, the visual prompt comprising a preselected text to enter. Any of the above examples of apparatus, the preselected text being selected to cause a user to use a predetermined quantity of digits to operate a predetermined quantity of keys of the keyboard. Any of the above examples of apparatus, the first processor circuit being caused by executing the first sequence of instructions to cause the apparatus to be placed in an unlocked mode allowing access to a first data in response to determining that the keypress is associated with at least one authorized user and in response to the apparatus being in a locked mode denying access to the first data. Any of the above examples of apparatus, the first processor circuit being caused by executing the first sequence of instructions to cause access to be allowed to a limited subset of available functionality of the apparatus during the locked mode, the limited subset of available functionality comprising an opportunity to enter text. Any of the above examples of apparatus, the first processor circuit being caused by executing the first sequence of instructions to cause a visual prompt to be presented on a display requesting a user of the apparatus to enter text in response to the user attempting to access the first data while the apparatus is in the locked mode, the visual prompt comprising a preselected text to enter. Any of the above examples of apparatus, the first processor circuit being caused by executing the first sequence of instructions to cause the apparatus to be placed in the locked mode in response to the first processor circuit being caused to determine that the at least one physical characteristic has changed since a last authentication of an authorized user to an extent consistent with a different user operating the keyboard in place of the authorized user, and the first processor circuit being caused to determine that the different user is not an authorized user. 
Any of the above examples of apparatus, the first processor circuit being caused by executing the first sequence of instructions to cause the apparatus to be placed in the locked mode in response to a predetermined period of time having elapsed since the apparatus was last interacted with by an authorized user. Any of the above examples of apparatus, the first processor circuit being caused by executing the first sequence of instructions to refine the at least one stored physical characteristic in response to determining that the at least one physical characteristic has changed to an extent consistent with a physical change of an authorized user. Any of the above examples of apparatus, comprising the display, a second processor circuit, and a second storage communicatively coupled to the second processor circuit and storing a second sequence of instructions, the first processor circuit causing a visual prompt to be presented comprises the first processor circuit being caused by executing the first sequence of instructions to signal the second processor circuit, and the second processor circuit being caused by executing the second sequence of instructions to present the visual prompt on the display in response to the signal. Any of the above examples of apparatus, comprising a second processor circuit, and a second storage communicatively coupled to the second processor circuit and storing a second sequence of instructions, the first processor circuit causing the apparatus to be placed in one of the locked mode and the unlocked mode comprises the first processor circuit being caused by executing the first sequence of instructions to signal the second processor circuit, and the second processor circuit being caused by executing the second sequence of instructions to place the apparatus in one of the locked mode and the unlocked mode in response to the signal. 
Another example apparatus comprises a first processor circuit; a second processor circuit; a first storage communicatively coupled to the first processor circuit and storing a first sequence of instructions that when executed by the first processor circuit, causes the first processor circuit to: receive a signal indicative of a keypress of at least one key of a keyboard communicatively coupled to the apparatus, and indicative of at least one physical characteristic associated with the keypress, compare the at least one physical characteristic to at least one stored physical characteristic associated with at least one authorized user of the apparatus, determine if the keypress is associated with at least one authorized user of the apparatus based on the comparison, and signal the second processor circuit to place the apparatus in an unlocked mode allowing access to a first data in response to determining that the keypress is associated with at least one authorized user and in response to the apparatus being in a locked mode denying access to the first data; and a second storage communicatively coupled to the second processor circuit and storing a second sequence of instructions that when executed by the second processor circuit, causes the second processor circuit to place the apparatus in the unlocked mode in response to receiving the signal from the first processor circuit to place the apparatus in the unlocked mode. The above other example apparatus, the first processor circuit being caused by executing the first sequence of instructions to signal the second processor circuit to present a visual prompt on a display requesting a user of the apparatus to enter text, and the second processor circuit being caused by executing the second sequence of instructions to present the visual prompt on the display in response to receiving the signal from the first processor circuit to present the prompt on the display.
The invention provides a sensor including a first sensor element (105) formed in a first substrate (110) and at least one optical element formed in a second substrate, the first and second substrates being configured relative to one another such that the second substrate forms a cap (115) over the first sensor element. The cap (115) includes a diffractive optical element and an aperture stop (901) which collectively determine the wavelength of incident radiation (125) that is allowed through the cap (115) and onto the element (105).
Claims 1. An electromagnetic radiation sensor having at least a first sensor element formed in a first substrate, and at least a first cap formed in a second substrate, the first and second substrates being arranged relative to one another such that the first cap is provided over the first sensor element thereby providing a first cell, and wherein the first cap includes a diffractive optical element and an aperture stop to operably provide for a transmission of incident radiation of predetermined wavelength onto the sensor element. 2. The sensor of claim 1 wherein the first sensor element is formed from a multilayer structure having at least one absorber layer provided therein, characteristics of the absorber layer contributing to the wavelength response sensitivity of the first sensor element. 3. The sensor of claim 1 wherein the operating wavelength region of the sensor is at least partially determined by the dimensions of the first sensor element, the characteristics of the diffractive optical element and the nature of the aperture stop. 4. The sensor of claim 1 wherein at least a first and second sensor element are formed in the first substrate and at least a first and second cap are formed in the second substrate, the relative arrangement of the first and second substrates providing at least first and second cells. 5. The sensor of claim 4 wherein a first cell provides an output based on a first response characteristic and a second cell provides an output based on a second response characteristic. 6. The sensor of claim 5 wherein each of the caps for the first and second cells includes a diffractive optical element and an associated aperture stop. 7. The sensor of claim 1 wherein the aperture stop is provided on a lower surface of the cap. 8. The sensor of claim 1 wherein the aperture stop is provided on an upper surface of the cap. 9. The sensor as claimed in claim 5 wherein the output of the second cell is lower than that of the first cell. 10. 
The sensor as claimed in claim 5 wherein the response characteristics of each of the first and second cells are a function of at least one of the following properties of each cell: a) the optical response characteristic, b) the electrical response characteristic, c) the thermal response characteristic, d) the nature of the radiation source used to illuminate each of the first and second cells. 11. The sensor as claimed in claim 5 wherein the cap for the first sensor element allows a transmission of incident radiation through the cap and onto the sensor element and the cap for the second sensor element blocks a transmission of radiation through the cap and onto the second sensor element, the aperture stop being provided as part of the cap for the first sensor. 12. The sensor as claimed in claim 5 wherein the cap for the second sensor element filters at least a portion of radiation that is incident on the second cell, each of the first and second cells including a diffractive optical element and associated aperture stop configured to selectively allow different incident radiation of different wavelengths through to their respective sensor elements. 13. The sensor of claim 1 wherein the cap for the sensor element includes an anti-reflective coating to improve the throughput of radiation incident on the cap. 14. The sensor of claim 5 wherein the cap for the second sensor element includes an optically opaque coating so as to prevent transmission of radiation through the cap and onto the second sensor element. 15. The sensor as claimed in claim 1 wherein the arrangement of the first and second substrates relative to one another define a cavity between the cap and the sensor element below. 16. 
The sensor as claimed in claim 4 wherein the arrangement of the first and second substrates relative to one another define a cavity between each of the caps and their respective sensor elements and wherein each of the cavities for the first and second sensor elements are in fluid communication with one another. 17. The sensor as claimed in claim 4 wherein the arrangement of the first and second substrates relative to one another define a cavity between each of the caps and their respective sensor elements and wherein each of the cavities for the first and second sensor elements are isolated from the other of the cavities for the first and second sensor elements. 18. The sensor as claimed in claim 1 wherein the first and second substrates are provided in silicon. 19. The sensor as claimed in claim 4 wherein the first and second sensor elements are infra-red sensor elements. 20. The sensor as claimed in claim 15 wherein the ambient conditions and composition within the cavity can be specified. 21. The sensor as claimed in claim 20 wherein the cavity is provided at a pressure lower than ambient pressure. 22. The sensor as claimed in claim 21 wherein the cavity is populated with a gaseous composition selected for the application with which the sensor is to be used. 23. The sensor as claimed in claim 22 wherein the gaseous composition comprises a gas having a thermal conduction less than the thermal conduction of nitrogen. 24. The sensor as claimed in claim 1 wherein the diffractive optical element is formed in an inner surface of the cap. 25. The sensor as claimed in claim 1 wherein the diffractive optical element is formed in an outer surface of the cap. 26. The sensor as claimed in claim 1 wherein diffractive optical elements are formed in both an outer surface and inner surface of the cap, the combination of the optical elements adjacent to and remote from the cavity forming a compound lens. 27. 
The sensor as claimed in claim 1 wherein a plurality of sensor elements are formed in the first substrate and the diffractive optical element is configured to selectively guide radiation of specific wavelengths to preselected ones of the plurality of sensor elements. 28. The sensor as claimed in claim 27 including a complex lens arrangement of two or more diffractive optical elements configured to selectively transmit incident radiation of predetermined wavelength onto the respective sensor elements. 29. The sensor as claimed in claim 4 wherein the caps for the first and second sensor elements are formed in the same second substrate, the sensor additionally comprising an outer cap, the outer cap being orientated over the second substrate, the outer cap including an optical element. 30. The sensor as claimed in claim 1 wherein on arranging each of the first and second substrates relative to one another the cap is formed by side walls extending upwardly from the first substrate and supporting a roof therebetween, the roof being in a plane substantially parallel to the sensor element. 31. The sensor as claimed in claim 4 wherein each of the first and second sensor elements are adjacent to one another, each of the caps provided thereabove having side walls extending upwardly from the first substrate and supporting a roof therebetween, the roof being in a plane substantially parallel to the sensor element below and wherein each of the caps share a common central column that extends downwardly from the roof, thereby defining chambers for each of the first and second sensor elements. 32. The sensor as claimed in claim 31 wherein the chamber for the second sensor element is treated to prevent a transmission of radiation through the cap and onto the second sensor element. 33. The sensor as claimed in claim 32 wherein the treatment includes a doping of the side walls of the chamber. 34. 
The sensor as claimed in claim 32 wherein the treatment includes the application of a reflective coating on the roof of the cap for the second sensor element. 35. The sensor as claimed in claim 31 wherein the central column does not extend fully from the roof to the first substrate, such that a gap is defined between a lower surface of the column and an upper surface of the first substrate. 36. The sensor as claimed in claim 35 wherein the width of the gap is comparable with the wavelength of the incident radiation being sensed. 37. The sensor as claimed in claim 35 wherein the provision of the gap allows for an equalisation of pressure between the chambers for the first and second sensor elements. 38. The sensor as claimed in claim 4 wherein each of the first and second sensor elements are provided as a bolometer. 39. A gas analyser including at least one sensor element formed in a first substrate and at least one diffractive optical element and an associated aperture stop formed in a second substrate, the first and second substrates being configured relative to one another such that the second substrate forms a cap over the at least one sensor element, the at least one diffractive optical element and associated aperture stop being configured to guide incident radiation on the cap through the cap and onto the at least one sensor element, the distribution of radiation incident onto the at least one sensor element being determined by a predetermined relationship between the aperture stop and diffractive optical element, the incident radiation guided having a wavelength indicative of the presence of a specific gas. 40. 
The analyser of claim 39 further including at least one reference sensor element formed in the first substrate and having a cap for the at least one reference sensor element formed in a second substrate, the reference sensor element providing an output that is useable with the output of the at least one sensor element to provide an analysis of the gas. 41. The analyser of claim 40 wherein the output of the reference sensor element is independent of the output of the at least one sensor element. 42. The analyser of claim 40 wherein the cap of the reference sensor element shields the reference sensor element from the incident radiation on the cap such that the reference sensor element provides an output independent of the intensity of the incident radiation. 43. The analyser of claim 40 wherein the cap of the reference sensor includes a diffractive optical element configured to allow selective transmission of incident radiation of a different wavelength onto the reference sensor element to that transmitted onto the sensor element. 44. The gas analyser of claim 39 including a plurality of sensor elements each having a specific wavelength response, the output of the plurality of sensor elements providing a gas wavelength signature spectrum. 45. A method of forming a sensor, the method including the steps of: forming at least one sensor element in a first substrate, forming a diffractive optical element and at least one aperture stop in a second substrate, bonding the first and second substrates together such that the second substrate is orientated relative to the first substrate so as to provide the diffractive optical element and aperture stop over the sensor element, the diffractive optical element and aperture stop being configured to guide incident radiation onto the sensor element. 46. 
The method of claim 45 wherein the method of forming the at least one sensor element in a first substrate includes the step of forming a reference sensor element in the first substrate and the step of forming the diffractive optical element and at least one aperture stop in a second substrate includes the step of forming a shielding cap and wherein the second substrate provides the shielding cap over the reference sensor, the shielding cap serving to modify a transmission of radiation incident on the cap through and onto the reference sensor element. 47. An electromagnetic radiation sensor fabricated in a semiconductor process, the sensor including first and second sensing elements formed in a first substrate, each of the first and second sensing elements having a respective cap defined thereabove, the caps being formed in a second substrate and mountable onto the first substrate and wherein the cap formed over the first sensing element allows a selective transmission of radiation through the cap onto the sensing element and the cap formed over the second sensing element allows a selective transmission of radiation of a different wavelength through the cap onto the sensing element such that each of the first and second sensing elements are responsive to radiation of different wavelengths and wherein the sensor includes a narrow bandwidth filter to tune the response characteristics of the sensor to predetermined wavelengths.
Title: ELECTROMAGNETIC RADIATION SENSOR WITH DIFFRACTIVE OPTICAL ELEMENT AND APERTURE STOP The present invention relates to sensors and in particular to a sensor formed from two substrates using semiconductor processing techniques. The invention more particularly relates to an arrangement incorporating narrowband response characteristics for use in applications such as gas sensors or the like. Background Sensors are well known in the art. When formed in a semiconductor material such as silicon or germanium such sensors may be provided as mechanical structures, for example as a MEMS arrangement, or electro-magnetic (EM) radiation sensors such as infra-red (IR) sensors. By using materials such as silicon it is possible to form the sensor in one or more layers of the wafer using etching and other semiconductor processing techniques so as to result in a desired configuration. Due to the delicate nature of the sensors and their sensitivity to the surrounding environment it is known to provide a protective cap over the sensor, the cap serving to isolate the environment of the sensor from the ambient environment where the sensor is operable. Current infrared absorption gas sensors frequently use a discrete thermal sensor, e.g. a thermopile, with an external thin film filter which provides a received energy wavelength response that is tuned to the gas absorption band of interest. While generally effective, this provides a solution which requires more assembly operations and therefore cost, resulting in a more expensive device. Creation of a low cost means of providing sensor and filtering function, possibly together with signal processing electronics all in one device, has been difficult as the thin film filters are specialised and difficult to manufacture with standard IC processing equipment and materials. 
Summary These and other problems are addressed in accordance with the teaching of the present invention by a sensor formed from two substrates using semiconductor processing techniques. The two substrates are arranged relative to one another so as to provide a first sensing element in one substrate and a cap for that sensing element above that sensing element so as to form a sensor cell. The cap is configured to incorporate an optical element which selectively focuses incident radiation onto the sensing element. Desirably the optical element is a diffractive optical element which, as will be appreciated, is a passive component that redirects chosen wavelengths of the incoming light to a predefined position on the sensing element. As the ultimate position is related to the wavelength of the incident radiation and the specifics of the diffractive optical element, the optical element may be used in conjunction with an aperture stop to modify the narrowband response characteristics of the sensor cell. The sensor may include two or more cells, each having a sensing element formed in a first substrate and a cap for that sensing element formed in a second substrate. Where two or more cells are provided, it is desirable that at least a first and second cell differ from one another in their response characteristics and at least one of the cells is configured to provide for narrowband filtering. By providing two co-located cells whose response characteristics differ, it is possible to reference the output of a first cell using that of a second cell. This may be useful in a plurality of applications including that of gas sensors. Accordingly, a first embodiment of the invention provides an electromagnetic sensor according to claim 1. 
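The wavelength dependence of the redirection described above can be illustrated with the standard grating equation, d·sin(θ) = m·λ, which relates a diffractive element's pitch d, the diffraction order m, and the wavelength λ to the diffraction angle θ. The following sketch is textbook diffraction physics, not a design from the disclosure; the pitch and the two wavelengths are assumed example values (4.26 μm is a commonly cited CO2 absorption wavelength in NDIR gas sensing).

```python
import math

# Illustrative only: first-order diffraction angle for a grating of pitch d,
# showing how the landing position of radiation on a sensor element depends
# on wavelength. Pitch and wavelengths are assumed, not from the disclosure.

def diffraction_angle_deg(wavelength_um: float, pitch_um: float, order: int = 1) -> float:
    """Solve the grating equation d*sin(theta) = m*lambda for theta, in degrees."""
    s = order * wavelength_um / pitch_um
    if abs(s) > 1:
        raise ValueError("no propagating diffraction order for these parameters")
    return math.degrees(math.asin(s))

# A CO2 absorption-band wavelength versus a shorter reference wavelength:
for lam in (4.26, 3.90):
    print(f"{lam} um -> {diffraction_angle_deg(lam, pitch_um=10.0):.2f} deg")
```

Because the angle differs with wavelength, placing an aperture stop at a chosen position passes only the wavelength band steered toward it, which is the mechanism the cap's diffractive optical element and aperture stop exploit together.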
These and other features of the invention will be understood with reference to the following drawings, which are provided for an understanding of the teaching of the invention, are exemplary embodiments and are not intended to limit the invention in any way.

Brief Description Of The Drawings

The present invention will now be described with reference to the accompanying drawings in which:

Figure 1 is a cross section through an illustrative embodiment of a sensor for practicing the present invention.

Figure 1a is a section through a portion of a multi-layer sensor element that may be usefully employed within a sensor of the present invention.

Figure 2 is a perspective view from above of the sensor of Figure 1.

Figure 3 is an example of a methodology that may be employed for forming the sensor of Figure 1.

Figure 4A is an example of a first pattern that may be used to define an optical element in accordance with the teachings of the present invention.

Figure 4B is an example of a second pattern that may be used to define an optical element in accordance with the teachings of the present invention.

Figure 4C is an example of a third pattern that may be used to define an optical element in accordance with the teachings of the present invention.

Figure 5 is a plan schematic showing an example of a sensor including multiple sensor elements in accordance with an illustrative embodiment of the invention.

Figure 6 is an example of a pattern that may be used to define an optical element suitable for use with the multiple sensor elements of Figure 5 in accordance with the teachings of the present invention.

Figure 7 shows a further embodiment where the sensor includes a reference element.

Figure 8 shows a modification to the arrangement of Figure 7.

Figure 9 shows an exemplary embodiment of a sensor configuration that may be used within the context of the present invention.
Figure 10 shows a ray diagram for a capping arrangement incorporating an aperture stop in accordance with the teaching of the present invention.

Figure 11 shows the filter response of a CO2 lens (ray tracing) using a (66.6μm)2 centre pixel area, the response having been considered at 0° and 2° angle of incidence.

Figure 12 shows a CO2 lens filter response based on diffraction theory using a (60μm)2 pixel area.

Figure 13 shows the reflectance and transmittance spectrum for an AR coated silicon substrate.

Figure 14 shows the reflectance and transmittance spectrum for an HR coated silicon substrate designed to suppress transmission at 1.42μm.

Detailed Description Of The Drawings

The invention will now be described with reference to the exemplary embodiments of Figures 1 to 14. Although the invention has application in any electromagnetic (EM) radiation sensing environment, for ease of explanation it will now be described with reference to a preferred illustrative embodiment, that of a silicon wafer-based gas sensor. While it is possible for each of the embodiments illustrated hereinafter to be used in combination with one another, it will be understood that the invention is not to be construed in this limiting fashion, as features and components of one embodiment may or may not be used with those of another embodiment. In this way the invention is only to be limited insofar as deemed necessary in the light of the appended claims. In our earlier co-assigned applications such as US 11/584,725 we have described a number of structures that provide electromagnetic radiation sensors. Such structures were discussed with reference to fabrication in first and second substrates so as to enable provision of a cap element in a first substrate that is then locatable over sensing devices - such as bolometers or the like - provided in a second substrate.
By incorporation of optical elements, such as diffractive optical elements (DOEs), into the cap, it is possible to selectively focus light of a desired wavelength through the cap and onto the sensing devices on the second substrate. It will be understood that, as incident radiation on a DOE is diffracted according to its specific wavelengths, a suitably patterned DOE with a corresponding aperture stop may be used to selectively filter the incident radiation such that only that radiation meeting a predetermined wavelength criterion is transmitted through the cap and onto the sensing element provided below. Application of such structures to gas sensors is particularly advantageous in that selective wavelength filtering could be used to determine the presence or otherwise of specific gas constituents within the ambient environment. To fabricate such devices, a sensor device (or array of repeating sensor devices) is manufactured on one wafer substrate and a capping wafer is manufactured on a separate substrate. The capping wafer is joined to the sensor wafer and bonded to it under controlled ambient conditions, the preferred embodiment being under vacuum conditions. This bonded wafer arrangement can be singulated or sawn into individual capped sensor chips or cells for final packaging and sale. Such capping methodologies are well described in US Application No. 20030075794 of Felton et al, which is assigned to the Assignee of the present invention, and the contents of which are incorporated herein by reference. Figure 1 shows a cross section through a sensor device 100. The device includes a sensor or sensing element 105 formed in a first silicon wafer 110, or what is sometimes called a sensor die. As shown in Figure 1a, which, as will be appreciated, is highly schematic, the sensing element is desirably formed from a multilayer structure or stack having at least one absorber layer 160 provided therein.
Also provided within such a multilayer structure are typically a number of dielectric layers 165 and at least one resistor layer 170, the electrical properties of which can be monitored as an indicator of changes in incident radiation on the sensor element. It will be appreciated that such a structure has both a planar dimension and a depth dimension. Both these parts of the sensor element or optical pixel have an impact on the responsiveness of the sensor device. By suitably modifying the stack structure it is possible to broadly tune the responsiveness of the sensor element to a wavelength region. It will be appreciated that an example of such a modification could result from adjusting the relative thicknesses of the absorber layer(s) and the associated dielectric layers around it so as to shift the broad response of the overall stack into the desired wavelength region. A second part of the sensing element which has an effect on its optical characteristics is its physical planar dimension. It will be appreciated from an examination of Figure 1 that while the sensing element 105 may occupy a specific region within the first silicon wafer or substrate, its optical response area 105a may be smaller than that region. The sensing element 105 may not be optically sensitive across its planar area, in that only a sub-region 105a of the sensing element may be active. It is the planar sensitive or active region 105a and its X-Y dimension that can be usefully employed in determining the fine tuning aspects of the responsiveness of the sensor device 100. As part of the sensor device 100, a cap 115 is also provided, consisting of a silicon lid into which patterns 120 are etched to form an individual diffractive optical element. An aperture stop 901 is also included as part of the cap. The combination of the diffractive optical element and the aperture stop provides the elements of a narrowband filter for the sensor.
These elements, in combination with the planar physical dimensions of the active region 105a of the sensing element, are usefully employed in determination of the narrowband response characteristics of the sensor. This will be described in more detail below. Two possible approaches to implementing such a diffractive optical element (DOE) are known as amplitude modulation and phase modulation respectively. In the case of amplitude modulation, the surface pattern consists of areas that allow transmission of the radiation and areas that block the radiation. In the case of phase modulation, the pattern consists of height variations on the surface that effectively modify the relative phase of the radiation as a function of the relative height differences of the pattern. In this illustrated embodiment the pattern is provided on an interior surface 135 of the cap, but it will be appreciated that it could also be provided on an exterior surface 140. It will also be appreciated that the pattern, whose geometry is exaggerated for ease of viewing, includes a plurality of ridges 150 whose distance apart and depth are related to the wavelength of light with which the optical element is being used. The cap is typically formed in a second silicon wafer or capping die. This pattern 120 defined in the diffractive optical element cap 115 is capable of focusing incident radiation 125 of a given frequency onto the sensing element 105. This can be a focusing onto a specific plane of the sensor, onto a specific point on the sensor, or indeed a focusing of different frequencies onto different points. The cap 115 is bonded to the first wafer using a bond or seal material 130 and the bonding defines a sealed cavity 145, which can be at a different pressure than ambient pressure, typically a lower pressure.
Alternatively, the sealed nature of this cavity and the manufacturing process allow the ambient gas within the cavity to be different to air; for example one could use xenon, which has a lower thermal conductivity than air. It will be understood that xenon is provided only as an example of the type of other gas that may be usefully employed within the teaching of the present invention. Although a silicon cap is substantially opaque to incident light in the visible spectrum, and it may therefore be considered to occlude the light from impinging on the sensing element within, it will be appreciated that silicon allows a transmission of light in the infra-red frequencies of the EM spectrum and therefore, for this exemplary application, the provision of an IR gas sensor, it is a suitable material. Figure 2 shows an example of an assembled sensor device from which it will be seen that the sensing element is covered by the cap provided above it. A typical process flow for manufacture of the sensor is shown in Figure 3. Firstly, the sensor wafer 110 is manufactured using techniques that will be well known to those in the art (Step 300). The capping wafer is also manufactured (Step 310) separately. The manufacture of this capping wafer includes the etching of a desired pattern on either or both of the outer 140 or inner surface 135 of the cap. A structure for forming a narrowband filter, in the form for example of an aperture stop, may also be included at this stage. The aperture stop could be provided on either the outer 140 or inner 135 surface of the cap separate to the desired pattern, or could be integrally formed as part of the optical element. It will be understood that the aperture stop determines the amount of incident radiation that may be passed through the cap. The dimension and orientation of the aperture stop will determine which wavelengths of the incident radiation are subsequently incident onto the radiation sensing element in the substrate below.
An anti-reflective coating may additionally be added to the cap surface, either inner or outer. Once the desired components on each of the two wafer substrates are provided, the wafers may be brought together so as to be bonded (Step 320). Ideally, this bonding is achieved under vacuum conditions. Once the two wafers have been brought together, individual chips may be singulated or defined within the total area of the wafers by removing the areas of the second wafer that do not define the cap (Step 330). In this manner a plurality of individual chips or sensors may be provided in one process flow. It will be understood that the nature of the pattern defining the optical element and the geometry of the aperture stop will affect how the sensor performs. Figure 4 shows examples of pattern types, which can be implemented using either an amplitude modulation or a phase modulation approach, and which may be used to define diffractive optics in the sensor cap. It will be understood that the teaching of the invention, insofar as it relates to diffractive optical elements, is not to be construed as being limited in any fashion to the following exemplary arrangements. The example of Figure 4A is optimised for a focusing of parallel input light of wavelength 10 micrometers down to a focal plane 300 micrometers away using a sinusoidal variation in the height of the diffractive optical element for a phase modulation approach. The relative heights of the sinusoid are represented by the gray scale variation in the pattern; for an amplitude modulation approach the gray scale would represent the transmission efficiency of the pattern. The example of Figure 4B is designed for a focusing of parallel input light of wavelength 10 micrometers down to a focal plane 370 micrometers away, but in this case the black and white pattern represents a single step height variation to implement the grating of the phase modulated diffractive optical element rather than a sinusoidal variation.
The example in Figure 4C also uses a single step height variation to implement the diffractive optical element, but in this case it is designed to focus parallel input light of wavelength 10μm down to a focal plane 10 micrometers away. It will be understood that these three examples are illustrative of the type of pattern that may be used and that different design requirements regarding the control of the focus plane, or independent control over different wavelength components within the incident radiation, are also possible with this approach and are covered by this invention. The examples consisting of black and white circles in Figures 4B and 4C can represent either a transmission pattern or a phase modulation pattern that focuses the light, but suffer in that they also incur transmission losses. It will be appreciated however that the design of the pattern may be optimised to achieve lower loss criteria, for example by introducing curved side walls in the ridge features defining the grating, as represented by the grayscale diagram of Figure 4A. The cap provided by the present invention is advantageous in a number of aspects. It serves to: 1) protect the membrane during subsequent handling; 2) provide a housing for the sensing membrane that can be evacuated during manufacture; and 3) allow patterning and etching in such a way as to focus the incident infra-red radiation onto a single point to amplify the signal, or onto an array to create an image of a scene. In particular, the pattern can be such as to implement an optical element in the form of a diffractive optical element. The creation of an optical element for this application is advantageous in that the lens can be implemented in silicon rather than the more exotic (and expensive) materials required heretofore for an infrared refractive lens.
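While the specific patterns of Figures 4A to 4C are not reproduced here, the focusing geometry they rely on can be sketched with the textbook Fresnel zone-plate construction (an illustrative assumption, not the patented design): the n-th ring boundary sits where the optical path via the ring exceeds the axial path by n half-wavelengths.

```python
import math

def zone_radii(wavelength_um, focal_um, n_zones):
    """Fresnel zone boundary radii for a diffractive lens.

    The n-th zone edge satisfies sqrt(r_n**2 + f**2) = f + n*lambda/2,
    giving r_n = sqrt(n*lambda*f + (n*lambda/2)**2).  All lengths in um.
    """
    return [math.sqrt(n * wavelength_um * focal_um + (n * wavelength_um / 2) ** 2)
            for n in range(1, n_zones + 1)]

# Figure 4-style example: 10 um design wavelength, 300 um focal plane.
print([round(r, 1) for r in zone_radii(10.0, 300.0, 4)])
# innermost zone edge at 55.0 um, with the rings crowding closer together
# further from the axis - the characteristic zone-plate appearance
```

Etching a single step at alternate zones, of depth roughly λ/(2(n−1)) (about 2μm in silicon at 10μm, taking n ≈ 3.42 as an assumed refractive index), would then implement a binary phase version of such a pattern.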
The advantage resulting from the use of diffractive optics in the silicon cap is that the lenses can be patterned and etched at the wafer batch level using well established processes and bonded to the sensor wafers, resulting in a cost effective lens compared to the refractive lens technologies heretofore employed. This approach may be applicable to other electromagnetic radiation sensors in addition to the infrared application described here. For example the cap could be made of quartz, or in some cases standard glasses such as pyrex, or possibly sapphire, if the sensor is to be used for applications other than IR sensing. In some applications it may also be useful to be able to use the lens/cap configuration to focus different wavelengths within the incoming radiation onto different sensors enclosed by the cap. Figure 5 is a schematic illustration of one such example where four sensing elements 501, 502, 503, 504 are provided within the same cap arrangement. It will be appreciated that suitable designing of the lens arrangement may allow for an optimisation of the sensor to focus one particular wavelength while defocusing (rejecting) others. This would allow individual intensity measurement of different wavelength components within the infrared radiation, a capability that could be very useful in, for example, gas analysis such as alcohol breath samplers where there is a desire to monitor the level of ethyl alcohol in the breath of a person. As alcohol has specific absorbance peaks in the IR spectrum, the focusing of radiation coincident with these peaks onto specific ones of the sensor elements 501, 502, 503, 504 provided in an array below the cap will enable the discrimination of any change in the intensity of the radiation at those specific frequencies and therefore serve as an indicator of alcohol present in a sample.
As each of the sensor elements is configured to react to incident radiation of a suitable frequency, an analysis of the performance of each of the sensor elements when that radiation is incident on the individual sensors indicates the presence or absence of the material to which it is designed to react, providing a wavelength signature of the gas being analysed. Figure 6 is an example of a diffractive optical element (DOE) design using an amplitude modulation approach that could be used in combination with the sensor arrangement of Figure 5 to focus each one of four distinct wavelengths within the incident radiation onto one of the four sensing elements 501, 502, 503, 504 that are shown in Figure 5. Such a design or pattern could be fabricated by creating a single step in the lens or by providing multiple steps of different heights. It will be appreciated that the invention is not intended to be limited in any way as to the fabrication of a DOE, in that it is intended to encompass all methods of manufacture, be they single step, multiple step or other variants. It will be understood that the techniques of the present invention provide an efficient way to provide an IR sensor array such as, for example, a 60x60 array. Such configurations are desirable for applications such as IR imaging where a sensor array of the present invention may be used to replace conventional IR arrays. Current IR arrays do not have the lens and sensor array integrated in a low cost unit as provided for by this invention. Current conventional IR arrays provide a vacuum package with an IR transparent window or lens in the package rather than the wafer level solution described by this invention. The dimensions of a sensor in accordance with the present invention are typically of the order of micrometres to millimetres.
For example when targeting radiation of a wavelength of 10 micrometers, a cap may be dimensioned to have a collection area of about 1 mm2 and be of a height of about 160 micrometers above the sensor element. These dimensions are however purely for illustrative purposes and it is not intended to limit the present invention to any one set of dimension criteria. The fabrication of the sensor of the present invention has been described with reference to an etch process. Typically this etch will be of the type of process known as deep reactive ion etching (DRIE), which inherently produces substantially vertical sidewalls (approximately 90 degrees). One of the advantages of such a process is that with such verticality less space is required for the cavity sidewalls. This directly affects the size of the "window" and thus the overall size of the cap which can be made. By reducing the cap size there is a reduction in the area required on the chip, with a corresponding reduction in the "wasted" space under and around the cap edges. Heretofore, a sensor in accordance with the teaching of the invention has been described with reference to a sensing device with a transparent window. The invention also provides in certain embodiments for the fabrication of a second cell, also incorporating a sensing device, which provides a different response to that of the first cell. This second cell may then be considered a reference cell, which differs from the first sensing cell in that its response may be used in combination with that of the sensing cell to allow for a discrimination in the response of the sensing cell. One example of this is to make the reference cell totally opaque so its sensor sees only the cap (i.e. 300K in the case of IR sensors), but one could make the reference partially opaque so there was always a known fraction of the ambient radiation getting through.
There would be advantages to this in applications for gas sensors, where the reference cell could be illuminated with radiation coming through the same optical path as the sensing side except for the gas to be sensed. This would remove spurious dependencies of the signal on, e.g., water vapour. A further example would be where the optical characteristics of the second cell are the same as those of the first cell but it is selectively illuminated with radiation of a different frequency, i.e. a different source of radiation, so as to provide an output which is different to, but which can be compared with, that of the first cell. In all cases, however, it will be understood that the second cell is configured to provide a different response output to that of the first cell; the variance in response of this second reference cell may be provided by altering the characteristics of the cap used for the second cell, with the output of the second cell being used to reference or calibrate the output of the first cell. Typical embodiments will employ a reference cell with an optically opaque window. Such opacity may be used to provide a "dark" cell, one which will provide a signal output that is independent of the level of radiation being sensed by the first cell. Figure 7 shows an example of such an arrangement. The same reference numerals will be used for components already described with reference to prior Figures. In this arrangement a sensor device 700 includes a first cell 710 which provides an output indicative of the level of radiation incident on the sensor device and a second cell 720 which provides an output which is independent of the level of radiation incident on the sensor device. The first and second cells each include an IR sensor 105 formed on a first substrate 110 and each have a cap 716, 726 provided thereabove.
The capping of each cell serves to define a controlled volume above each sensor, which as described above can be suitably evacuated or filled with a specific gas depending on the application. The second cell 720 differs from the first in that it is configured so as to prevent the transmission of radiation through the cap and onto the sensor 105. This may be achieved by providing an optically opaque layer 730 on the cell. The second cell can therefore be considered a reference cell, whose output is independent of the incident radiation. The output of this second cell can then be used to calibrate the output of the first cell, whose signal output will be determined by the intensity of the incident radiation thereon. Alternatively, the DOE pattern chosen for the second cell could be used to selectively filter radiation of a second wavelength, different to that of the first cell, such that each cell provides an output in response to radiation of a different wavelength. This is particularly useful in the context of gas sensors, where one cell can be tuned to a first desired wavelength peak and the second to a second, different peak. Relative scaling between the two peaks can be used to give an indicator of the presence or otherwise of a specific gaseous compound. It will be understood that by providing such a reference cell, a sensor device in accordance with the teaching of the invention enables a detection of radiation by providing for a comparison between the outputs of an exposed sensor and those of a reference sensor with a darkened or otherwise differentiated response. In this device only the optical properties of the reference sensor are changed; the thermal and electrical properties are the same as those of the illuminated sensor. In this way an accurate and precise sensing of incoming radiation is possible, be that IR radiation or any other type of electromagnetic radiation such as that in the visible spectrum.
The arrangement of the two cells shown in Figure 7 is of two distinct cells, each being formed separately. Alternative arrangements, such as that shown in Figure 8, may provide a single cap 800 which is micro-machined to define two cavities or chambers 805, 810, one of which 805 is locatable over the illuminated element and the second 810 over the non-illuminated element. Each of the two defined areas has an IR sensitive element 105 and may be formed using any convenient process. The interior of the cap cavities may be filled with any desirable gas ambient (e.g. air, nitrogen, argon, xenon) or indeed simply provided as a vacuum. The cap is sealed to the substrate using a sealing process which can provide the necessary level of hermetic seal. Such techniques will be apparent to the person skilled in the art. The shield 730 which blocks the IR radiation is conveniently fabricated using a thin metal layer which will reflect incoming radiation. In order to avoid heating the cap non-uniformly, the IR blocking layer should desirably be a reflector, not an absorber. As shown in Figure 8, a gap 820 in the sealing may be left between the individual lid chambers to allow the pressure in each chamber to equalise, independent of the leak rate of the overall cap. Such an arrangement addresses the issue with many MEMS based IR sensor devices, which are very sensitive to the ambient pressure. In order to define the two chambers, a column 825 is provided. The column extends downwardly from the top 830 of the cap 800, and terminates at the gap 820 between the two chambers. The column 825 may be coated or doped to minimise the leakage of radiation between the two cavities 805, 810. Typical dimensions for the column are 50-100 microns wide and 170 microns high.
The gap is typically of the order of 6 microns high, which is of the order of the wavelength of the IR radiation being monitored, so it is unlikely that any radiation could transfer through the gap from the illuminated cavity to the non-illuminated one. However, if required, further guarantees of the integrity of the dark cavity could be achieved by providing a step pattern - similar to a saw-tooth arrangement - so as to allow the equalisation of pressure but occlude the transfer of radiation. To further reduce the level of IR contamination within the un-illuminated cavity side, the walls of the separation region may also be coated with a reflecting metal (or other IR type barrier) to block IR which has been reflected from the illuminated surface. Alternatively this region may be treated (e.g. heavily doped to sufficient density using for example a polysilicon material, or oxidized to sufficient thickness) in such a way as to absorb any reflected IR. The absorbing of the radiation is a preferred way to achieve the blocking of IR through the internal portions of the cavity as it ensures that the radiation is taken out of the cavity as opposed to just bounced to another region, which would be the case in a reflective solution. The absorption provided by the side walls serves to damp down reflections to prevent the creation of spurious signals within each cell. A further suitable technique could be to simply space the non-illuminated sensor sufficiently from the illuminated sensor so that the radiation will be absorbed naturally in the silicon. It will be understood that a sensor arrangement in accordance with the teaching of the invention provides for the use of high thermal conductivity materials for the cap so as to ensure that the two sensing devices are exposed to the same temperature surface, thus again minimising thermal contamination problems.
While described with reference to silicon, it will be understood that other materials such as germanium could also be used. By using a capping arrangement such as that described herein it is possible to locate the illuminated and non-illuminated sensors adjacent to one another. As a result they can be fabricated at the same fabrication efficiency and the only difference between the two is the optical environment in which they operate. This is particularly useful for sensors that are used in high sensitivity applications where small differences in output between the two sensors (the reference and the active) are indicative of an actual measurement. By providing at least two cells which differ in their response characteristics it is possible to define such active and reference cells as has just been described. The provision of the differing response characteristics can be implemented in any one of a number of different manners, for example by modifying the optical response characteristics, the electrical characteristics or the thermal response characteristics, or even by keeping all three of these characteristics the same and just illuminating each cell with a different source of irradiation. The arrangements described heretofore are advantageous in that they enable the selective wavelength filtering of incident radiation onto sensing devices provided in the substrate below the cap. By providing one or more diffractive optical elements (DOEs) within the capping arrangement it is possible to tune the sensing devices for particular wavelengths. In this regard it will be appreciated that DOE lenses are highly wavelength sensitive (i.e. suffer from chromatic aberration). This property can be exploited for spectroscopic applications, one specific example being in gas sensors.
As an example of an application in the gas sensing environment, we will now consider the example of detection of carbon dioxide (CO2) concentrations by measuring the relative absorption at two discrete wavelengths in the infrared corresponding to strong and weak absorption lines. By using first and second sensing chambers, each of the chambers can be tuned to an appropriate wavelength by provision of a suitable lens in the cap defining each of the two chambers. The gas sensing DOE lenses described herein are based on the principle of dividing the available collection aperture between two lenses, so one lens is designed, for example, to collect radiation having a wavelength of 4.26μm (the CO2 absorption line) and the other for 3.6μm (a reference line). This is at least 50% inefficient, since the useful CO2 radiation falling on the reference lens is rejected, and likewise for the reference wavelength falling on the CO2 lens. Ideally, use of a single lens that efficiently spatially separates the two wavelengths would be preferable. This is possible in principle, but requires the ability to generate thick volume gratings, which is not compatible with single etch step gratings. To write a thick volume grating would require the ability to controllably vary the refractive index in a 3D volume, a process achieved today using photorefractive holographic materials. As will be appreciated from the discussion above, DOE lens design is a well established practice in the general case. The details of the quantisation used in defining the DOE surface have an influence on both the diffraction efficiency and the bandwidth of the response. For a given design wavelength and focal length, the grating radii are fixed. However, there is freedom in the choice of maximum etch depth and number of quantisation steps. Generally, the greater the etch depth the narrower the bandwidth, and more quantisation steps give better diffraction efficiency.
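The trade-off between the number of quantisation steps and first-order diffraction efficiency can be illustrated with the standard scalar-theory result for an N-level phase profile, η₁(N) = sinc²(1/N) with sinc(x) = sin(πx)/(πx) (a textbook relation assumed here for illustration, not a figure taken from the text):

```python
import math

def first_order_efficiency(n_levels):
    """Scalar-theory first-order efficiency of an N-level quantised DOE:
    eta = sinc(1/N)**2, where sinc(x) = sin(pi*x)/(pi*x)."""
    x = 1.0 / n_levels
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2

for n in (2, 4, 8):
    print(n, round(first_order_efficiency(n), 3))
# 2 levels (single mask/etch): ~0.405; 4 levels: ~0.811; 8 levels: ~0.950
```

Each doubling of the level count typically costs an additional mask and etch step, which is consistent with the preference expressed here for a single-step (two-level) grating, recovering spectral selectivity with the aperture stop instead.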
Unfortunately, using more than one mask to define the grating increases the efficiencies of the harmonics, which would then require separate filtering. In addition, deep etches are technologically more difficult and still do not provide sufficiently narrow filter responses. As a result, it will be understood that one can limit the process to single step etches of minimum depth and obtain additional filtering function by employing an on-axis stop and limiting the effective receiving area of the target pixel. Such an on-axis stop or aperture stop was mentioned above with reference to Figure 1, but the ray diagram of Figure 9 shows how such a stop may be employed. In this exemplary arrangement, the stop 901 is located on the bottom surface of the silicon and rejects both the red light 905, which has a longer wavelength, and the blue light 910, which has a shorter wavelength, than the desired predetermined design wavelength 915, referred to in the key to the drawing as green. The 'green' light is unaffected by the stop 901 and therefore is incident on the sensing element 105. The 'blue' light would come to a focus behind the array and the 'red' focuses in front of it. If the receiving pixel (or an aperture defined on the pixel) has dimensions less than that indicated on Figure 9, it can only detect within the band defined by the blue and red wavelengths. The spot diagram for three wavelengths (4.16, 4.26 and 4.36μm) is given in Figure 10 for a 250μm radius stop. The stop generates a clearly defined shadow on the focal plane in the blue and red wavelengths. It will be understood that a pixel smaller than this shadow can only detect wavelengths within the band defined by the red and blue wavelengths. If the pixel size is physically bigger than the shadow, its effective size can be reduced by delineating, using standard lithographic techniques, an aperture on the pixel which is commensurate with or smaller than the shadow.
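The behaviour just described, with the longer 'red' wavelength focusing in front of the pixel and the shorter 'blue' wavelength behind it, follows from the chromatic scaling of a diffractive lens. A minimal sketch, assuming the textbook relation f(λ) ≈ f_d·λ_d/λ for a DOE designed for wavelength λ_d and focal length f_d (the relation itself is an assumption for illustration):

```python
def doe_focal_length(wavelength_um, design_wavelength_um=4.26, design_focal_um=370.0):
    """Chromatic focal shift of a diffractive lens: f scales as 1/wavelength."""
    return design_focal_um * design_wavelength_um / wavelength_um

# The three spot-diagram wavelengths discussed above.
for lam_um in (4.16, 4.26, 4.36):
    print(lam_um, round(doe_focal_length(lam_um), 1))
# 4.16 um ('blue') comes to a focus ~9 um behind the 370 um design plane,
# 4.36 um ('red') ~8 um in front of it, so a small pixel (or delineated
# aperture) sitting at the design plane only sees the band in between.
```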
It will be understood that for a given filter bandwidth the aperture stop must be larger if placed on the top surface of the cap as opposed to the bottom surface illustrated with reference to Figure 10. Using ray tracing techniques, given a distribution of incident rays on the DOE lens, the subsequent distribution of rays on the detector pixel can be calculated as a function of wavelength and angle of incidence. Typical results are shown in Figure 11 for a collimated beam incident on the lens both at normal incidence (0°) and with a 2° offset. For on-axis radiation, the full width half maximum (FWHM) bandwidth is better than 200nm but is clearly very sensitive to incidence angle. For this example the DOE, of focal length 370μm, had a width of 1mm and the pixel receiver width was 66.6μm. The frequency response has also been calculated using diffraction theory. The results are shown in Figure 12 for a (60μm)² pixel and are comparable with those from ray tracing. Since diffraction effects are not accounted for by ray tracing, that method gives a slightly narrower bandwidth despite the use of a larger pixel. Diffraction theory gives a FWHM bandwidth of 160nm. This, of course, relies on the input beam being a plane wave at normal incidence. If the input beam contains a finite bandwidth of incidence angles the response will be broadened in line with Figure 12. It will be appreciated that if a DOE is designed to bring the mth order (here m = 1) diffracted light of wavelength λd to a focal point, then the other wavelengths, λp, also brought to the same focal point will be given by

λp = (m/p)·λd

where p is the diffraction order. For the example design wavelength of 4.26μm and m = 1, the 2nd, 3rd, 4th and 5th order harmonics are at 2.13, 1.42, 1.065 and 0.85μm respectively. The diffraction efficiency for the even harmonics is zero; for the third and fifth harmonics the efficiencies are 4.5% and 1.6% respectively. The latter will be significantly damped by the Si absorption; however, this is not true of λ = 1.42μm.
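The harmonic wavelengths quoted above follow directly from the relation λp = (m/p)·λd for the wavelengths that higher orders p bring to the same focal point. A one-line numerical check:

```python
LAM_D = 4.26   # design wavelength in um (the CO2 line)
M = 1          # design diffraction order

# Wavelengths brought to the same focal point by higher orders p = 2..5:
# lam_p = (M / p) * LAM_D.  Rounded to 3 decimals for display.
harmonics = [round(M * LAM_D / p, 3) for p in range(2, 6)]
print(harmonics)  # [2.13, 1.42, 1.065, 0.852]
```

These match the 2.13, 1.42, 1.065 and 0.85μm harmonics listed in the text (the last value rounded there to two decimals). Note the code only reproduces the wavelengths; the harmonic efficiencies (zero for even orders, 4.5% and 1.6% for the 3rd and 5th) are taken from the text, not computed here.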
It will be understood therefore that any gas with an absorption line in the vicinity (±70nm) could lead to cross sensitivity. For example, water vapour has an absorption line centred on 1.38μm. However, its absorption coefficient is ~100 times less than that of CO2; added to a 10-fold decrease in the DOE diffraction efficiency at this wavelength, the sensitivity to water vapour will be three orders of magnitude down on that for CO2. To provide for improvements in response characteristics, anti-reflection (AR) or high-reflection (HR) coatings can be applied to the top and/or bottom surfaces of the cap. For the wavelengths in question for CO2 this filter can use typical integrated circuit fabrication materials such as Si3N4 and SiO2. For example, an AR coating of a 462nm/258nm oxide/nitride pair deposited on silicon will yield zero reflectance (100% transmittance) at 4.26μm. This can be compared to the 30% reflectance from a bare silicon interface at this wavelength. The reflectance and transmittance spectrum is shown in Figure 13. For a HR coating, suppose we wish to suppress radiation at 1.42μm. An HR pair of 185nm/256nm nitride/oxide films on silicon gives the response spectrum shown in Figure 14. The transmission at 1.42μm is suppressed by a factor of 2 (4 if the coating is applied on both silicon interfaces) with respect to the CO2 wavelength. The inclusion of such an additional set of thin layers necessitates modifying the actual DOE lens design, but this will be well understood by those skilled in the art. It will be understood that the sensors described herein have been illustrated with reference to exemplary embodiments. The features of any one embodiment may be used with those of another embodiment or indeed can be applied independently of the structural features of the other embodiment. Applications for such sensors can be in a plurality of environments such as IR to digital converters, both single pixel and arrays.
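The water-vapour cross-sensitivity estimate above is a product of two suppression factors, which can be made explicit. The factors below are the approximate ones stated in the text (~100× weaker absorption, ~10× lower diffraction efficiency); they are order-of-magnitude figures, not measured values.

```python
import math

# Cross-sensitivity of the CO2 channel to water vapour at 1.38 um:
# two independent suppression factors multiply.
absorption_factor = 1 / 100   # H2O line ~100x weaker absorption than CO2
efficiency_factor = 1 / 10    # ~10x lower DOE diffraction efficiency there

relative_sensitivity = absorption_factor * efficiency_factor
orders_of_magnitude = -math.log10(relative_sensitivity)
print(relative_sensitivity, orders_of_magnitude)  # ~0.001, i.e. ~3 orders down
```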
Further applications include single point thermal measurement systems, e.g., digital thermometers, intruder alarms and people counting sensors, and incorporation into infrared cameras to thermally image scenes. These and other applications will be readily apparent to the person skilled in the art on review of the teaching set forth hereinbefore. Therefore, while the invention has been described with reference to preferred embodiments, it will be understood that it is not intended that the invention be limited in any fashion except as may be deemed necessary in the light of the appended claims. By providing an aperture stop it is possible to selectively obstruct the transmission of certain parts of the radiation onto the sensing portion of the sensor. In this way a narrowband response can be generated as required. It will be understood that provision of an on-axis stop can provide for a limiting of the effective receiving area of the sensing device. It will be understood that the "stop" described with reference to Figure 9 was a circular stop provided in a lower surface of the cap and configured to prevent the transmission of directly transmitted light through the cap. Such an arrangement provides a narrow band filter, and the specifics of how such a narrow band filter is provided within the cap are not to be construed as being limited to that which was described herein for the purposes of explanation. Within the context of the present invention, any stop or narrow band filter which, in combination with a diffractive optical element, enables the selective transmission of incident radiation through the cap and onto the sensing element below could be usefully employed. It will be understood that it is not intended to limit the teaching of the invention to any one specific geometrical configuration. For example, the inverse of the circular obstruction of Figure 9 would be a 'traditional' aperture stop, i.e., one that defines the outer useful part of the DOE lens.
Such an arrangement may be usefully employed where it is intended to stop rays originating from the extreme edges of the DOE, for example where the quality may be poorer with respect to the centre, or where a circular lens is desired rather than the rectangular lens described, which will ensure the image is also circular. The additional circular obstruction then defines the inner useful part of the lens. A narrow, ring-shaped, clear aperture might have beneficial filtering properties for other applications. Within this context it will be appreciated that it is not intended to limit the present invention to any one specific geometrical arrangement except as may be deemed necessary in the light of the appended claims. The words upper, lower, inner and outer are used for ease of explanation so as to illustrate an exemplary illustrative embodiment, and it is not intended to limit the invention to any one orientation. Similarly, the words comprises/comprising, when used in this specification, specify the presence of stated features, integers, steps or components but do not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof. Furthermore, although the invention has been described with reference to specific examples, it is not intended to limit the invention in any way except as may be deemed necessary in the light of the appended claims, and many modifications and variations to that described may be made without departing from the spirit and scope of the invention. Indeed, where integers or components are described with reference to any one specific figure, it will be understood that such integers or components could be interchanged or replaced with those from other figures or elsewhere without departing from the teaching of the invention.
PROBLEM TO BE SOLVED: To provide a magnetic tunnel junction storage element for a spin transfer torque magnetoresistive random access memory (STT-MRAM) bit cell.

SOLUTION: The element includes: a bottom electrode layer (150); a pinned layer (160) adjacent to the bottom electrode layer; a dielectric layer (70) encapsulating a portion of the bottom electrode layer and the pinned layer, the dielectric layer including sidewalls that define a hole adjacent to a portion of the pinned layer; a tunneling barrier (190) adjacent to the pinned layer; a free layer (200) adjacent to the tunneling barrier; and a top electrode (210) adjacent to the free layer. The width of the bottom electrode layer and/or the pinned layer in a first direction is greater than the width of a contact area between the pinned layer and the tunneling barrier in the first direction. Also disclosed is a method of manufacturing an STT-MRAM bit cell.
1. A memory device having a magnetic tunnel junction (MTJ) storage element, the MTJ storage element comprising: a bottom electrode; a pinned layer adjacent to the bottom electrode; a dielectric layer encapsulating the bottom electrode and a portion of the pinned layer, the dielectric layer including sidewalls defining a hole adjacent to a portion of the pinned layer; a tunneling barrier adjacent to the pinned layer; a free layer adjacent to the tunneling barrier; and a top electrode adjacent to the free layer; wherein the width of the bottom electrode and/or the pinned layer in a first direction is greater than the width of the contact area between the pinned layer and the tunneling barrier in the first direction.
2. The memory device of claim 1, wherein a portion of one of the tunneling barrier and the free layer is disposed along a sidewall of the hole and perpendicular to the bottom electrode and the pinned layer.
3. The memory device of claim 1, wherein the top electrode fills a portion of the hole above the free layer.
4. The memory device of claim 1, wherein the tunneling barrier has a U-shaped cross section with a first leg and a second leg, the first leg extending along a sidewall of the hole.
5. The memory device of claim 4, wherein the free layer has a U-shaped cross section and is nested within the U-shaped tunneling barrier.
6. The memory device of claim 1, integrated into an electronic device selected from the group consisting of a set top box, a music player, a video player, an entertainment unit, a navigation device, a communication device, a PDA, a fixed position data unit, and a computer.
7. The memory device of claim 1, which is a spin transfer torque magnetoresistive random access memory (STT-MRAM).
8. A method of manufacturing a memory device having a magnetic tunnel junction (MTJ) storage element, comprising: forming a bottom electrode on a substrate; forming a pinned layer on the bottom electrode; depositing a dielectric layer on the bottom electrode and the pinned layer; patterning and etching, in the dielectric layer, a hole having a sidewall extending down to the pinned layer; depositing a tunneling barrier layer in a first portion of the hole to form a tunneling barrier on the pinned layer; depositing a free layer within a second portion of the hole such that the free layer is above the tunneling barrier; and depositing a top electrode layer on the free layer.
9. The method of claim 8, wherein the width of the bottom electrode and/or the pinned layer in a first direction is greater than the width of the contact region between the pinned layer and the tunneling barrier in the first direction.
10. The method of claim 8, wherein a portion of one of the tunneling barrier and the free layer is formed along the sidewall of the hole and perpendicular to the bottom electrode and the pinned layer.
11. The method of claim 8, wherein the top electrode fills the remaining portion of the hole above the free layer.
12. The method of claim 8, wherein the tunneling barrier has a U-shaped cross section with a first leg and a second leg, the first leg extending along the sidewall of the hole.
13. The method of claim 12, wherein the free layer has a U-shaped cross section and is nested within the U-shaped tunneling barrier.
14. The method of claim 8, further comprising cleaning the bottom electrode layer and the pinned layer prior to depositing the dielectric layer.
15. The method of claim 14, further comprising patterning and etching the bottom electrode and the pinned layer prior to the cleaning.
16. The method of claim 15, further comprising exposing the bottom electrode and the pinned layer to a magnetic annealing process in vacuum prior to patterning and etching the bottom electrode and the pinned layer.
17. The method of claim 8, wherein the tunneling barrier, the free layer and the top electrode are deposited over the dielectric layer and the hole.
18. The method of claim 8, including removing portions of the tunneling barrier, the free layer and the top electrode located above the hole opening.
19. The method of claim 18, wherein removing the portions of the tunneling barrier, the free layer and the top electrode comprises chemically mechanically polishing the portions of the tunneling barrier, the free layer and the top electrode located above the hole opening.
20. The method of claim 8, wherein the memory device is integrated into an electronic device selected from the group consisting of a set top box, a music player, a video player, an entertainment unit, a navigation device, a communication device, a PDA, a fixed position data unit, and a computer.
21. The method of claim 8, wherein the memory device is a spin transfer torque magnetoresistive random access memory (STT-MRAM).
22. A memory device having a magnetic tunnel junction (MTJ) storage element, comprising: bottom conductive means for electrically connecting the MTJ storage element; first magnetic means, adjacent to the bottom conductive means, for retaining a first polarization; first insulating means encapsulating the bottom conductive means and a portion of the first magnetic means, the first insulating means including a sidewall defining a hole adjacent to a portion of the first magnetic means; second magnetic means for holding a second polarization which is reversible; second insulating means, separating the first magnetic means and the second magnetic means, for passing a tunneling current between the first magnetic means and the second magnetic means; and top conductive means, adjacent to the second magnetic means, for electrically connecting the MTJ storage element; wherein the width of the bottom conductive means and/or the first magnetic means in a first direction is greater than the width of the contact region between the first magnetic means and the second insulating means in the first direction.
23. The memory device of claim 22, wherein a portion of one of the second insulating means and the second magnetic means is disposed along a sidewall of the hole and perpendicular to the bottom conductive means and the first magnetic means.
24. The memory device of claim 22, wherein the top conductive means fills a portion of the hole above the second magnetic means.
25. The memory device of claim 22, wherein the second insulating means has a U-shaped cross section with a first leg and a second leg, the first leg extending along the sidewall of the hole.
26. The memory device of claim 25, wherein the second magnetic means has a U-shaped cross section and is nested within the U-shaped second insulating means.
27. The memory device of claim 22, integrated into an electronic device selected from the group consisting of a set top box, a music player, a video player, an entertainment unit, a navigation device, a communication device, a PDA, a fixed position data unit, and a computer.
28. The memory device of claim 22, which is a spin transfer torque magnetoresistive random access memory (STT-MRAM).
29. A method of manufacturing a memory device having a magnetic tunnel junction (MTJ) storage element, comprising: forming a bottom electrode on a substrate; forming a pinned layer on the bottom electrode; depositing a dielectric layer on the bottom electrode and the pinned layer; patterning and etching, in the dielectric layer, a hole having a sidewall extending down to the pinned layer; depositing a tunneling barrier layer in a first portion of the hole to form a tunneling barrier on the pinned layer; depositing a free layer within a second portion of the hole such that the free layer is above the tunneling barrier; and depositing a top electrode layer on the free layer.
30. The method of claim 29, wherein the width of the bottom electrode and/or the pinned layer in a first direction is greater than the width of the contact area between the pinned layer and the tunneling barrier in the first direction.
31. The method of claim 29, wherein a portion of one of the tunneling barrier and the free layer is formed along the sidewall of the hole and perpendicular to the bottom electrode and the pinned layer.
32. The method of claim 29, wherein the top electrode fills the remaining portion of the hole above the free layer.
33. The method of claim 29, wherein the tunneling barrier has a U-shaped cross section with a first leg and a second leg, the first leg extending along the sidewall of the hole.
34. The method of claim 33, wherein the free layer has a U-shaped cross section and is nested within the U-shaped tunneling barrier.
35. The method of claim 29, further comprising cleaning the bottom electrode layer and the pinned layer prior to depositing the dielectric layer.
36. The method of claim 35, further comprising patterning and etching the bottom electrode and the pinned layer prior to the cleaning step.
37. The method of claim 36, further comprising exposing the bottom electrode and the pinned layer to a magnetic annealing process in vacuum prior to patterning and etching the bottom electrode and the pinned layer.
38. The method of claim 29, wherein the tunneling barrier, the free layer and the top electrode are deposited over the dielectric layer and the hole.
39. The method of claim 29, comprising removing portions of the tunneling barrier, the free layer and the top electrode located above the hole opening.
40. The method of claim 29, wherein the memory device is integrated into an electronic device selected from the group consisting of a set top box, a music player, a video player, an entertainment unit, a navigation device, a communication device, a PDA, a positional data unit, and a computer.
41. The method of claim 29, wherein the memory device is a spin transfer torque magnetoresistive random access memory (STT-MRAM).
Spin transfer torque magnetoresistive random access memory (STT-MRAM) with a magnetic tunnel junction (MTJ) storage element.

The disclosed embodiments relate to spin transfer torque magnetoresistive random access memory (STT-MRAM) cells and methods of manufacturing the same. In particular, the exemplary embodiments are directed to magnetic tunnel junction (MTJ) storage elements usable in STT-MRAM cells and methods of manufacturing the same. Magnetoresistive random access memory (MRAM) is a non-volatile memory technology that uses magnetic elements. For example, a spin transfer torque magnetoresistive random access memory (STT-MRAM) uses electrons that become spin polarized as they pass through a thin film (spin filter). STT-MRAM is also known as spin transfer torque RAM (STT-RAM), spin torque transfer magnetization switching RAM (Spin-RAM), and spin momentum transfer RAM (SMT-RAM). FIG. 1 shows a conventional STT-MRAM bit cell 100. The STT-MRAM bit cell 100 includes a magnetic tunnel junction (MTJ) storage element 105, a transistor 110, a bit line 120, and a word line 130. As shown in FIG. 1, the MTJ storage element is formed from a pinned layer and a free layer, each of which can hold a magnetic field or magnetic polarization, separated by an insulating (tunneling barrier) layer. The polarization of the free layer is reversible, such that the polarizations of the pinned layer and the free layer are either substantially aligned or opposite. The resistance of the electrical path through the MTJ varies depending on the relative orientation of the polarizations of the pinned and free layers. As is known, this change in resistance can be used to program and read the bit cell 100. The STT-MRAM bit cell 100 also includes a source line 140, a sense amplifier 150, a read/write circuit 160, and a bit line reference 170.
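The resistance-based read described above can be sketched in a few lines. This is an illustrative model only: the resistance values and the midpoint-reference sensing scheme are assumptions for the sketch, not figures from the text.

```python
# Reading an MTJ bit cell exploits the resistance difference between the
# parallel (low-R) and anti-parallel (high-R) states.  Values are illustrative.
R_P = 2000.0    # ohms, parallel (aligned polarizations, low resistance)
R_AP = 4000.0   # ohms, anti-parallel (opposite polarizations, high resistance)

def mr_ratio(r_p, r_ap):
    """Magnetoresistance ratio, (R_AP - R_P) / R_P."""
    return (r_ap - r_p) / r_p

def read_bit(r_measured, r_reference=(R_P + R_AP) / 2):
    """Compare the measured cell resistance against a reference,
    roughly as a sense amplifier with a bit line reference would."""
    return 1 if r_measured > r_reference else 0

print(mr_ratio(R_P, R_AP))             # 1.0, i.e. a 100% MR ratio
print(read_bit(R_AP), read_bit(R_P))   # anti-parallel reads 1, parallel reads 0
```

A larger MR ratio widens the gap between the two states and hence the read margin of the sense amplifier; this is why the embodiments later emphasise keeping the tunneling barrier free of process damage that would degrade the MR ratio.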
Those skilled in the art will understand that the operation and configuration of memory cell 100 are well known in the art. Further details regarding such memory cells are given, for example, in [1], which is incorporated herein by reference in its entirety. Referring to FIGS. 2(a)-(c), a conventional MTJ storage element is generally formed by first patterning the bottom pinned layer, then depositing the tunneling barrier/free layer/top electrode stack in a single damascene process, and performing a chemical mechanical polishing (CMP) step. For example, as shown in FIG. 3, a conventional MTJ storage element is generally formed by depositing a stack of MTJ and hard mask layers, using physical vapor deposition (PVD), on the top metal layer (e.g., M3) of a metal stack (e.g., interconnect 40). The stack of MTJ and hard mask layers typically comprises a bottom electrode layer 50 (which may, for example, be made of tantalum), a pinned layer 60, a tunneling barrier layer 90, a free layer 100, and a hard mask or top electrode layer 110 (which may, for example, be made of Ta/TaN or Ti/TiN). In the conventional method, the first step typically comprises depositing the bottom electrode layer 50 (e.g., Ta), the pinned layer 60, the tunneling barrier 90, the free layer 100 and the hard mask layer (Ta/TaN or Ti/TiN). The pinned layer 60 can include one or more layers or films (e.g., a pinned layer stack). The MTJ stack is then subjected to a magnetic annealing process in vacuum. Then, a pattern is applied to the MTJ stack using a lithography method. The patterned cell size may be larger than the final size. Each layer described above can be composed of one or more layers or films. Next, the MTJ stack is etched. The etching process includes trimming the resist size and patterning the hard mask, removing the resist, etching the free layer 100, and etching the pinned layer 60 and the bottom electrode layer 50. Then, the MTJ stack is cleaned. The cleaning process is usually compatible with low-k and MTJ cleaning.
Next, a passivation layer is deposited to protect the MTJ storage element and the inter-layer dielectric (ILD) 70. A combination stack may be required, with a low deposition temperature, to protect the MTJ and promote adhesion between the MTJ and the ILD. Finally, the MTJ and ILD are polished using less aggressive chemical mechanical polishing (CMP) to prevent delamination. As shown in FIG. 3, a conventional STT-MRAM bit cell formed by the conventional method includes a substrate 10, a word line 20, and a contact 30 to VSS (not shown). The bottom electrode layer 50 is formed on the top metal layer of the interconnect 40. The pinned layer 60, the tunneling barrier layer 90, the free layer 100 and the top electrode 110 are formed on the bottom electrode layer 50. The ILD layer 70 is formed across the MTJ cell.

[1] M. Hosomi et al., "A Novel Nonvolatile Memory with Spin Transfer Torque Magnetoresistive Magnetization Switching: Spin-RAM", Proceedings of IEDM Conference, 2005.

An exemplary embodiment is directed to a spin transfer torque magnetoresistive random access memory (STT-MRAM) cell and a method of manufacturing the same. In particular, embodiments relate to magnetic tunnel junction (MTJ) storage elements of STT-MRAM cells and methods of manufacturing the same. For example, the illustrative embodiment is directed to a memory device having a magnetic tunnel junction (MTJ) storage element, the MTJ storage element comprising a bottom electrode, a pinned layer adjacent to the bottom electrode, a dielectric layer encapsulating the bottom electrode and a portion of the pinned layer, the dielectric layer including sidewalls defining a hole adjacent to the portion of the pinned layer, a tunneling barrier adjacent to the pinned layer, a free layer adjacent to the tunneling barrier, and a top electrode adjacent to the free layer.
The width of the bottom electrode and/or pinned layer in the first direction is greater than the width of the contact region between the pinned layer and the tunneling barrier in the first direction. Another exemplary embodiment is directed to a method of manufacturing a memory device having a magnetic tunnel junction (MTJ) storage element, the method comprising: forming a bottom electrode on a substrate; forming a pinned layer on the bottom electrode; depositing a dielectric layer on the bottom electrode and the pinned layer; patterning and etching a hole with sidewalls in the dielectric layer down to the pinned layer; depositing a tunneling barrier layer in a first portion of the hole to form a tunneling barrier on the pinned layer; depositing a free layer in a second portion of the hole such that the free layer is on the tunneling barrier; and depositing a top layer on the free layer. An exemplary embodiment is directed to a memory device having a magnetic tunnel junction (MTJ) storage element, the MTJ storage element comprising: bottom conductive means for electrically connecting the MTJ storage element; first magnetic means, adjacent to the bottom conductive means, for retaining a first polarization; first insulating means encapsulating the bottom conductive means and a portion of the first magnetic means, and including a sidewall defining a hole adjacent to a portion of the first magnetic means; second magnetic means for holding a second polarization which is reversible; second insulating means, separating the first magnetic means and the second magnetic means, for conducting a tunneling current between the first magnetic means and the second magnetic means; and top conducting means, adjacent to the second magnetic means, for electrically connecting the MTJ storage element. The width of the bottom conducting means and/or the first magnetic means in the first direction is greater than the width of the contact area between the first magnetic means and the second insulating means in
the first direction. Another exemplary embodiment comprises a method of fabricating a memory device having a magnetic tunnel junction (MTJ) storage element, the method comprising: forming a bottom electrode on a substrate; forming a pinned layer on the bottom electrode; depositing a dielectric layer on the bottom electrode and the pinned layer; patterning and etching a hole with sidewalls in the dielectric layer down to the pinned layer; depositing a tunneling barrier layer in a first portion of the hole to form a tunneling barrier over the pinned layer; depositing a free layer within a second portion of the hole such that the free layer is above the tunneling barrier; and depositing a top layer over the free layer. The accompanying drawings assist in the description of the embodiments and are provided solely for the purpose of illustrating the embodiments; they are not intended to limit the embodiments. FIG. 1 illustrates a conventional spin transfer torque magnetoresistive random access memory (STT-MRAM) cell array. FIG. 2(a) is a cross-sectional view of a conventional STT-MRAM cell; FIG. 2(b) is an enlarged view of a portion of the conventional STT-MRAM cell of FIG. 2(a); and FIG. 2(c) is an enlarged view of the conventional MTJ cell of FIG. 2(a). FIG. 3 is a schematic cross-sectional view of a conventional STT-MRAM bit cell. FIGS. 4-7 are schematic cross-sectional views of an STT-MRAM bit cell at various manufacturing steps. FIG. 8 is a schematic cross-sectional view of an STT-MRAM bit cell. FIG. 9 is a flow chart illustrating an exemplary method of manufacturing an STT-MRAM bit cell. FIG.
10 is a schematic cross-sectional view of an MTJ storage element of an STT-MRAM bit cell. FIGS. 11-13 are schematic cross-sectional views of an MTJ storage element of an STT-MRAM bit cell. Aspects of embodiments of the present invention are disclosed in the following description and accompanying drawings directed to specific embodiments of the present invention. Alternate embodiments may be created without departing from the scope of the present invention. Additionally, well known elements of the embodiments are not described in detail, or are omitted, so as not to obscure the details of the embodiments of the present invention. In the present application, the expression "as an example" is used in the sense of "serving as an example, instance or illustration". The embodiments described herein by way of example are not necessarily to be construed as preferred or advantageous over other embodiments. Similarly, the phrase "embodiment" does not require that all embodiments of the present invention include the described feature, advantage, or mode of operation. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present invention. In the present application, unless otherwise specified, the singular form also includes the plural form.
Furthermore, in the present application, the terms "comprising", "having" and "including" specify the presence of the described features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of other features, integers, steps, operations, elements, components and/or sets thereof. The disclosed embodiments recognize that conventional methods can make it difficult to control the etch stop at the bottom electrode of the MTJ. Also, with an incomplete post-etch process, polymer residue may remain on the MTJ sidewalls, a portion of which may be conductive and form leakage paths, thereby reducing the MR (magnetoresistance) ratio. In addition, the barrier oxide layer near the MTJ sidewalls may be affected by the process flow (i.e., ashing and cleaning processes), resulting in a thicker tunneling barrier near the MTJ sidewalls. The effect of thicker tunneling barriers is more pronounced in scaled-down features. The exemplary embodiments can advantageously reduce the number of masks used in the manufacturing process. For example, two photomasks can be used instead of three. Also, according to the present embodiment, heavy metal etching processes for critical dimensions, such as those at the interfaces between the pinned layer, tunneling barrier and free layer, are not necessary. Furthermore, leakage paths induced by polymer residue on the sidewalls can be reduced or eliminated. Thus, according to the present embodiment, the tunneling barrier of the MTJ is not exposed to the ashing and cleaning processes.
Furthermore, this embodiment can provide a large bottom pinned layer compared to the conventional method, and can minimize the influence of the stray field of the bottom pinned layer on the top free layer. An exemplary embodiment of a method of manufacturing a spin transfer torque magnetoresistive random access memory (STT-MRAM) cell and an embodiment of an STT-MRAM cell will now be described with reference to FIGS. FIG. 4 shows a schematic cross-sectional view of a partial STT-MRAM bit cell formed in accordance with an illustrative embodiment. The STT-MRAM bit cell comprises a substrate 10, a word line 20, a contact 30 to Vss (not shown), and an interconnect 40. The interconnect 40 comprises, for example, metal layers M1, M2 and M3 (e.g., Cu or W) connected together in series by via interconnects V1, V2 and V3. A dielectric (e.g., an oxide layer) is filled around the layers of the interconnect 40. The top metal layer M3 of the interconnect 40 is polished using, for example, a chemical mechanical polishing (CMP) method. Those skilled in the art will recognize that any level of metal layer or via can be polished to form an MTJ storage element thereon. As shown in FIG. 5, an exemplary embodiment may form the MTJ bottom electrode, for example, by depositing a bottom electrode layer 150 (e.g., Ta) and a pinned layer 160 on top of the polished top metal layer M3 of the interconnect 40. The pinned layer 160 may comprise a stack (i.e., multiple layers). Next, the bottom electrode layer 150 and the pinned layer 160 are exposed to a magnetic annealing process in vacuum. Then, a pattern is applied to the MTJ electrode using a lithography method. The bottom electrode layer 150 and the pinned layer 160 are then etched down to the oxide layer and cleaned to form individual bottom electrodes as shown in FIG. The bottom electrode layer 150 and the pinned layer 160 are shown as being offset from the interconnect 40. However, other arrangements can be provided.
For example, the bottom electrode layer 150 and pinned layer 160 may be aligned with the interconnect 40. The sizes of the bottom electrode layer 150, pinned layer 160 and interconnect 40 are not limited to the illustrated configuration. For example, the bottom electrode layer 150 and pinned layer 160 may be larger than, smaller than, or the same size as the interconnect 40.

According to an exemplary embodiment, lithography and etching methods are not applied to form the critical dimensions of the MTJ storage element. That is, by not exposing the interfaces of the tunneling barrier 190 (see, e.g., FIG. 7) with the pinned layer 160 (see, e.g., FIG. 5) and the free layer 200 (see, e.g., FIG. 7) to etching or cleaning, some of the above-mentioned problems can be avoided.

Next, an interlayer dielectric (ILD) 70 is deposited on the bottom electrode layer 150 and pinned layer 160, and a hole 180 is patterned and etched in the ILD 70 down to the pinned layer 160, as shown in FIG. 6. Referring to FIG. 10, the dimension X1 of the bottom electrode layer 150 and pinned layer 160 may be larger than the dimension X2 of the contact area between the pinned layer 160 and the tunneling barrier 190, so that the tolerance for patterning and etching the hole 180 into the ILD 70 can be increased. The ILD 70 may be the same as or different from the dielectric filled around the interconnect 40.

FIG. 7 shows forming the tunneling barrier 190, the free layer 200 and the top electrode 210 over the ILD 70 and the hole 180. In particular, as shown in FIG. 7, the tunneling barrier 190 is formed on the ILD 70 and in the hole 180 such that a portion of the tunneling barrier 190 is disposed on the sidewalls of the hole 180 and perpendicular to the bottom electrode (e.g., the bottom electrode layer 150 and pinned layer 160). Then, the free layer 200 is formed on the tunneling barrier 190 so that a portion of the free layer 200 is also perpendicular to the bottom electrode layer 150 and the pinned layer 160.
By forming the top electrode 210 on at least a portion of the free layer 200 located in the hole 180, at least the remaining portion of the hole 180 is filled. As shown in FIG. 7, the top electrode 210 can be formed on the entire free layer 200.

Next, in the exemplary method, portions of the tunneling barrier 190, the free layer 200 and the top electrode 210 located above the hole 180 are removed, for example, by polishing (e.g., chemical mechanical polishing (CMP)). As shown in FIG. 8, an STT-MRAM bit cell with an MTJ storage element is thereby formed.

The exemplary embodiments can advantageously reduce the number of photomasks used during the process. For example, two photomasks can be used instead of three. Also, according to the present embodiments, a heavy metal etching process at the critical dimension is not necessary. Furthermore, the leakage paths induced by polymer residue on the sidewalls can be reduced or eliminated. Furthermore, according to the present embodiments, the tunneling barrier of the MTJ is not exposed to ashing and cleaning processes. Furthermore, the present embodiments can provide a larger bottom pinned layer compared to the prior art, and can minimize the influence of the stray field of the bottom pinned layer on the top free layer.

FIG. 9 is a flowchart illustrating an exemplary method of manufacturing an STT-MRAM bit cell according to one embodiment. The method comprises depositing a bottom electrode layer and a pinned layer on the metal layer (e.g., 910), and patterning and etching the bottom electrode layer and the pinned layer to form a bottom electrode of the MTJ storage element (e.g., 920). Next, the method comprises depositing a dielectric layer on the bottom electrode layer and the pinned layer (e.g., 930), and patterning and etching a hole in the dielectric layer down to the pinned layer (e.g., 940).
The method further includes depositing a tunneling barrier, a free layer and a top electrode over the hole, with a portion of each of the tunneling barrier and the free layer disposed along the sidewall of the hole and perpendicular to the bottom electrode layer and the pinned layer (e.g., 950). In addition, the method includes removing (e.g., 960) the portions of the tunneling barrier, free layer and top electrode located above the hole opening.

According to the exemplary method, an isolated MTJ storage element can be provided. As mentioned above, the exemplary embodiments can advantageously reduce the number of photomasks used in the process. For example, two photomasks can be used instead of three. Also, according to the exemplary embodiments, heavy metal etching at critical dimensions is not required. Furthermore, the leakage paths induced by polymer residue on the sidewalls can be reduced or eliminated. Furthermore, according to the present embodiments, the tunneling barrier of the MTJ is not exposed to ashing and cleaning processes. Furthermore, the present embodiments can provide a larger bottom pinned layer compared to the prior art, and can minimize the influence of the stray field of the bottom pinned layer on the top free layer.

For example, as shown in FIG. 10, one embodiment of a magnetic tunnel junction (MTJ) storage element includes a bottom electrode layer 150 and a pinned layer 160 adjacent to (e.g., above or on) the bottom electrode layer 150. A dielectric layer 70 encapsulates the bottom electrode layer 150 and a portion of the pinned layer 160. The dielectric layer 70 includes sidewalls that define a hole 180 (see, e.g., FIG. 6) adjacent to (e.g., above or exposing) a portion of the pinned layer 160. The tunneling barrier 190 is adjacent to (e.g., above or on) the pinned layer 160. The free layer 200 is adjacent to (e.g., above or on) the tunneling barrier 190.
The top electrode 210 is adjacent to (e.g., above or on) the free layer 200.

As shown in the embodiment of FIG. 10, the dimension X1 of the bottom electrode layer 150 and/or the pinned layer 160 can be larger than the dimension X2 of the contact area between the pinned layer 160 and the tunneling barrier 190, so that the tolerance for patterning and etching the hole 180 in the ILD 70 to receive the tunneling barrier 190, free layer 200 and top electrode 210 can be increased. Also, as shown in FIG. 10, a portion of each of the tunneling barrier 190 and the free layer 200 is disposed along the sidewalls of the hole 180 and perpendicular to the bottom electrode layer 150 and the pinned layer 160. The top electrode 210 fills a portion of the hole 180 adjacent to (e.g., above or on) the free layer 200.

Those skilled in the art will recognize that, in other embodiments, the dimensions of the bottom electrode layer 150 and/or pinned layer 160 may be the same as or smaller than those of the tunneling barrier 190, as shown in FIG. 11. As shown in FIG. 11, a portion of each of the tunneling barrier 190 and the free layer 200 is disposed along the sidewalls of the hole 180 and perpendicular to the bottom electrode layer 150 and the pinned layer 160. The top electrode 210 fills a portion of the hole 180 adjacent to (e.g., above or on) the free layer 200.

In contrast, in a conventional MTJ storage element and its manufacturing method, the bottom electrode layer 50, the pinned layer 60, the tunneling barrier layer 90, the free layer 100 and the top electrode 110 are exposed to patterning and etching, as shown in FIG., and each of the bottom electrode layer 50, the pinned layer 60, the tunneling barrier layer 90, the free layer 100 and the top electrode 110 has the same dimension X0. Also, in a conventional MTJ storage element, the barrier oxide layer near the MTJ sidewalls can be affected by the process flow (i.e., the ashing and cleaning processes), and, as shown in FIG., thicker tunneling barriers 90 occur near the MTJ sidewalls.
The effects of thicker tunneling barriers 90 are more noticeable in scaled-down features.

According to the exemplary method, an isolated MTJ storage element can be provided. As mentioned above, the exemplary embodiments can advantageously reduce the number of photomasks used in the process. For example, two photomasks can be used instead of three. Also, according to the exemplary embodiments, heavy metal etch processes at critical dimensions are not required. Furthermore, the leakage paths induced by polymer residue on the sidewalls can be reduced or eliminated. Thus, according to the present embodiments, the tunneling barrier of the MTJ is not exposed to the ashing and cleaning processes, which reduces or prevents thickening of the tunneling barrier near the sidewalls of the MTJ. Furthermore, the present embodiments can provide a larger bottom pinned layer compared to the prior art, and can minimize the influence of the stray field of the bottom pinned layer on the top free layer.

It should be understood that the foregoing may be included in a mobile data unit, such as a mobile phone, a portable computer, a portable personal communication system (PCS) unit or a personal data assistant (PDA), in a GPS-enabled device, or in a fixed-location data unit, such as a navigation device, a set-top box, a music player, a video player, an entertainment unit, measurement equipment, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Thus, embodiments of the present disclosure may be suitably employed in devices that include active integrated circuits having a memory with an MTJ storage element as disclosed herein.

The devices and methods described above can be designed and represented in GDSII and GERBER computer files stored on computer-readable media. These files are provided to manufacturers that fabricate devices based on these files.
The resulting product is a semiconductor wafer, which is cut into semiconductor dies and packaged into semiconductor chips. The chips are then employed in the devices described above.

Thus, embodiments may include, or be provided by, a machine-readable medium or computer-readable medium embodying instructions that, when executed by a processor, transform the processor and other cooperating elements to implement the functionality described herein.

While the above disclosure is directed to exemplary embodiments, it should be noted that various changes and modifications can be made without departing from the scope of the present invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments described herein need not be performed in any particular order. Furthermore, although elements of the embodiments may be described and claimed in the singular, the plural is also contemplated, unless limitation to the singular is explicitly stated.

Reference Signs List
10 substrate
20 word line
30 contact
40 interconnect
70 interlayer dielectric (ILD)
150 bottom electrode layer
160 pinned layer
190 tunneling barrier
200 free layer
210 top electrode
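The FIG. 9 fabrication sequence described above can be summarized as an ordered flow. The sketch below is illustrative only: the step numerals 910-960 come from the flowchart, while the list structure and the `run_flow` helper are assumptions for illustration and are not part of the disclosed method.

```python
# Illustrative sketch of the FIG. 9 STT-MRAM fabrication flow.
# Step numerals (910-960) are the flowchart's reference numerals;
# the data structure and helper function are assumptions.

FIG9_FLOW = [
    (910, "deposit bottom electrode layer and pinned layer on the metal layer"),
    (920, "pattern and etch bottom electrode layer and pinned layer to form the MTJ bottom electrode"),
    (930, "deposit a dielectric layer (ILD) on the bottom electrode layer and pinned layer"),
    (940, "pattern and etch a hole in the dielectric layer down to the pinned layer"),
    (950, "deposit tunneling barrier, free layer and top electrode over the hole"),
    (960, "remove (e.g., by CMP) the portions located above the hole opening"),
]

def run_flow(flow):
    """Return the step numerals in the order they would be executed."""
    return [step for step, _ in flow]

print(run_flow(FIG9_FLOW))  # [910, 920, 930, 940, 950, 960]
```

Listing the steps this way makes the two-photomask claim concrete: only steps 920 and 940 involve lithographic patterning.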
A microelectronic package may be fabricated with debug access ports formed either at a side or at a bottom of the microelectronic package. In one embodiment, the debug access ports may be formed within an encapsulation material proximate the microelectronic package side. In another embodiment, the debug access ports may be formed in a microelectronic interposer of the microelectronic package proximate the microelectronic package side. In a further embodiment, the debug access ports may be formed at the microelectronic package bottom and may include a solder contact.
CLAIMS
What is claimed is:
1. A microelectronic package, comprising: a microelectronic interposer having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface; at least one microelectronic device attached to the microelectronic interposer first surface; an encapsulation material disposed over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side; and at least one debug access port formed proximate at least one of the microelectronic package side and the microelectronic interposer second surface, wherein the debug access port is electrically connected to the at least one microelectronic device.
2. The microelectronic package of claim 1, wherein the at least one debug access port is formed at the microelectronic interposer first surface.
3. The microelectronic package of claim 2, wherein the at least one debug access port comprises a debug trace formed on or in the microelectronic interposer first surface and a solder bump formed on the debug trace.
4. The microelectronic package of claim 1, wherein the at least one debug access port comprises at least one debug trace formed within the microelectronic interposer.
5. The microelectronic package of claim 4, wherein the at least one debug trace comprises a plurality of debug traces in a stacked configuration relative to the microelectronic interposer first surface and the microelectronic interposer second surface.
6. The microelectronic package of claim 4, wherein the at least one debug access port comprises at least one probe contact proximate the microelectronic interposer side and electrically connected to the at least one debug trace.
7.
The microelectronic package of claim 1, wherein the at least one debug access port comprises at least one debug trace formed in or on the microelectronic interposer second surface.
8. The microelectronic package of claim 7, wherein the at least one debug access port further includes at least one solder bump formed on the at least one debug trace.
9. A method of fabricating a microelectronic package, comprising: forming a microelectronic interposer having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface; attaching at least one microelectronic device to the microelectronic interposer first surface; disposing an encapsulation material over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side; and forming at least one debug access port proximate at least one of the microelectronic package side and the microelectronic interposer second surface, wherein the debug access port is electrically connected to the at least one microelectronic device.
10. The method of claim 9, wherein forming the at least one debug access port comprises forming the at least one debug access port at the microelectronic interposer first surface.
11. The method of claim 10, wherein forming the at least one debug access port comprises forming a debug trace on or in the microelectronic interposer first surface and forming a solder bump on the debug trace.
12.
The method of claim 11, wherein forming the debug trace on or in the microelectronic interposer first surface and forming the solder bump on the debug trace further comprises: forming a portion of the debug trace and the solder bump within a dicing street; and forming the microelectronic package side by cutting through the encapsulation material and the microelectronic interposer within the dicing street, which removes the portion of the debug trace and the solder bump within the dicing street.
13. The method of claim 9, wherein forming the at least one debug access port comprises forming at least one debug trace within the microelectronic interposer.
14. The method of claim 13, wherein forming the at least one debug trace comprises forming a plurality of debug traces in a stacked configuration relative to the microelectronic interposer first surface and the microelectronic interposer second surface.
15. The method of claim 13, wherein forming the at least one debug access port comprises forming at least one probe contact proximate the microelectronic interposer side and electrically connected to the at least one debug trace.
16. The method of claim 9, wherein forming the at least one debug access port comprises forming at least one debug trace in or on the microelectronic interposer second surface.
17. The method of claim 16, wherein forming the at least one debug access port further includes forming at least one solder bump on the at least one debug trace.
18.
An electronic system, comprising: a microelectronic substrate; and a microelectronic package attached to the microelectronic substrate, wherein the microelectronic package comprises: a microelectronic interposer having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface; at least one microelectronic device attached to the microelectronic interposer first surface; an encapsulation material disposed over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side; and at least one debug access port formed proximate at least one of the microelectronic package side and the microelectronic interposer second surface, wherein the debug access port is electrically connected to the at least one microelectronic device.
19. The electronic system of claim 18, wherein the at least one debug access port is formed at the microelectronic interposer first surface.
20. The electronic system of claim 19, wherein the at least one debug access port comprises a debug trace formed on or in the microelectronic interposer first surface and a solder bump formed on the debug trace.
21. The electronic system of claim 18, wherein the at least one debug access port comprises at least one debug trace formed within the microelectronic interposer.
22. The electronic system of claim 21, wherein the at least one debug trace comprises a plurality of debug traces in a stacked configuration relative to the microelectronic interposer first surface and the microelectronic interposer second surface.
23.
The electronic system of claim 21, wherein the at least one debug access port comprises at least one probe contact proximate the microelectronic interposer side and electrically connected to the at least one debug trace.
24. The electronic system of claim 18, wherein the at least one debug access port comprises at least one debug trace formed in or on the microelectronic interposer second surface.
25. The electronic system of claim 24, wherein the at least one debug access port further includes at least one solder bump formed on the at least one debug trace.
MICROELECTRONIC PACKAGE DEBUG ACCESS PORTS AND METHODS OF FABRICATING THE SAME

TECHNICAL FIELD
Embodiments of the present description generally relate to the field of fabricating microelectronic packages, and, more particularly, to debug access ports formed in or on the microelectronic package.

BACKGROUND
The microelectronic industry is continually striving to produce ever faster and smaller microelectronic packages for use in various electronic products, including, but not limited to, computer server products and portable products, such as portable computers, electronic tablets, cellular phones, digital cameras, and the like. One way to achieve these goals is to fabricate System-In-Package (SIP) microelectronic packages, wherein an entire electronic system is formed in a single microelectronic package, which may include processors, application specific integrated circuit (ASIC) devices, volatile memory, non-volatile memory, power systems, wireless communication devices, and the like. Such SIP microelectronic packages are generally attached to a microelectronic substrate, such as a motherboard, with interconnects, such as solder balls, in a flip-chip configuration. As the microelectronic devices within the microelectronic package are fully encapsulated, there is no way to access internal circuitry within the microelectronic devices for debugging purposes except through the interconnects. However, once the microelectronic package is attached to the motherboard, the interconnects are no longer accessible for debugging purposes. One option for debugging would be to fabricate probe points on the microelectronic substrate, such as a motherboard. This would be undesirable for various reasons, including taking up valuable space on the microelectronic substrate, thereby hampering the drive to reduce the size of electronic products.
Another option for debugging would be to remove or desolder the microelectronic package from the motherboard and test the failed microelectronic package on a dedicated debug board. However, three issues arise with desoldering. First, initial debug requires the preservation of the electrical state of the microelectronic package, which will be lost through desoldering; thus, valuable data is lost. Second, the microelectronic packages have a limited number of attachment, desoldering, and reworking processes that they can go through before becoming non-functional, as will be understood to those skilled in the art. Third, debugging sometimes needs to be done in the field at a customer's site, where desoldering is not possible. Therefore, it is important to develop ways to debug a microelectronic package without requiring probe points on the microelectronic substrate and without requiring the removal of the microelectronic package from the microelectronic substrate.

BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter of the present disclosure is particularly pointed out and distinctly claimed in the concluding portion of the specification. The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. It is understood that the accompanying drawings depict only several embodiments in accordance with the present disclosure and are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings, such that the advantages of the present disclosure can be more readily ascertained, in which:
FIG. 1 illustrates a cross-sectional view of a microelectronic package, according to an embodiment of the present description.
FIG.
2 illustrates an oblique view of a microelectronic package having debug access ports, according to embodiments of the present description.
FIG. 3 illustrates a top view of adjacent microelectronic packages prior to dicing, according to embodiments of the present description.
FIG. 4 illustrates a side view along line 4-4 of FIG. 3 after dicing, according to an embodiment of the present description.
FIG. 5 is a flow chart of a process of fabricating a debug access port of a microelectronic package, according to the present description.
FIG. 6 illustrates a computing device in accordance with one implementation of the present description.

DESCRIPTION OF EMBODIMENTS
In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the claimed subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the subject matter. It is to be understood that the various embodiments, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein, in connection with one embodiment, may be implemented within other embodiments without departing from the spirit and scope of the claimed subject matter. References within this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present description. Therefore, the use of the phrase "one embodiment" or "in an embodiment" does not necessarily refer to the same embodiment. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the claimed subject matter.
The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the subject matter is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the appended claims are entitled. In the drawings, like numerals refer to the same or similar elements or functionality throughout the several views, and the elements depicted therein are not necessarily to scale with one another; rather, individual elements may be enlarged or reduced in order to more easily comprehend the elements in the context of the present description.

The terms "over", "to", "between" and "on" as used herein may refer to a relative position of one layer with respect to other layers. One layer "over" or "on" another layer or bonded "to" another layer may be directly in contact with the other layer or may have one or more intervening layers. One layer "between" layers may be directly in contact with the layers or may have one or more intervening layers.

Embodiments of the present description include a microelectronic package fabricated with debug access ports formed either at a side or at a bottom of the microelectronic package. In one embodiment, the debug access ports may be formed within an encapsulation material proximate the microelectronic package side. In another embodiment, the debug access ports may be formed in a microelectronic interposer of the microelectronic package proximate the microelectronic package side. In a further embodiment, the debug access ports may be formed at the microelectronic package bottom and may include a solder contact.

In the production of microelectronic packages, microelectronic devices are generally mounted on microelectronic substrates, such as interposers, which provide electrical communication routes between the microelectronic devices within the microelectronic package and/or with external components.
These microelectronic packages are, in turn, attached to a microelectronic substrate, such as a motherboard.

As shown in FIG. 1, a microelectronic package 100 may comprise at least one microelectronic device 110, such as a microprocessor, a chipset, a graphics device, a wireless device, a memory device, an application specific integrated circuit, combinations thereof, or the like, attached to a first surface 122 of a microelectronic interposer 120 through a plurality of solder interconnects 142 in a configuration generally known as a flip-chip or controlled collapse chip connection ("C4") configuration. The device-to-interposer solder interconnects 142 may extend between interconnection pads 114 on an active surface 112 of the microelectronic device 110 and interconnection pads 124 on the microelectronic interposer first surface 122. The microelectronic device interconnection pads 114 may be in electrical communication with integrated circuitry 118 (shown generically as a dashed box) within the microelectronic device 110. The microelectronic interposer 120 may include at least one conductive trace 126 extending therethrough, forming a conductive path from the microelectronic device 110 to at least one microelectronic package interconnection pad 128 on or proximate a second surface 132 of the microelectronic interposer 120. The microelectronic interposer 120 may reroute a fine pitch (center-to-center distance between the microelectronic device interconnection pads 114) of the microelectronic device interconnection pads 114 to a relatively wider pitch of the microelectronic package interconnection pads 128.

It is understood that although FIG.
1 illustrates the microelectronic device 110 being connected to the microelectronic interposer 120 with the device-to-interposer solder interconnects 142 in a flip-chip technique, the embodiments of the present description are not so limited, as the microelectronic device 110 may also be connected to the microelectronic interposer 120 by any known electrical structure, including, but not limited to, lead frames, bond wires, and the like.

As further shown in FIG. 1, the microelectronic device 110 may be encapsulated with an encapsulation material 150, such as an epoxy. The encapsulation material 150 may also encapsulate the microelectronic interposer first surface 122 and extend to at least one side 134 of the microelectronic interposer 120 to form an encapsulation material side 152 that may be substantially planar to the microelectronic interposer side 134. The microelectronic interposer side 134 and the encapsulation material side 152 comprise a side 160 of the microelectronic package 100. The microelectronic interposer second surface 132 may be proximate an attachment surface 170 of the microelectronic package 100.

As shown in FIG. 1, the microelectronic interposer conductive traces 126 may include at least one debug trace 210, wherein the debug trace 210 may form a conductive route from the microelectronic device 110 to the microelectronic package side 160 (shown on the left hand side of the figure) and/or a conductive route from the microelectronic device 110 to the microelectronic package attachment surface 170 (shown on the right hand side of the figure).

The microelectronic package 100 may be attached to a microelectronic substrate 180, such as a printed circuit board, a motherboard, and the like, through a plurality of solder interconnects 144.
The package-to-substrate solder interconnects 144 may extend between the microelectronic package interconnection pads 128 and substantially mirror-image interconnection pads 182 on an attachment surface 184 of the microelectronic substrate 180. The microelectronic substrate interconnection pads 182 may be in electrical communication with conductive routes (shown as dashed lines 186) within the microelectronic substrate 180. The microelectronic substrate conductive routes 186 may provide electrical communication routes to external components (not shown).

Both the microelectronic interposer 120 and the microelectronic substrate 180 may be primarily composed of any appropriate material, including, but not limited to, bismaleimide triazine resin, fire retardant grade 4 material, polyimide materials, liquid crystal polymer, polybenzoxazole, epoxy resin, silica-filled epoxy, glass reinforced epoxy matrix material, and the like, as well as laminates or multiple layers thereof. The microelectronic interposer conductive traces 126, including the debug traces 210, and the microelectronic substrate conductive routes 186 may be composed of any conductive material, including, but not limited to, metals, such as copper, aluminum, gold, silver, nickel, alloys thereof, and the like. The fabrication processes for the microelectronic interposer 120 and the microelectronic substrate 180 are well known in the art and, for the sake of brevity and conciseness, will not be discussed or further illustrated herein.

The device-to-interposer solder interconnects 142 and the package-to-substrate solder interconnects 144 can be made of any appropriate solder material, including, but not limited to, lead/tin alloys, such as 63% tin / 37% lead solder, and high tin content alloys (e.g., 90% or more tin), such as tin/bismuth, eutectic tin/silver, ternary tin/silver/copper, eutectic tin/copper, and similar alloys.
The solder may be reflowed, by heat, pressure, and/or sonic energy, to secure the solder between the respective interconnection pads, as will be understood to those skilled in the art.

FIG. 2 illustrates various configurations of debug access ports, grouped as types A, B, and C. In one embodiment, a debug access port A may comprise the debug trace 210 formed on or in the microelectronic interposer second surface 132. The debug trace 210 may include a contact pad 218, which may be larger than the debug trace 210 to have an appropriate dimension to contact a debug probe (not shown), as will be understood to those skilled in the art. In a further embodiment, a solder bump 222 may be formed on the debug trace 210, such as on the contact pad 218 of the debug trace 210, as shown. As will be understood to those skilled in the art, the solder bump 222 can be latched onto with a specifically designed external debug probe (not shown).

In another embodiment shown in FIG. 2, a debug access port B may comprise the debug trace 210 formed within the microelectronic interposer 120. The debug trace 210 may simply terminate at the microelectronic interposer side 134, wherein the debug trace 210 may be contacted by an external debug probe (not shown). In a further embodiment, as previously discussed, the microelectronic interposer 120 may be formed in layers; thus, there may be a plurality of debug traces 210 in a stacked configuration relative to the microelectronic interposer first surface 122 and the microelectronic interposer second surface 132. In still a further embodiment, the debug access port B may further include a probe contact 216 that may be formed at the microelectronic interposer side 134 and connected to the debug trace 210 (shown in shadow lines). The probe contact 216 may be any known microelectronic interposer structure, such as a blind via, a buried via, or a plated through hole.
The probe contact 216 may be larger than the debug trace 210 to enable easier contact with an external debug probe (not shown).

In another embodiment shown in FIGs. 2-4, a debug access port C may comprise the debug trace 210 formed in or on the microelectronic interposer first surface 122. The debug trace 210 may simply terminate at the microelectronic interposer side 134, wherein the debug trace 210 may be contacted by an external debug probe (not shown). In another embodiment, a solder ball or bump 212 may be formed on the debug trace 210. As will be understood to those skilled in the art, the microelectronic package 100 may be formed as one of a plurality of packages (not shown) on a large microelectronic interposer (not shown), wherein individual microelectronic packages 100 are singulated from other packages by cutting material (such as with a wafer saw or with laser ablation) between the packages in an area known as a dicing street 240 (see FIG. 3). As shown in FIG. 3, which is a top plan view of the microelectronic package 100 of FIG. 2 (the encapsulation material 150 of FIG. 2 is not shown for clarity), the debug trace 210 may include an enlarged landing portion 214 to which the solder bump 212 is attached. In one embodiment, a portion of the debug trace 210 and the solder bump 212 may be positioned such that half of the solder bump 212 extends into the dicing street 240; thus, a portion of the debug trace 210 is removed and the solder bump 212 is substantially cut in half during package singulation, which will maximize the surface area of the solder bump 212 at the microelectronic package side 160 (see FIG. 2), as shown in FIG. 4, wherein FIG. 4 illustrates the debug access port C along line 4-4 of FIG. 3 after singulation. As shown in FIG. 4, the solder bump 212 may extend into the encapsulation material 150. It is noted that, as shown in FIG.
4, a solder resist material 242 may be patterned on the microelectronic interposer first surface 122 and the debug trace landing portion 214 for the formation of the solder bump 212, as will be understood to those skilled in the art.

FIG. 5 is a flow chart of a process 300 of fabricating a microelectronic package according to an embodiment of the present description. As set forth in block 302, a microelectronic interposer may be formed having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface. At least one microelectronic device may be attached to the microelectronic interposer first surface, as set forth in block 304. As set forth in block 306, an encapsulation material may be disposed over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side. At least one debug access port may be formed proximate at least one of the microelectronic package side and the microelectronic interposer second surface, wherein the debug access port is electrically connected to the at least one microelectronic device, as set forth in block 308.

FIG. 6 illustrates an electronic or computing device 400 in accordance with one implementation of the present description. The computing device 400 houses a board 402.
The board may include a number of microelectronic components, including but not limited to a processor 404, at least one communication chip 406A, 406B, volatile memory 408 (e.g., DRAM), non-volatile memory 410 (e.g., ROM), flash memory 412, a graphics processor or CPU 414, a digital signal processor (not shown), a crypto processor (not shown), a chipset 416, an antenna, a display (e.g., a touchscreen display), a touchscreen controller, a battery, an audio codec (not shown), a video codec (not shown), a power amplifier (AMP), a global positioning system (GPS) device, a compass, an accelerometer (not shown), a gyroscope (not shown), a speaker, a camera, and a mass storage device (not shown) (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the microelectronic components may be physically and electrically coupled to the board 402. In some implementations, at least one of the microelectronic components may be a part of the processor 404.

The communication chip enables wireless communications for the transfer of data to and from the computing device. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device may include a plurality of communication chips.
For instance, a first communication chip may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

At least one of the microelectronic components may comprise a microelectronic device within a microelectronic package, wherein the microelectronic package may comprise a microelectronic interposer having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface, wherein the microelectronic interposer second surface comprises a microelectronic package attachment surface; at least one microelectronic device attached to the microelectronic interposer first surface; an encapsulation material disposed over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side; and at least one debug access port formed proximate at least one of the microelectronic package side and the microelectronic package attachment surface, wherein the debug access port is electrically connected to the at least one microelectronic device.

In various implementations, the computing device may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an
entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device may be any other electronic device that processes data.

It is understood that the subject matter of the present description is not necessarily limited to specific applications illustrated in FIGs. 1-6. The subject matter may be applied to other microelectronic devices and assembly applications, as well as any appropriate electronic application, as will be understood to those skilled in the art.

The following examples pertain to further embodiments, wherein Example 1 is a microelectronic package, comprising a microelectronic interposer having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface; at least one microelectronic device attached to the microelectronic interposer first surface; an encapsulation material disposed over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side; and at least one debug access port formed proximate at least one of the microelectronic package side and the microelectronic interposer second surface, wherein the debug access port is electrically connected to the at least one microelectronic device.

In Example 2, the subject matter of Example 1 can optionally include the at least one debug access port being formed at the microelectronic interposer first surface.

In Example 3, the subject matter of Example 2 can optionally include the at least one debug access port comprising a debug trace formed on or in the microelectronic interposer first surface and a solder bump formed on the debug
trace.

In Example 4, the subject matter of Example 1 can optionally include the at least one debug access port comprising at least one debug trace formed within the microelectronic interposer.

In Example 5, the subject matter of Example 4 can optionally include the at least one debug trace comprising a plurality of debug traces in a stacked configuration relative to the microelectronic interposer first surface and the microelectronic interposer second surface.

In Example 6, the subject matter of Example 4 can optionally include the at least one debug access port comprising at least one probe contact proximate the microelectronic interposer side and electrically connected to the at least one debug trace.

In Example 7, the subject matter of Example 1 can optionally include the at least one debug access port comprising at least one debug trace formed in or on the microelectronic interposer second surface.

In Example 8, the subject matter of Example 7 can optionally include the at least one debug access port further including at least one solder bump formed on the at least one debug trace.

The following examples pertain to further embodiments, wherein Example 9 is a method of fabricating a microelectronic package, comprising forming a microelectronic interposer having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface; attaching at least one microelectronic device to the microelectronic interposer first surface; disposing an encapsulation material over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side; and forming at least one debug access port proximate at least one of the microelectronic
package side and the microelectronic interposer second surface, wherein the debug access port is electrically connected to the at least one microelectronic device.

In Example 10, the subject matter of Example 9 can optionally include forming the at least one debug access port comprising forming the at least one debug access port at the microelectronic interposer first surface.

In Example 11, the subject matter of Example 10 can optionally include forming the at least one debug access port comprising forming a debug trace on or in the microelectronic interposer first surface and forming a solder bump on the debug trace.

In Example 12, the subject matter of Example 11 can optionally include forming the debug trace on or in the microelectronic interposer first surface and forming the solder bump on the debug trace further comprising forming a portion of the debug trace and the solder bump within a dicing street, and forming the microelectronic package side by cutting through the encapsulation material and the microelectronic interposer within the dicing street, which removes the portion of the debug trace and the solder bump within the dicing street.

In Example 13, the subject matter of Example 9 can optionally include forming the at least one debug access port comprising forming at least one debug trace within the microelectronic interposer.

In Example 14, the subject matter of Example 13 can optionally include forming the at least one debug trace comprising forming a plurality of debug traces in a stacked configuration relative to the microelectronic interposer first surface and the microelectronic interposer second surface.

In Example 15, the subject matter of Example 13 can optionally include forming the at least one debug access port comprising forming at least one probe contact proximate the microelectronic interposer side and electrically connected to the at least one debug trace.

In Example 16, the subject matter of Example 9 can optionally include forming the at least one
debug access port comprising forming at least one debug trace in or on the microelectronic interposer second surface.

In Example 17, the subject matter of Example 16 can optionally include forming the at least one debug access port further including forming at least one solder bump on the at least one debug trace.

The following examples pertain to further embodiments, wherein Example 18 is an electronic system comprising a microelectronic substrate, and a microelectronic package attached to the microelectronic substrate, wherein the microelectronic package comprises a microelectronic interposer having a first surface, an opposing second surface, and at least one side extending between the first surface and the second surface; at least one microelectronic device attached to the microelectronic interposer first surface; an encapsulation material disposed over the at least one microelectronic device and the microelectronic interposer, wherein the encapsulation material includes at least one side which is substantially planar to the at least one microelectronic interposer side and wherein the at least one encapsulation material side and the at least one microelectronic interposer side comprise a microelectronic package side; and at least one debug access port formed proximate at least one of the microelectronic package side and the microelectronic interposer second surface, wherein the debug access port is electrically connected to the at least one microelectronic device.

In Example 19, the subject matter of Example 18 can optionally include the at least one debug access port being formed at the microelectronic interposer first surface.

In Example 20, the subject matter of Example 19 can optionally include the at least one debug access port comprising a debug trace formed on or in the microelectronic interposer first surface and a solder bump formed on the debug trace.

In Example 21, the subject matter of Example 18 can optionally include the at least one debug
access port comprising at least one debug trace formed within the microelectronic interposer.

In Example 22, the subject matter of Example 21 can optionally include the at least one debug trace comprising a plurality of debug traces in a stacked configuration relative to the microelectronic interposer first surface and the microelectronic interposer second surface.

In Example 23, the subject matter of Example 21 can optionally include the at least one debug access port comprising at least one probe contact proximate the microelectronic interposer side and electrically connected to the at least one debug trace.

In Example 24, the subject matter of Example 18 can optionally include the at least one debug access port comprising at least one debug trace formed in or on the microelectronic interposer second surface.

In Example 25, the subject matter of Example 24 can optionally include the at least one debug access port further including at least one solder bump formed on the at least one debug trace.

Having thus described in detail embodiments of the present description, it is understood that the present description defined by the appended claims is not to be limited by particular details set forth in the above description, as many apparent variations thereof are possible without departing from the spirit or scope thereof.
A system, device, and method for communicating between a host device and a plurality of peripheral devices, wherein the communications utilize a single interface that is supported by the host. The host includes a plurality of class drivers and miniport drivers. Each of the class drivers implements functionality associated with one or more of the plurality of peripheral devices. Each miniport driver provides an interface by which one or more of the class drivers communicate with one or more of the plurality of peripheral devices using class protocols, wherein the miniport drivers communicate through a single host interface supported by the host. An embedded controller interfaces with the plurality of peripheral devices using the respective native bus protocols of the peripheral devices, and the embedded controller interfaces with the plurality of miniport drivers using the single host interface.
CLAIMS

WHAT IS CLAIMED IS:

1. A system for communicating between a host and a plurality of peripheral devices, the system comprising:

a plurality of class drivers on the host, wherein each of the class drivers implements functionality associated with one or more of the plurality of peripheral devices;

a plurality of miniport drivers on the host, wherein each miniport driver provides an interface by which one or more of the class drivers communicate with one or more of the plurality of peripheral devices using class protocols, wherein the miniport drivers communicate through a single host interface supported by the host; and

an embedded controller that interfaces with the plurality of peripheral devices using the respective native bus protocols of the peripheral devices and wherein the embedded controller interfaces with the plurality of miniport drivers using the single host interface.

2. The system according to claim 1, further comprising:

a bus controller driver on the host, wherein the bus controller driver implements a first portion of the single host interface and wherein the bus controller driver interfaces with the miniport drivers using a selective subset of the respective native bus protocols of the one or more of the plurality of peripheral devices; and

a bus controller on the host, wherein the bus controller implements a second portion of the single host interface and wherein the bus controller interfaces with the embedded controller using the single host interface.

3. The system according to claim 2, wherein the first portion of the single host interface implemented by the bus controller driver implements the bus management processes needed to communicate using the single host interface.

4. The system according to claim 2, wherein the second portion of the single host interface implemented by the bus controller implements the bus transactions needed to communicate using the single host interface.

5.
The system according to claim 2, wherein the embedded controller comprises firmware that implements the single host interface and uses the single host interface to communicate with the bus controller.

6. The system according to claim 1, wherein information used by the host to interoperate with the plurality of peripheral devices is communicated between the host and the embedded controller using the single host interface and wherein the information used by the host to interoperate with the plurality of peripheral devices is communicated between the embedded controller and the plurality of peripheral devices using the respective native bus protocols of the plurality of peripheral devices.

7. The system according to claim 1, wherein the single host interface is an interface selected from the group consisting of: eSPI (Enhanced Serial Peripheral Interface), LPC, a serial interface, I2C interface, USB interface, SPI interface, and CAN interface.

8. A device for communicating with a plurality of peripheral devices, the device comprising:

a plurality of class drivers, wherein each of the class drivers implements functionality associated with one or more peripheral devices of the plurality of peripheral devices;

a plurality of miniport drivers, wherein each miniport driver provides an interface by which one or more of the class drivers communicate with the one or more peripheral devices using class protocols, wherein the miniport drivers communicate through a single host interface supported by the host; and

an embedded controller that interfaces with the plurality of peripheral devices using the respective native bus protocols of the peripheral devices and wherein the embedded controller interfaces with the plurality of miniport drivers using a single host interface.

9.
The device according to claim 8, further comprising:

a bus controller driver, wherein the bus controller driver implements a first portion of the single host interface and wherein the bus controller driver interfaces with the miniport drivers using a selective subset of the respective native bus protocols of the one or more peripheral devices; and

a bus controller, wherein the bus controller implements a second portion of the single host interface and wherein the bus controller interfaces with the embedded controller using the single host interface.

10. The device according to claim 9, wherein the first portion of the single host interface implemented by the bus controller driver implements the bus management processes needed to communicate using the single host interface.

11. The device according to claim 9, wherein the second portion of the single host interface implemented by the bus controller implements the bus transactions needed to communicate using the single host interface.

12. The device according to claim 9, wherein the embedded controller comprises firmware that implements the single host interface and uses the single host interface to communicate with the bus controller.

13. The device according to claim 8, wherein the single host interface is an interface selected from the group consisting of: eSPI (Enhanced Serial Peripheral Interface), LPC, a serial interface, I2C interface, USB interface, SPI interface, and CAN interface.

14.
A method for communicating between a host and a plurality of peripheral devices, the method comprising:

providing functionality associated with the plurality of peripheral devices, wherein the functionality is provided on the host by a plurality of class drivers;

transmitting first communications between the plurality of class drivers and the plurality of peripheral devices, wherein the first communications implement the peripheral device functionality on the host and wherein the first communications are transmitted by a plurality of miniport drivers using class protocols;

transmitting second communications between the plurality of miniport drivers and an embedded controller, wherein the second communications are transmitted using a single host interface; and

transmitting third communications between the embedded controller and the plurality of peripheral devices, wherein the third communications are transmitted using the respective native bus protocols of the plurality of peripheral devices.

15. The method according to claim 14, wherein the second communications are transmitted via a bus controller driver that implements a first portion of the single host interface and via a bus controller that implements a second portion of the single host interface.

16. The method according to claim 15, wherein the first portion of the single host interface implemented by the bus controller driver implements the bus management processes needed to communicate using the single host interface.

17. The method according to claim 15, wherein the second portion of the single host interface implemented by the bus controller implements the bus transactions needed to communicate using the single host interface.

18. The method according to claim 15, wherein the embedded controller comprises firmware that implements the single host interface and uses the single host interface to communicate with the bus controller.

19.
The method according to claim 14, further comprising transforming the first and third communications, which utilize the class protocols and the respective native bus protocols of the plurality of peripheral devices, into the second communications that utilize the single host interface.

20. The method according to claim 14, wherein the single host interface is selected from the group consisting of: a serial interface, I2C interface, USB interface, SPI interface, and CAN interface.
UNIFYING CLASS DEVICE INTERFACE WITH ONE HOST INTERFACE BY USING EMBEDDED CONTROLLER

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/000,089 filed on May 19, 2014, which is incorporated herein in its entirety.

TECHNICAL FIELD

The present disclosure relates to peripheral device communications with a host device via a single host interface.

BACKGROUND OF THE INVENTION

With the advent of the personal computer, there has been a steady proliferation in the variety of human interface devices (HIDs) that provide mechanisms for human users to provide input to and receive output from computer programs executing on a host device. As the capabilities of the personal computer have advanced, so have the variety and sophistication of peripheral HIDs that are available to users. This has resulted in many different hardware and software interfaces for communicating between peripheral devices and a host device.

The set of software interfaces that are utilized by a host device in supporting communications with peripheral devices is typically organized as stacked layers of interfaces. Each layer in the stack is comprised of software programs that implement a particular aspect of the functionality required for the operation of the peripheral device by the host device. The bottom layers of the stack are software programs that interface the processor of the host device with hardware buses that are used to transmit signals to and from the peripheral devices. The top layers of the stack are software programs that provide an interface by which human users or other software programs can operate the peripheral devices.

When a new peripheral device is installed for use by a host device, part of that installation process includes verifying whether the stack of the host device includes all software necessary to communicate with the new peripheral device.
In many cases, this installation process requires at least some updates to the software stack to include device-specific software that is required to fully utilize the new peripheral device. Installing device-specific software does not always address all compatibility issues. In order for the host device to interface with the new peripheral device, the host device must support the low-level bus communication protocol that is required by the peripheral device. Support for a bus communication protocol by a host device usually requires a hardware-level bus implementation that is ideally implemented when the host device is designed and manufactured.

Early generations of peripheral HIDs, which included devices such as keyboards and mice, interfaced with the host device using serial ports. Many of these early, serial port HIDs communicated with the processor of the host device via a Low Pin Count (LPC) bus. Support for the LPC bus is typically implemented on the host device by dedicated pins on the host device processor. Other bus protocols can be similarly implemented at the hardware level of the host device. Host device manufacturers choose which bus protocols to support at the hardware level, which dictates whether the host device will be compatible with certain peripheral devices.

As new types of peripheral HIDs have entered the marketplace, the LPC bus serial interface used by peripherals gave way to new peripheral device interfaces. However, the resulting growth in the number of proprietary interfaces used by peripheral HIDs became untenable for host device manufacturers to support. Largely in response to this quandary, a consortium of hardware and software manufacturers developed the Universal Serial Bus (USB), which provides a standardized interface for peripheral devices to communicate with a host device.
USB was quickly adopted throughout the industry and has further encouraged the proliferation of peripheral HIDs.

Despite the popularity of USB, the hardware interfaces and bus protocols utilized by peripheral devices have continued to evolve. New interfaces continue to be introduced and existing interfaces are adapted for use by new classes of peripherals. In some cases, only software updates are required to support a new interface. For instance, efforts to further standardize peripheral device communications have resulted in new peripheral device interfaces. The HID-USB protocol standardizes HID communications using the USB protocol. The HID-I2C protocol similarly standardizes HID communications using the I2C protocol. As long as a host device includes hardware support for the USB and I2C bus protocols, the host device can support peripheral devices that utilize the HID-USB or HID-I2C protocols through updates to the host device's software stack.

In other cases, new peripheral interfaces will require hardware support by the host device. For instance, as sensors continue to be adopted as components of peripheral devices, new interfaces (such as I2C) are being used by this relatively new class of peripheral devices. eSPI is a new interface replacing LPC as a single host interface to the embedded controller (EC). As with other bus protocols, support for the eSPI bus is ideally implemented by dedicating processor pins of the host device to the eSPI bus, in addition to including the software necessary to implement the eSPI bus protocol. Host device manufacturers must remain forward-looking in deciding whether to include hardware support for new bus protocols that are used by new classes of peripheral devices. Legacy peripheral devices place similar pressures on host device manufacturers. The need to continue to provide support for popular peripheral devices often compels host device manufacturers to continue supporting legacy hardware interfaces.
Thus, host device manufacturers face pressure to include support for emerging bus protocols used by new peripheral devices while still maintaining support for legacy bus protocols. Furthermore, host device manufacturers must remain adaptable in seamlessly supporting updates to existing bus protocols. Updates to the software interfaces available for use by a host device are relatively easy to accomplish when compared to updates to hardware-level interfaces. For instance, updating a class device driver to support new peripheral device functionality is relatively easy for a host device to support versus adding support for a new bus protocol, such as eSPI. Accordingly, there is a need for a host device that can utilize existing bus protocol hardware to support new bus protocols that would otherwise require additional hardware support by the host device.

SUMMARY OF THE INVENTION

In order to alleviate the burden on host devices to provide support for all popular peripheral device interfaces, a need exists for a mechanism by which the host can communicate with peripheral devices while only using a single protocol, all while allowing the peripheral devices and their associated software to continue to operate using their respective native communication protocols. The need also exists for this mechanism to be configurable in order to add support for new peripheral device protocols.

According to embodiments, a system for communicating between a host and a plurality of peripheral devices is provided. The host includes a plurality of class drivers and miniport drivers. Each of the class drivers implements functionality associated with one or more of the plurality of peripheral devices. Each miniport driver provides an interface by which one or more of the class drivers communicate with one or more of the plurality of peripheral devices using class protocols, wherein the miniport drivers communicate through a single host interface supported by the host.
An embedded controller interfaces with the plurality of peripheral devices using the respective native bus protocols of the peripheral devices, and the embedded controller interfaces with the plurality of miniport drivers using the single host interface.
Another embodiment comprises a bus controller driver on the host, wherein the bus controller driver implements a first portion of the single host interface and wherein the bus controller driver interfaces with the miniport drivers using a selective subset of the respective native bus protocols of the one or more of the plurality of peripheral devices; and a bus controller on the host, wherein the bus controller implements a second portion of the single host interface and wherein the bus controller interfaces with the embedded controller using the single host interface. In another embodiment, the first portion of the single host interface implemented by the bus controller driver implements the bus management processes needed to communicate using the single host interface. In another embodiment, the second portion of the single host interface implemented by the bus controller implements the bus transactions needed to communicate using the single host interface. In another embodiment, the embedded controller comprises firmware that implements the single host interface and uses the single host interface to communicate with the bus controller. In another embodiment, the information used by the host to interoperate with the plurality of peripheral devices is communicated between the host and the embedded controller using the single host interface, and the information used by the host to interoperate with the plurality of peripheral devices is communicated between the embedded controller and the plurality of peripheral devices using the respective native bus protocols of the plurality of peripheral devices.
In another embodiment, the single host interface is an interface selected from the group consisting of: eSPI (Enhanced Serial Peripheral Interface), LPC, a serial interface, I2C interface, USB interface, SPI interface, and CAN interface.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.
Figure 1 is a block diagram illustrating a stacked set of software interfaces utilized by a conventional host device for supporting communications with a set of peripheral devices.
Figure 2 is a block diagram illustrating a stacked set of software interfaces utilized by one embodiment of a host device for supporting communications with a set of peripheral devices.
Figure 3 illustrates a software and firmware stack utilized by an embodiment that supports communications with I2C HIDs over a single host interface.
DETAILED DESCRIPTION
Operation of a peripheral device using a host device requires ensuring compatibility of the peripheral device with the software and hardware interfaces provided by the host device. In many cases, determining hardware compatibility is only a matter of ascertaining whether the host device supports the type of hardware bus required by the new peripheral device. In many instances, hardware compatibility is determined when the host device and the peripheral device are designed and manufactured. In some instances, a host device may be modified by re-configuring general purpose hardware in order to support new peripheral devices.
If a peripheral device is determined to be compatible with the hardware interface provided by a host device, software compatibility can then be ascertained.
From a software perspective, compatibility requires that the host device execute the software that is necessary to operate the peripheral device. As described, most host device systems typically organize this software into stacked layers of interfaces. Each layer provides a specialized set of services used to implement the supported communication pathways between the host device and compatible peripheral devices. Supporting new peripheral devices requires ensuring that the correct software interfaces are available at each layer of the stack. In some cases, existing libraries already in use by the host device are adequate for supporting a new peripheral device. In other cases, updates to software programs in one or more levels of the stack are required.
In conventional systems, the highest layer in the stack is comprised of application software programs. These device-specific programs provide an interface, which may include a user interface, by which the features and functionality provided by the peripheral device are operated. Operating in the layer below the application software are device driver programs that implement the low-level instructions that implement the features provided by the application software. Most systems rely on class device driver programs that provide standardized libraries of driver programs that can be used to operate a variety of peripheral devices of a specific type. For instance, a HID class USB driver will include common functionality for interoperating with HIDs using the USB protocol. A similar HID class I2C driver can implement the same common HID functionality using the I2C protocol. Certain device-specific capabilities of a peripheral device may not be supported by class drivers.
These scenarios may require that miniclass driver programs also be installed in order to implement device-specific functionality. The miniclass driver is configured to interoperate with the class driver, with the miniclass driver providing device-specific functionality and the class driver providing general functionality.
Below the class drivers are software programs that implement the communication protocols by which the host device communicates with the peripheral device. These lower level layers include bus protocol drivers and bus controller programs. Each bus protocol driver implements the instructions for communicating with a peripheral device via a specific bus protocol, such as USB or I2C. A bus controller implements the instructions for actually transmitting data along one of the supported hardware buses. Together, these two layers implement the bus protocols used to transmit information between a host device and peripheral devices. In conventional systems, the host device must provide a hardware implementation in support of a bus protocol. In some cases, these protocols are implemented using dedicated pins in order to interface the bus used by the protocol with the host device processor. Since these bus protocol implementations are often hardware dependent, new bus protocols cannot be readily supported by a host device.
Figure 1 illustrates a typical system of stacked interfaces utilized in a conventional host device 100 in order to support communications with a set of peripheral devices 110a-f. In a conventional system, supporting a marketable variety of peripheral devices places a significant burden on the host device 100. The host device 100 must be designed and manufactured such that it supports a diverse set of hardware and software interfaces in order to provide robust compatibility with a marketable variety of peripheral devices.
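The per-protocol layering described above — a class driver invoking a bus protocol driver, which invokes a bus controller — can be illustrated with a minimal model. The following Python sketch is not part of the specification; all class and method names are hypothetical stand-ins chosen only to show how each supported bus protocol requires its own driver/controller chain in a conventional host device.

```python
# Hypothetical model of the conventional stacked interfaces: every
# supported bus protocol needs its own protocol-driver/bus-controller
# chain beneath the class drivers.

class BusController:
    """Lowest layer: performs raw transactions on one hardware bus."""
    def __init__(self, bus_name):
        self.bus_name = bus_name
        self.log = []

    def transact(self, payload):
        # Stand-in for driving the physical bus; records each transaction.
        self.log.append(payload)
        return f"{self.bus_name}:{payload}"

class BusProtocolDriver:
    """Implements one bus protocol (e.g. I2C, USB) atop a controller."""
    def __init__(self, controller):
        self.controller = controller

    def send(self, data):
        return self.controller.transact(data)

class ClassDriver:
    """Implements functionality common to one class of devices."""
    def __init__(self, device_class, protocol_driver):
        self.device_class = device_class
        self.protocol_driver = protocol_driver

    def operate(self, command):
        return self.protocol_driver.send(f"{self.device_class}/{command}")

# Each protocol the host supports requires its own full chain:
i2c_stack = ClassDriver("HID", BusProtocolDriver(BusController("I2C")))
usb_stack = ClassDriver("storage", BusProtocolDriver(BusController("USB")))

print(i2c_stack.operate("read-report"))   # I2C:HID/read-report
print(usb_stack.operate("read-block"))    # USB:storage/read-block
```

The duplication visible here — one `BusController` and one `BusProtocolDriver` per protocol — is the hardware and software burden the conventional system places on the host device.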
As described, this compatibility must be forward-looking such that the capabilities of the host device 100 can be updated in response to the adoption of new technologies in the peripheral marketplace. The compatibility must also be backward-looking in order for the host device 100 to continue providing support for popular legacy devices.
At the top layer in the conventional system of Figure 1, application software 115a-f provides an interface to the features and functionality provided by the peripheral devices 110a-f. In some instances, this application software 115a-f may provide an interface by which the host device 100 allows a human user to interact with the peripheral devices 110a-f. In other instances, the application software 115a-f may provide only a software interface. The remainder of the stacked layers illustrated in Figure 1 is utilized by host device 100 in order to link the application software 115a-f to the peripheral devices 110a-f.
The stacked layers illustrated in the conventional system of Figure 1 allow the host device 100 to support communications with peripheral devices via a range of communication protocols. The touchscreen software 115c executes on the CPU of the host device and communicates via I2C with a peripheral touchscreen HID 110b. Software for implementing features of a keyboard 115d receives inputs from a peripheral USB keyboard 110e. The host device 100 also executes external display software 115a that communicates with an external monitor 110a via a serial port interface. Host device 100 also executes a battery monitor software program 115b that interfaces via I2C with an external battery 110c. Host device 100 also executes software supporting external data storage 115e that interfaces with a USB flash drive 110d.
Host device 100 also executes video game software 115f that interfaces via a Serial Peripheral Interface (SPI) with a peripheral video gamepad 110f.
In the conventional system depicted in Figure 1, in order for the host device 100 to provide support for each of the peripheral devices 110a-f, the host device 100 also utilizes layers of components such as class drivers 120a-f, transport minidrivers 135a-b, bus protocol drivers 125a-d and bus controllers 130a-d. Each of these layers implements aspects of the lower-level communication protocols used by peripheral devices 110a-f. For instance, in support of a serial port peripheral device such as external monitor 110a, the external display software 115a running on the host device 100 invokes a component such as a serial port class driver 120a that provides the external display software 115a with a high-level, serial port software interface. This serial port class driver 120a implements serial port communications functions that are commonly utilized by monitors. The serial port class driver 120a, in turn, invokes a component such as a serial port driver 125a to manage the serial port connection and the transmission of data via the serial port. The serial port driver 125a relies on a component such as a serial port bus controller 130a to manage the actual transactions on the serial port bus, by which data is transmitted to and from the external monitor 110a.
In a similar fashion, the conventional system of Figure 1 also includes support for other peripheral interfaces. For peripherals designed to communicate using the I2C protocol, the host device 100 relies on a component such as an I2C driver 125b in order to manage I2C connections between the host device and peripheral devices such as the HID touchscreen 110b and external battery 110c.
Supporting these I2C peripherals further requires the host device 100 to include a component such as an I2C bus controller 130b for managing data transactions on the I2C bus.
The touch screen software 115c running on the host device 100 invokes a component such as a HID class driver 120c that provides the touch screen software 115c with a high-level, HID class interface. This HID class driver 120c implements HID protocol communication functions that are commonly utilized by HID protocol based peripheral devices. The HID class driver 120c, in turn, invokes a component such as a HID-I2C transport minidriver 135a, which implements the communication functions utilized by HID class devices using the I2C protocol. The HID-I2C transport minidriver 135a, in turn, invokes the I2C driver 125b to manage I2C connections and the transmission of data via the I2C bus.
Class drivers may implement common functionality using multiple communication protocols. The host device 100 utilizes various class interfaces that are capable of operating the I2C driver 125b. In the system of Figure 1, the battery class driver 120b and the HID class driver 120c provide the ability to utilize the I2C protocol via the I2C driver 125b. The battery class driver 120b implements functions commonly employed by peripheral battery devices. The HID class driver 120c implements functions commonly used by HIDs and supports both the I2C and USB communication protocols using the HID-I2C and HID-USB transport minidrivers 135a-b. In some host devices, two separate HID class drivers may be used, each one supporting a different communication protocol.
Similarly, the conventional host device 100 includes a component such as a USB driver 125c for managing connections to USB-enabled peripheral devices, such as USB flash drive 110d and USB keyboard 110e. The HID-USB transport minidriver 135b implements the communication functions utilized by HID class devices using the USB protocol and, in turn, relies on the USB driver 125c.
Supporting these USB peripherals further requires the host device 100 to include a component such as a USB bus controller 130c for managing transactions on the USB bus. The host device 100 utilizes various class drivers to operate the USB driver 125c, with each class driver implementing common USB functionality used by different types of peripheral devices. Storage class driver 120d implements common functionality used by peripheral storage devices and the HID class driver 120c implements common functionality used by various types of HIDs.
The conventional host device 100 also includes a SPI driver 125d for managing connections to SPI-enabled devices, such as the video gamepad 110f. In order to provide support for the video gamepad 110f, the host executes gamepad software 115f that invokes a component such as an SPI class driver that provides a high-level SPI software interface. The SPI class driver implements common SPI functionality used by peripheral gamepad devices. The SPI class driver relies on a component such as the SPI driver 125d for managing connections to SPI-enabled devices. The SPI driver 125d relies on a component such as a SPI bus controller 130d for managing the actual transactions on the SPI bus.
As described above, an ideal host device is able to support both popular current and legacy protocols and also remain adaptable to include future protocols. As the number of peripheral devices that are supported by a host device increases, so does the complexity of the stacked layers of interfaces used to support the peripheral devices. As these stacked layers increase in complexity, effectively managing updates to components of the stack becomes increasingly difficult due to the many interdependencies that develop within the stacked layers.
Addressing these demands, embodiments of the invention provide the ability for a host device to interface with a range of peripheral devices that utilize different communication protocols through a single interface. One of various embodiments is illustrated in Figure 2. According to the embodiment of Figure 2, each of the peripheral devices described with respect to Figure 1 connects to the host device 200 through a single host interface 260. The host device 200 relies on an embedded controller (EC) 215 to interface with each of the peripheral devices 210a-f. The embedded controller 215 is configured to interface with each of the peripheral devices 210a-f through each peripheral device's native interface and to interface with the host device 200 through the single host interface 260. Per this configuration, bus-level transactions with each of the peripheral devices 210a-f are managed by the embedded controller 215 on behalf of the host device 200. With all bus-level transactions funneled through the interface between the host device 200 and the embedded controller 215, the host device does not need to support any other hardware interfaces in order to support the peripheral devices 210a-f.
In the embodiment of Figure 2, the single host interface 260 is implemented using the LPC hardware interface. Other embodiments may utilize other hardware interfaces as the single host interface 260. For instance, other embodiments may implement the single host interface using eSPI (Enhanced Serial Peripheral Interface Bus), I2C or a PCI hardware interface. Some embodiments may provide the ability to configure the hardware interface that is used as the single host interface.
Regardless of the hardware interface that is used as the single host interface 260, this selection is transparent to the peripheral devices 210a-f. Each of the peripheral devices 210a-f communicates with the host device 200 using the same native bus interface as in a conventional system.
For instance, external monitor 210a still communicates with the host device using the same serial port interface used in a conventional system. However, instead of interfacing directly with a hardware interface provided by the processor of the host device, the peripheral devices 210a-f interface with a hardware interface provided by the embedded controller 215. The application software programs 205a-f associated with each of the peripheral devices 210a-f are also unaffected by the use of a single host interface 260 by the embedded controller 215. Each application software program 205a-f communicates with its corresponding peripheral device 210a-f using the same class drivers 220a-c and any miniclass drivers used in the conventional system of Figure 1. As a result, the functionality provided by the peripheral devices 210a-f is unaffected by the embedded controller 215 serving as an intermediary that bridges the low-level bus transactions with the peripheral devices 210a-f.
In the conventional system of Figure 1, the communication protocols are implemented using software components such as bus protocol drivers 125a-d and bus controllers 130a-d. Each of the bus protocol drivers 125a-d implements a software interface corresponding to one of the hardware bus protocols that is supported by host device 100. For instance, the host device 100 executes an I2C driver 125b, which provides a software interface by which the host device 100 communicates with I2C devices such as external battery 110c and touchscreen 110b. These bus protocol drivers 125a-d implement the instructions that are invoked by the host device 100 in order to communicate with peripheral devices via their respective protocols.
The hardware bus controllers 130a-d of the conventional system of Figure 1 are software programs utilized by the host device 100 in order to implement the individual bus-level transactions needed to communicate with peripheral devices according to their supported native bus protocol.
These bus controller programs provide the instructions for actually transmitting data on the individual buses that are supported by the host device. The bus controller programs implement the bus transactions used by all devices that utilize that particular protocol. For instance, I2C bus controller 130b implements the low-level software used to transmit data on the physical I2C bus and mediates access to the I2C bus on behalf of all peripheral devices that are communicating with the host device 100 via I2C. In the conventional system of Figure 1, both the HID touchscreen 110b and the battery device 110c communicate with the host device 100 via the I2C bus controller 130b.
Determining whether a new peripheral device can be operated by host device 100 in a conventional system requires ensuring that all of the software programs necessary to operate the new peripheral device are installed and accessible to the host device 100. For instance, when installing the USB keyboard 110e for use by a personal computer, all of the software necessary for operating the keyboard by the host device 100 must be installed on the personal computer. The device-specific application software 115d that provides features of the keyboard must be installed. A suitable class driver 120c for operating keyboard devices must be identified in the class driver libraries of the personal computer or otherwise must be installed. Software programs implementing the USB protocol must also be installed or identified. A device-specific USB driver 125c must be installed that implements the USB communications necessary to operate the keyboard. A transport minidriver 135b to interface the class driver 120c and the device-specific USB driver must also be installed or identified. And a general-purpose USB bus controller 130c must be identified, or otherwise installed.
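In contrast to the per-protocol chains described above, the funneling arrangement of Figure 2 routes every peripheral transaction through one host interface, with the embedded controller forwarding each transaction onto the target peripheral's native bus. The following Python sketch is a hypothetical model of that arrangement, not code from the specification; the names and the frame format are illustrative assumptions.

```python
# Hypothetical model of the funneling arrangement: the host emits every
# transaction over a single host interface, and the embedded controller
# forwards each one onto the target peripheral's native bus.

class EmbeddedController:
    """Bridges the single host interface to each peripheral's native bus."""
    def __init__(self):
        self.native_buses = {}   # native-bus name, keyed by peripheral id
        self.delivered = []

    def register(self, device_id, native_bus):
        # The EC, not the host, knows which bus each peripheral uses.
        self.native_buses[device_id] = native_bus

    def host_interface(self, frame):
        # A frame arrives over the single host interface (e.g. LPC).
        device_id, payload = frame
        bus = self.native_buses[device_id]
        self.delivered.append((bus, payload))
        return f"{bus}->{device_id}:{payload}"

ec = EmbeddedController()
ec.register("touchscreen", "I2C")
ec.register("keyboard", "USB")

# The host only ever calls host_interface(); it needs no I2C or USB
# hardware of its own to reach either peripheral.
assert ec.host_interface(("touchscreen", "report")) == "I2C->touchscreen:report"
assert ec.host_interface(("keyboard", "keypress")) == "USB->keyboard:keypress"
```

The point of the model is the single entry point: adding a peripheral on a new bus changes only the embedded controller's registrations, never the host's hardware interfaces.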
In the embodiment of Figure 2, the same application software 205a-f and class drivers 220a-e are used as in the conventional system of Figure 1. Consequently, the modifications required to convert a conventional system to an embodiment are transparent to the application software 205a-f and class drivers 220a-e. The peripheral devices 210a-f are also unchanged. From the perspective of the peripheral devices 210a-f and the application software 205a-f and class drivers 220a-e executing on the host device 200, no changes are apparent since the native interface being used to communicate between these components is unchanged from a conventional host device. For instance, the USB HID keyboard 210e and the corresponding application software 205d and class driver 220c still invoke the USB protocol to communicate between them. However, rather than the transport minidrivers 135a-b, device-specific bus protocol drivers 125a-d and bus controllers 130a-d present in the conventional system of Figure 1, embodiments instead utilize miniport drivers 240a-d, a single controller driver 245 and a single bus controller 250 for implementing a single host interface by which all peripheral devices will communicate with the host device 200.
According to embodiments, components such as miniport drivers 240a-d are used in conjunction with class drivers 220a-e. The miniport drivers 240a-d implement the communications used by the class drivers 220a-e. Miniport drivers 240a-d implement the communications functions used by a class of devices and are configured to interoperate with the bus-level communications functions provided by the controller driver 245. For instance, in the embodiment of Figure 2, the HID miniport driver 240 implements communications used by HID class devices and is configured to forward these communications to the embedded controller 215 using the bus communication protocol implemented by the controller driver 245 and the controller 250.
Communications received by the controller 250 are then translated from the protocol of the single host interface 260 to the native protocol used by the peripheral devices 210a-f. The analogous translations are made by these components for communications originating at the peripheral devices 210a-f and flowing to the application software 205a-f. In this manner, the miniport drivers 240a-d serve as a bridge on the host device 200 between a class-specific protocol, such as the HID protocol, and the native bus protocol utilized by the peripheral devices, through the single bus protocol that is implemented by the host device 200.
Similar to the bus protocol drivers 125a-d and the bus controllers 130a-d of the conventional host device in Figure 1, the controller driver 245 and the controller 250 implement the bus communications for host device 200. However, rather than implementing every bus protocol that is deemed necessary by the designers of the host device 200, the controller driver 245 and the controller 250 implement a single bus protocol. In the embodiment of Figure 2, this single bus protocol is LPC, but it can be any other communication protocol that can be supported by the host device 200 and the embedded controller 215. Regardless of the bus protocol that is actually implemented, the ability of the host device 200 to rely on the single host interface 260 to communicate with all peripherals means that the host device 200 need only implement hardware support for the single bus protocol utilized by the single host interface 260. In some embodiments, this single host interface may be configurable. Such embodiments still benefit from only having to support a limited number of bus protocols, rather than the universe of bus protocols required by the set of peripheral devices that will be supported by the host device.
Embodiments also benefit from maintaining full control of the bus communication between the peripherals and the host device processor.
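The bridging path described above — a miniport driver re-framing a class-protocol request, the single controller driver carrying it over the single host interface, and the embedded controller unwrapping it into a native-bus transaction — can be sketched end to end. The following Python model is illustrative only; the function names, the frame fields, and the direct function hand-off standing in for the LPC bus are all assumptions made for the sketch.

```python
# Hypothetical end-to-end model of the bridging path: HID class request
# -> miniport driver -> single controller driver (LPC stand-in)
# -> embedded controller -> native I2C firmware.

def native_i2c(payload):
    """Stand-in for the embedded controller's native I2C firmware."""
    return f"i2c:{payload}"

def embedded_controller(frame):
    """Translates single-host-interface frames to the native bus."""
    native = {"I2C": native_i2c}[frame["target_bus"]]
    return native(frame["payload"])

def controller_driver(frame):
    """Single controller driver: every frame uses one bus protocol.

    The physical LPC transfer is modeled here as a direct hand-off.
    """
    return embedded_controller(frame)

def hid_miniport(report_type, report_id):
    """Miniport driver: re-frames a HID class request, never the payload."""
    return controller_driver({
        "target_bus": "I2C",
        "payload": f"hid/{report_type}/{report_id}",
    })

# The HID class driver above this layer is unchanged; only the transport
# beneath it differs from the conventional system.
assert hid_miniport("input", 1) == "i2c:hid/input/1"
```

Note that only the framing changes at each hop; the HID payload itself passes through untouched, which is why the class drivers and application software need no modification.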
Certain bus protocols require proper bus mastering by components that utilize the bus. Even though the bus protocol required by a peripheral may be supported by the host device, differences in low level aspects of the bus communications, such as bus mastering, can result in error conditions and/or inefficient operation of this communication link. Since the bus controllers 130a-d and the bus protocol drivers 125a-d in a conventional system potentially interface directly with a variety of peripherals, updates to these components must ensure that backwards compatibility is maintained. Thus, updates to the software of these bus protocol layers that are made in order to accommodate a new peripheral device can be difficult to implement effectively. Consequently, a host device benefits by having to support bus communications with only the embedded controller, which is better suited to accommodating any such incongruities in the low level bus protocol implementations that may be used by different peripheral devices.In order for the use of a single host interface 260 by the host device 200 to be transparent to the peripheral devices 210a-f, the host device relies on embedded controller 215 to bridge bus transactions with the peripheral devices 210a-f. In some embodiments, the embedded controller is comprised of three main components. One of these components is the firmware used to implement the bus transactions using the bus protocol chosen for the single host interface 260. In the embodiment of Figure 2, the firmware of the embedded controller 215 communicates with the processor of the host device 200 using an LPC bus supported in the host device 200 hardware. The firmware transmits communications between the peripheral devices 210a-f and the host device 200, but bridges the communications using the bus protocol utilized by the single host interface 260.The embedded controller 215 of the embodiment of Figure 2 is further comprised of a super I/O component. 
This super I/O component translates between the native bus protocol communications used by the device native interface firmware and the bus protocol utilized by the single host interface 260. The super I/O component need not process the actual information transmitted between the host and the peripheral devices and instead need only translate the protocol used to transmit these communications. In some embodiments, the super I/O component of the embedded controller 215 communicates with the host device 200 using a memory/mailbox interface that is supported by the processor of the host device 200.
The embedded controller 215 of the embodiment of Figure 2 is further comprised of device native interface firmware. The device native interface firmware implements the communications with the peripheral devices 210a-f according to their native bus protocols. This firmware implements the bus transactions for each of the bus protocols that are supported on behalf of the host device 200. New bus protocols can be supported by updating this firmware without affecting the host device 200, and thus without requiring updates to the hardware interfaces of the host device 200. In some embodiments, the device native interface firmware of the embedded controller 215 may implement a set of generic and device-specific class drivers configured to support bus transactions in the native bus protocols used by the peripheral devices 210a-f.
Figure 3 illustrates the software and firmware stack for an embodiment that supports communications with HID devices that utilize the I2C protocol, using LPC as a single host interface. In the embodiment of Figure 3, the host device 300 executes an HID class driver 315 that implements HID functionality for use by the host. In some embodiments, the HID class driver 315 will implement functionality by which a user can interact with supported HID peripheral devices 340 and 345.
In other embodiments, the HID class driver 315 will only implement functionality used by the host 300 to interoperate with supported HID peripheral devices 340 and 345. The HID class driver 315 interfaces with an HID-EMI miniport driver 320 in order to communicate with the peripheral HID devices 340 and 345. The HID-EMI miniport driver 320 communicates with the HID class driver using the class-specific HID protocol utilized by the HID devices 340 and 345. The HID-EMI miniport driver 320 translates these HID protocol communications to the LPC/EMI bus protocol 310 that is used as the single host interface supported by the host device 300. The HID-EMI miniport driver 320 interfaces with the LPC/EMI bus driver 325, which implements the LPC/EMI bus protocol 310 used as the single host interface by the host device 300.
On the embedded controller 305, the LPC/EMI protocol is implemented in firmware 330. The firmware component 330 transmits the translated peripheral device communications between the host and the embedded controller by interfacing with the LPC/EMI bus driver 325 of the host device 300. Also executing on the embedded controller 305, an I2C firmware driver 335 interfaces with the peripheral devices 340 and 345 in the native I2C bus protocol.
Although the foregoing specification describes specific embodiments, numerous changes in the details of the embodiments disclosed herein and additional embodiments will be apparent to, and may be made by, persons of ordinary skill in the art having reference to this description. In this context, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of this disclosure. Accordingly, the scope of the present disclosure should be determined by the following claims and their legal equivalents.
The present invention describes a method including: opening a door of a shell, the shell enclosing a stack of full-contact rings; determining a full-contact ring within the stack to engage; opening split ends of the full-contact ring to release a large wafer; lowering the large wafer from the full-contact ring; closing the split ends of the full-contact ring; extracting the large wafer from the shell; and closing the door of the shell.
IN THE CLAIMS We claim: 1. A method comprising: opening a door of a shell, said shell enclosing a stack of full-contact rings; determining a full-contact ring within said stack to engage; opening split ends of said full-contact ring to release a large wafer; lowering said large wafer from said full-contact ring; closing said split ends of said full-contact ring; extracting said large wafer from said shell; and closing said door of said shell. 2. The method of claim 1 wherein said split ends are interlocked until opening. 3. The method of claim 1 comprising lowering said large wafer a distance of 3 mm when said stack of full-contact rings has a pitch of 10-12 mm. 4. The method of claim 1 wherein said large wafer is released from an internal groove in said full-contact ring. 5. A full-contact ring to hold a large wafer comprising: two arms, said arms being semicircular, said arms having split ends located at 6 o'clock; two pillars securing said arms, said pillars being rigid, said pillars located at 11 o'clock and 1 o'clock; and two outriggers supporting said arms, said outriggers being at 8 o'clock and 4 o'clock. 6. The full-contact ring of claim 5 wherein said arms are formed from a polyetheretherketone (PEEK) material filled with milled carbon fiber (MCF). 7. The full-contact ring of claim 5 wherein said arms are formed from a liquid crystal polymer (LCP) material filled with milled carbon fiber (MCF). 8. The full-contact ring of claim 5 wherein said arms include an internal groove to hold said large wafer. 9. The full-contact ring of claim 5 wherein said split ends may be interlocked. 10. A shell comprising: a vertical stack of full-contact rings; pillars to support said full-contact rings; ribs to support outriggers on said full-contact rings; and a door. 11. The shell of claim 10 comprising a liquid crystal polymer (LCP) material filled with milled carbon fiber (MCF) for an in-fab wafer carrier. 12. 
The shell of claim 10 further comprising a large wafer held by each of said full-contact rings. 13. The shell of claim 10 comprising a liquid crystal polymer (LCP). 14. The shell of claim 10 comprising a polycarbonate material for a wafer shipper.
FULL-CONTACT RING FOR A LARGE WAFER BACKGROUND OF THE INVENTION 1. FIELD OF THE INVENTION [0001] The present invention relates to the field of semiconductor integrated circuit manufacturing, and more specifically, to a method of supporting a large wafer during transport and manufacturing. 2. DISCUSSION OF RELATED ART [0002] Gordon Moore originally observed in 1965 that technology innovation results in a doubling of the number of transistors per unit area on an integrated circuit (IC) chip every 12 months. By 1975, the trend had settled down to a doubling about every 18 months. Over the ensuing decades, the semiconductor industry has adhered closely to Moore's Law in increasing the density of transistors for each generation of IC chips. [0003] Maintaining such a schedule has required a scaling down of the metal oxide semiconductor field effect transistor (MOSFET) that is used in a complementary metal-oxide-semiconductor (CMOS) circuit. The characteristics of the transistor have been improved by implementing various advanced features such as twin well, super-steep retrograde well profile, abrupt source and drain (S/D) junction, highly doped channel, thinner gate dielectric, and shorter gate length. [0004] The IC chip includes a planar transistor that is formed in a bulk substrate, such as a wafer. The wafer is made from a semiconductor, such as silicon. During processing, a material may be added to, or removed from, the wafer. The material may include an insulator, such as silicon oxide, or a conductor, such as copper. [0005] Some processes that may be used to add the material, partially or completely, to the wafer include chemical vapor deposition, sputtering, electroplating, oxidation, and ion implantation. Other processes that may be used to remove the material, partially or completely, from the wafer include wet etching, dry etching, and chemical-mechanical polishing.
As needed, photolithography may be used to restrict the process to a certain portion of the wafer. [0006] Many parameters of the IC chip are monitored during fabrication to ensure that the product specification for performance and reliability will be met even as the design rule becomes tighter. However, as the wafer size becomes larger, such as a diameter of 450 mm, challenges may arise in handling and transporting the wafer without incurring any damage. BRIEF DESCRIPTION OF THE DRAWINGS [0007] Figure 1 is a plan view of a full-contact ring in a closed configuration stacked inside a shell according to an embodiment of the present invention. [0008] Figure 2 is a plan view of a full-contact ring in an open configuration according to an embodiment of the present invention. DETAILED DESCRIPTION OF THE PRESENT INVENTION [0009] In the following description, numerous details, such as specific materials, dimensions, and processes, are set forth in order to provide a thorough understanding of the present invention. However, one skilled in the art will realize that the invention may be practiced without these particular details. In other instances, well-known semiconductor equipment and processes have not been described in particular detail so as to avoid obscuring the present invention. [0010] In an embodiment of the present invention as shown in Figure 1, a full-contact ring 100 secures, holds, or supports a substrate, such as a large wafer 200. The large wafer 200 may include an elemental semiconductor, such as silicon, or a compound semiconductor, such as silicon germanium (SiGe) or gallium arsenide (GaAs). [0011] The full-contact ring 100 prevents the large wafer 200 from accumulating damage during manufacture of an Integrated Circuit (IC) chip. The damage may be structural, mechanical, physical, or chemical. The damage may be localized to a portion of an edge, surface, or bulk of the large wafer 200.
[0012] In particular, the full-contact ring 100 prevents the large wafer 200 from sustaining damage when stored, transported, or handled between process steps. Damage to the large wafer 200 may result from improper or excessive exposure, contact, shock, or vibration. [0013] The large wafer 200 may be circular. In an embodiment of the present invention, the large wafer 200 has a diameter of 150 millimeters (mm). In an embodiment of the present invention, the large wafer 200 has a diameter of 200 mm. In an embodiment of the present invention, the large wafer 200 has a diameter of 300 mm. In an embodiment of the present invention, the large wafer 200 has a diameter of 450 mm. In an embodiment of the present invention, the large wafer 200 has a diameter of 675 mm. [0014] The large wafer 200 may be circular and flat. In an embodiment of the present invention, the large wafer 200 has a diameter of 150 (+/- 0.2) mm and a thickness of 675 (+/- 15) microns (um). In an embodiment of the present invention, the large wafer 200 has a diameter of 200 (+/- 0.2) mm and a thickness of 725 (+/- 15) microns (um). In an embodiment of the present invention, the large wafer 200 has a diameter of 300 (+/- 0.2) mm and a thickness of 775 (+/- 25) um. In an embodiment of the present invention, the large wafer 200 has a diameter of 450 mm and a thickness selected from a range of 700-1,300 um. In an embodiment of the present invention, the large wafer 200 has a diameter of 450 mm and a thickness selected from a range of 825-925 um. In some situations, the large wafer 200 is thicker than otherwise required so as to accommodate strips, cleans, etches, and reworks. [0015] In an embodiment of the present invention as shown in Figure 1, multiple full-contact rings 100 are nested within a shell 300 for an in-fab wafer carrier or a wafer shipper. In an embodiment of the present invention, the full-contact rings 100 are disassembled to clean the shell 300.
In an embodiment of the present invention, the full-contact rings 100 are left in place even when cleaning the shell 300. [0016] In an embodiment of the present invention, a maximum of 5 full-contact rings 100 are stacked inside the shell 300. In an embodiment of the present invention, a maximum of 10 full-contact rings 100 are stacked inside the shell 300. In an embodiment of the present invention, a maximum of 15 full-contact rings 100 are stacked inside the shell 300. In an embodiment of the present invention, a maximum of 20 full-contact rings 100 are stacked inside the shell 300. In an embodiment of the present invention, a maximum of 25 full-contact rings 100 are stacked inside the shell 300. [0017] The shell 300 is a housing that provides support and protection for the large wafers 200 held by the full-contact rings 100. In an embodiment of the present invention, the shell 300 keeps dust out, allows purging, and protects the large wafer 200 from damage. [0018] In an embodiment of the present invention, the shell 300 has a width of 539.5 mm and a depth of 505 mm. In an embodiment of the present invention as shown in Figure 1, the shell 300 has corners that are faceted to reduce volume that needs to be purged. In an embodiment of the present invention as shown in Figure 1, the shell 300 has corners that are rounded to reduce stress that may result from the weight of the large wafers 200. [0019] In an embodiment of the present invention, the shell 300 has an outer wall with a thickness of 2.0-3.0 mm. In an embodiment of the present invention, the shell 300 has an outer wall with a thickness of 3.0-4.0 mm. [0020] The shell 300 has an opening in the front wall. In an embodiment of the present invention, the opening occupies most of the front wall. In an embodiment of the present invention, the center of the front wall of the shell 300 is located at a 6 o'clock position.
[0021] In an embodiment of the present invention, the portion of the shell 300 surrounding the opening is reinforced by stiffener rods 360 placed near the edges of the opening. In an embodiment of the present invention, the portion of the shell 300 surrounding the opening is reinforced by a stiffener hoop, such as is formed by connecting some, or all, of the stiffener rods 360. [0022] In an embodiment of the present invention, the opening of the shell 300 has a reclosable door 350 with a latch. In an embodiment of the present invention, the opening of the shell 300 has a resealable door 350 with a hinge. The door 350 of the shell 300 is dedicated to barrier protection and is decoupled from retention of the large wafer 200. A key is not used to lock the door 350 of the shell 300. [0023] In an embodiment of the present invention, one or more full-contact rings 100 are evenly arranged inside the shell 300. In an embodiment of the present invention, the full-contact rings 100 are separated by integrated flanges. In an embodiment of the present invention, the full-contact rings 100 are separated by discrete collars. [0024] In an embodiment of the present invention, a vertical stack of full-contact rings 100 is aligned and supported by pillars 310 which are connected to a top flange and a base of the shell 300. The shell 300 is not used as a primary structural support, so dimensional variation is minimized. Instead, as load-bearing members, the pillars 310 transfer weight and stress from the full-contact rings 100 and the enclosed large wafers 200 to external handling interfaces that are located above and below the shell 300. [0025] In an embodiment of the present invention, the pillars 310 include rigid structural support bars, such as two long shoulder bolts, that secure the vertical stack of full-contact rings 100 inside the shell 300.
In an embodiment of the present invention, support is provided at two locations towards the rear of the full-contact ring 100, such as at 11 o'clock and at 1 o'clock. [0026] In an embodiment of the present invention, the full-contact ring 100 is further supported by tabs or outriggers 120. In an embodiment of the present invention, support is provided at two locations towards the sides of the full-contact ring 100, such as at 8 o'clock and at 4 o'clock. In an embodiment of the present invention, the outriggers 120 of a stack of full-contact rings 100 are supported by ribs 320 inside the shell 300. In an embodiment of the present invention, the outriggers 120 of a stack of full-contact rings 100 are supported by a shelf that runs along part or all of the left and right sides of the inside of the shell 300. [0027] In an embodiment of the present invention, the full-contact ring 100 has no rubbing or sliding parts near the large wafer 200, such as in a hinge, so as to avoid forming, accumulating, spreading, or transferring particulates or contaminants. [0028] In an embodiment of the present invention, the full-contact ring 100 includes two arms 105 that are connected. Each arm 105 of the full-contact ring 100 has a semicircular or "C" shape. Each arm 105 of the full-contact ring 100 has an inner groove 115 and an outer circumference 105. The two arms 105 curve around the sides and approach each other towards the front until the split ends 130 are separated by an adjustable gap. [0029] When the full-contact ring 100 is in a closed configuration, the split ends 130 of the two arms 105 are brought into close proximity with a small gap as shown in Figure 1. In an embodiment of the present invention, the split ends 130 are engaged. In an embodiment of the present invention, the split ends 130 are interlocked. In an embodiment of the present invention, the split ends 130 of the two arms 105 are aligned but not locked when the full-contact ring 100 is in a closed position.
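A minimal geometric sketch, with hypothetical names, of the open/closed behavior of the split ends described above; the 450 mm wafer and the 1.5-3.0 mm per-side spread are taken from the description:

```python
# Minimal model of the split-end open/close behavior. Class and
# method names are hypothetical; the numbers follow the description
# (450 mm wafer, arms spread 1.5-3.0 mm per side when opened).

WAFER_RADIUS_MM = 225.0  # half of a 450 mm large wafer

class SplitRing:
    """Two C-shaped arms whose split ends part to pass a wafer."""

    def __init__(self, nominal_radius_mm=WAFER_RADIUS_MM):
        self.nominal_radius_mm = nominal_radius_mm
        self.spread_per_side_mm = 0.0  # closed: split ends nearly touch

    def open(self, spread_per_side_mm=2.0):
        # A flexure spreads the arms, enlarging the inscribed circle.
        self.spread_per_side_mm = spread_per_side_mm

    def close(self):
        self.spread_per_side_mm = 0.0

    def clears_wafer(self, wafer_radius_mm=WAFER_RADIUS_MM):
        # The wafer can pass only when the inscribed circle is
        # larger than the wafer itself.
        return self.nominal_radius_mm + self.spread_per_side_mm > wafer_radius_mm

ring = SplitRing()
ring.open(spread_per_side_mm=2.0)  # within the 1.5-3.0 mm range
```

In the closed state the inscribed circle equals the wafer radius, so the wafer is retained; any spread within the stated range provides clearance.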
[0030] When the full-contact ring 100 is in an open configuration, the split ends 130 of the two arms 105 are separated with a large gap as shown in Figure 2. In an embodiment of the present invention, the split ends 130 are disengaged. In an embodiment of the present invention, the split ends 130 are unlocked. [0031] In an embodiment of the present invention, the full-contact ring 100 is further captured and supported towards the front of the shell 300 by pins or a recess 370 located in the door 350 of the shell 300. [0032] In an embodiment of the present invention, the full-contact ring 100 has a flatness of 0.1-0.3 mm. In an embodiment of the present invention, the full-contact ring 100 has a flatness of 0.3-0.7 mm. In an embodiment of the present invention, the full-contact ring 100 has a flatness of 0.7-1.3 mm. [0033] In an embodiment of the present invention, each arm 105 has a (vertical) height of 10-20 mm to minimize sag of the large wafer 200 that is being supported or held. In an embodiment of the present invention, the height of each arm 105 is in a direction perpendicular to a surface of the large wafer 200. [0034] In an embodiment of the present invention, each arm 105 has a (lateral) thickness of 1.5 mm to maximize flexibility. In an embodiment of the present invention, the thickness of each arm 105 is in a direction parallel to a surface of the large wafer 200. [0035] In an embodiment of the present invention, each arm 105 of the full-contact ring 100 includes an inner groove 115. In an embodiment of the present invention, the groove 115 has parallel edges that are chamfered. In an embodiment of the present invention, the groove 115 has a cross-section with a variable radius of curvature that is larger towards an open exterior end of the groove 115 and smaller towards a closed interior end of the groove 115. In an embodiment of the present invention, the wall of the cross-section of the groove 115 varies as continuous curves.
In an embodiment of the present invention, the radius of curvature of the cross-section of the groove 115 varies as discrete steps. [0036] A first consequence of having a groove with the chamfered cross-section is that the outer edges of the large wafer 200 can move towards the closed interior end of the groove 115 more readily when the full-contact ring 100 is in the open configuration as shown in Figure 2. The groove 115 can capture, align, and center the large wafer 200 even when the large wafer 200 is not entirely flat or is slightly off-center. [0037] A second consequence of having the groove 115 with the chamfered cross-section is that the outer edges of the large wafer 200 can fit against the interior walls of the groove 115 more securely when the full-contact ring 100 is in the closed configuration as shown in Figure 1. [0038] In an embodiment of the present invention, the interior wall of the groove 115 touches the upper surface of the large wafer 200 in an approximately parallel way in an area within a distance of 1.5 mm inwards from the edge. [0039] In an embodiment of the present invention, the interior wall of the groove 115 touches the upper surface of the large wafer 200 in an approximately tangential way in a location within a distance of 1.5 mm inwards from the edge. [0040] In an embodiment of the present invention, the entire periphery of the large wafer 200 is supported or held. Consequently, the full-contact ring 100 uniformly distributes the weight of the large wafer 200 and prevents significant movement of the large wafer 200. [0041] In an embodiment of the present invention, the full-contact ring 100 minimizes wafer sag. [0042] In an embodiment of the present invention, the full-contact ring 100 minimizes wafer displacement. [0043] In an embodiment of the present invention, the full-contact ring 100 minimizes wafer rotation. [0044] In an embodiment of the present invention, the full-contact ring 100 minimizes wafer stress.
[0045] In an embodiment of the present invention, two adjacent full-contact rings 100 minimize wafer-to-wafer contact. In an embodiment of the present invention, the full-contact ring 100 maintains a wafer pitch of 10-12 mm. In an embodiment of the present invention, the full-contact ring 100 maintains a wafer pitch of 12-14 mm. [0046] In an embodiment of the present invention, the full-contact ring 100 is actuated by a flexure 400. The flexure 400 engages the full-contact ring 100 to separate the split ends 130. The two arms 105 of the full-contact ring 100 may be spread apart to a larger circumference, thus enlarging the gap between the split ends 130, until the large wafer 200 has sufficient clearance to be moved inside or outside the full-contact ring 100. [0047] In an embodiment of the present invention, the nominal radius of an imaginary circle inscribed by the full-contact ring 100 is increased by 1.5-3.0 mm per side to allow the large wafer 200 to be moved inside or outside. In an embodiment of the present invention, the nominal radius of an imaginary circle inscribed by the full-contact ring 100 is increased by 3.0-4.5 mm per side to allow the large wafer 200 to be moved inside or outside. [0048] The large wafer 200 is loaded or unloaded into the chamfered groove 115 of the full-contact ring 100 from below (the bottom side). A robotic mechanism 500, such as a 6-axis robotic mechanism, may be used for handling the large wafer 200. Given a vertical pitch of 10 mm between adjacent full-contact rings 100 in a stack, the extraction volume includes a width of 450 mm, a height of 7.9 mm in the middle portion, and a height of 3.381 mm on both the left and right sides. In an embodiment of the present invention, the large wafer 200 advances 3 mm outward (towards the door or the front), then drops downwards 3 mm, before exiting the shell 300. [0049] In an embodiment of the present invention, the full-contact ring 100 is formed from a flexible material.
The flexible material allows the full-contact ring 100 to be bent repeatedly or deformed continually. [0050] In an embodiment of the present invention, the full-contact ring 100 is formed from a tough material. The tough material allows the full-contact ring 100 to be restored or returned to its original size and shape without sustaining damage. [0051] In an embodiment of the present invention, the full-contact ring 100 is formed from a compliant material. The compliant material allows the full-contact ring 100 to remain in contact with the large wafer 200 that is being supported or held. [0052] Injection molding of a structural part having thin walls requires a resin that has a good balance of temperature resistance, mechanical properties, and chemical resistance. [0053] In an embodiment of the present invention, the full-contact ring 100 is formed from a clean polyetheretherketone polymer (available as VICTREX® PEEK™ from Victrex plc, Lancashire, UK, having a melt viscosity grade of 90G or 150G) that is impregnated or filled with milled Carbon fiber (MCF) for electrostatic discharge (ESD) protection. [0054] The PEEK material is a semicrystalline thermoplastic polymer compound that demonstrates high temperature resistance (continuous use at a temperature up to 260 degrees Centigrade), exceptional strength and hardness (flexural modulus, as tested at 23 degrees Centigrade, in a range from 4.1 GigaPascals when unfilled to 20.2 GPa when filled), outstanding chemical resistance (inert to water, pressurized steam, and almost all chemicals except halogen gases, some strong acids, and a few sulfur compounds), and low particle shedding. However, the PEEK material has a high cost. [0055] In an embodiment of the present invention, the full-contact ring 100 is formed from a clean liquid crystal polymer (LCP) impregnated or filled with milled Carbon fiber (MCF).
The LCP material is a class of wholly aromatic polyester polymers that provide excellent wear resistance, low particle shedding, and low moisture absorption at an intermediate cost. The LCP material offers excellent barrier performance for purge applications and is self-extinguishing. [0056] In an embodiment of the present invention, the full-contact ring 100 is formed from Polyetherimide (PEI). The PEI material is a thermoplastic polymer that provides good performance at a moderate cost. The PEI material is available as Ultem® from Sabic Innovative Plastics, Pittsfield, Massachusetts (formerly part of General Electric, Fairfield, Connecticut). [0057] In an embodiment of the present invention, the primary shell 300 and door 350 for an in-fab wafer carrier are formed from the LCP material. [0058] In an embodiment of the present invention, the primary shell 300 and door 350 for a wafer shipper are formed from a low ionic grade polycarbonate (PC). The PC material provides minimal but adequate performance at a low cost. Unfilled PC may be used for shipping containers since ESD protection may not be necessary. [0059] Many embodiments and numerous details have been set forth above in order to provide a thorough understanding of the present invention. One skilled in the art will appreciate that many of the features in one embodiment are equally applicable to other embodiments. One skilled in the art will also appreciate the ability to make various equivalent substitutions for those specific materials, processes, dimensions, concentrations, etc. described herein. It is to be understood that the detailed description of the present invention should be taken as illustrative and not limiting, wherein the scope of the present invention should be determined by the claims that follow.
A protected boot sequence in a computer system. A reset vector directs the system to a boot program including a protected program. This protected program verifies the integrity of the BIOS contents before branching to the BIOS for execution of normal bootstrap functions. The protected program can also lock down various blocks of bootstrap code to prevent them from being changed after a certain point in the boot sequence. The protected boot sequence can proceed in layers, with each layer providing some level of validation or security for succeeding layers.
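The layered validation summarized above can be sketched in Python. SHA-256 and every name here are illustrative assumptions, not the patent's specified mechanism; the description also contemplates simple checksums and digital signatures.

```python
import hashlib

# Illustrative sketch of layered boot validation: each layer checks
# the next before transferring control, halting on a mismatch.

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def protected_boot(layers, expected):
    """Validate each (name, image) layer in order; halt on a mismatch."""
    executed = []
    for name, image in layers:
        if digest(image) != expected[name]:
            return executed, f"halt: {name} failed validation"
        executed.append(name)  # control transfers to the validated layer
    return executed, "operating system loaded"

# Expected digests stand in for values stored in protected firmware.
bios = b"bios bootstrap code"
loader = b"operating system loader"
expected = {"BIOS": digest(bios), "OS loader": digest(loader)}
ran, status = protected_boot([("BIOS", bios), ("OS loader", loader)], expected)
```

A tampered image fails its digest comparison, so the sequence stops before the compromised layer ever runs, which is the behavior the abstract describes.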
We claim: 1. A method of booting software in a computer system, comprising: initiating a reset function; executing a first protected program; validating a firmware-based BIOS program with code that cannot be updated by the computer system by verifying the BIOS program contains expected code; locking down portions of at least one of the first protected program and the BIOS program; and executing the BIOS program. 2. The method of claim 1, wherein said executing the BIOS program includes validating and executing an operating system loader. 3. The method of claim 1, wherein said validating includes locating the BIOS program. 4. The method of claim 1, wherein said initiating includes branching to an entry point of a boot sequence in the first protected program. 5. The method of claim 1, wherein said executing the BIOS program includes initializing a main memory. 6. The method of claim 1, wherein said executing the BIOS program includes determining a reset type. 7. The method of claim 1, wherein said validating the BIOS program includes validating and installing updated modules. 8. The method of claim 1, wherein said validating the BIOS program includes validating a second protected program, and executing the BIOS program includes executing the second protected program. 9. The method of claim 8, wherein the second protected program is an option ROM program. 10.
A method of booting software in a computer system, comprising: initiating a reset function; executing a first protected program that cannot be updated by the computer system; validating, through said executing a first protected program, at least one of first and second firmware-based BIOS programs by verifying the at least one of the first and second BIOS programs contains expected code; executing the first BIOS program; executing a second protected program; and executing the second BIOS program, wherein at least one of said executing the first protected program and said executing the second protected program includes locking down blocks of data in at least one of said first protected program, said second protected program, said first BIOS program, and said second BIOS program. 11. The method of claim 10, wherein said initiating includes branching to an entry point of a boot sequence in the first protected program. 12. The method of claim 10, wherein said executing the first BIOS program includes initializing a main memory. 13. The method of claim 10, wherein said executing the first BIOS program includes determining a reset type. 14. The method of claim 10, wherein said executing the second protected program includes loading additional security elements into memory. 15. The method of claim 10, wherein at least one of said executing the first BIOS program and said executing the second BIOS program includes validating an operating system loader. 16. The method of claim 15, further comprising executing the operating system loader. 17. The method of claim 10, wherein said validating includes validating an option ROM. 18. The method of claim 17, wherein said executing at least one of the first and second BIOS programs includes executing a validated option ROM program. 19. The method of claim 10, wherein said validating includes validating and installing updated modules. 20.
A machine-readable medium having stored thereon instructions, which when executed by at least one processor cause said at least one processor to perform the following: initiating a reset function; executing a protected program that cannot be updated by the at least one processor; validating, by said executing a protected program, a firmware-based BIOS program by verifying the BIOS program contains expected code; locking down portions of at least one of said protected program and said BIOS program; and executing the BIOS program. 21. The medium of claim 20, wherein said executing the BIOS program includes validating and executing an operating system loader. 22. The medium of claim 20, wherein said validating includes locating the BIOS program. 23. The medium of claim 20, wherein said initiating includes branching to an entry point of a boot sequence in the first protected program. 24. The medium of claim 20, wherein said validating the BIOS program includes validating a second protected program, and executing the BIOS program includes executing the second protected program. 25. The medium of claim 20, wherein said validating the BIOS program includes validating and installing updated modules. 26. An apparatus, comprising: a first firmware-based memory block containing a protected first program sequence; and a second firmware-based memory block containing a second program sequence for booting a computer system; wherein the first program sequence includes instructions for validating the second program sequence by verifying the second program sequence contains expected code and for transferring control to the second program sequence; wherein the protected first program sequence cannot be updated by the computer system; wherein the first program sequence includes instructions for locking down at least one of a portion of the first memory block and a portion of the second memory block. 27.
The apparatus of claim 26, wherein the second program sequence includes instructions for locking down at least one of a portion of the first memory block and a portion of the second memory block. 28. The apparatus of claim 26, wherein at least one of the first and second program sequences includes instructions for validating an option ROM. 29. The apparatus of claim 26, wherein at least one of the first and second program sequences includes instructions for validating and installing updated modules. 30. The apparatus of claim 26, wherein the first program sequence includes instructions for locating the second program sequence. 31. The apparatus of claim 26, wherein the second program sequence includes a BIOS program sequence.
BACKGROUND OF THE INVENTION 1. Field of the Invention The invention pertains generally to a boot process in a computer system. More particularly, it pertains to a protected boot process that resists tampering with the boot sequence. 2. Description of the Related Art Before a computer system can operate, it must have an operating system (OS) in its memory that allows the computer's resources to be reached and controlled by the other software, such as the various application programs. It is desirable to have various types and versions of operating systems loadable into the same computer system hardware. To accomplish this, the computer hardware has a non-volatile, comparatively simple bootstrap program, which initializes various basic functions and then loads more complicated software from a disk. The boot sequence may have multiple levels of load programs, with each successive level loading a more complex, more capable, but also more modifiable program until the OS itself is loaded. In a conventional system, the boot process is started with a reset function of some kind. This might be a cold start reset (power to the hardware is initially off), a warm start reset (the hardware is already powered up, but in a partially unknown logic state), or one of several other starting conditions. The type of reset affects the particular functions that must be performed in the boot sequence, but generally does not change the overall boot process. The reset function typically generates a reset interrupt, which vectors the system to a program in non-volatile memory and begins execution from that point. This program is generally a Basic Input-Output System (BIOS) in flash memory. The BIOS enables basic input-output (IO) control, branches into an option ROM to enable the options that are active in that particular system, and then branches back into the BIOS program to complete initialization and load the OS into main memory from a disk.
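The conventional handoff just described can be sketched as an illustrative trace (all step names are hypothetical, not a real firmware interface); note that no stage checks the integrity of the next:

```python
# Sketch of the conventional, unvalidated boot handoff described above.

def conventional_boot(option_roms):
    trace = ["reset interrupt",
             "vector to BIOS in non-volatile memory",
             "enable basic I/O"]
    for rom in option_roms:            # branch into each active option ROM
        trace.append(f"option ROM: {rom}")
    trace.append("return to BIOS")     # branch back into the BIOS program
    trace.append("complete initialization")
    trace.append("load OS from disk")
    return trace

steps = conventional_boot(["video", "disk controller"])
```

Each stage simply transfers control to the next, so a tampered BIOS or option ROM would execute undetected.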
While most of the hardware in such a system is provided by the computer vendor, the BIOS and option ROM are typically provided by third party vendors, so the computer vendor has limited knowledge of, and control over, the specific contents of these items. In addition, both the BIOS and option ROM are typically reprogrammable while in the computer and therefore subject to tampering after the system has been installed. This presents a security issue, since there is no way to tell if the BIOS or option ROM have been tampered with. Widespread concern over sophisticated hackers and computer viruses makes this problem especially worrisome, as the system may be tampered with in unknown and possibly undetectable ways. Computer vendors want to be able to verify that the bootstrap sequence is the one they want and expect, and that any unauthorized changes that have been made to this sequence are detectable at boot time so the boot sequence can be terminated and the problem investigated. SUMMARY OF THE INVENTION The invention includes a method of booting an operating system that includes initiating a reset function, executing a protected program, validating a BIOS program, and executing the BIOS program. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 shows a schematic of a boot sequence. FIG. 2 shows a firmware hub block partitioning scheme. FIG. 3 shows a schematic of an alternate boot sequence. DETAILED DESCRIPTION OF THE INVENTION The invention supplements the conventional boot sequence by introducing one or more groups of protected instructions into the sequence that are protected from tampering themselves, and that verify the integrity of at least a part of the remaining boot sequence. FIG. 1 shows one embodiment of the system. Block 10 encompasses the non-volatile memory containing instructions and data used in the boot sequence. Firmware hub (FWH) 12 is a non-volatile memory block containing instructions (code) that control and validate the boot sequence.
BIOS 14 is a non-volatile memory block, which can contain a relatively standard BIOS, but modified to interact with FWH 12. When the system boots, system reset vector 16 is invoked, which directs the processor to begin execution at a specific address within firmware hub A (FWH_A) in sub-block 21 of FWH 12. The code of FWH_A locates the first sub-block 23 of the BIOS 14, designated BIOS_A. FWH_A 21 then validates the BIOS and FWH_B 25 to make sure they contain the code that is expected. Validation can take a number of forms, depending on the level of security desired. One embodiment performs a checksum of the BIOS code and compares that checksum to the expected checksum stored in FWH_A. Another embodiment uses digital signatures to increase the protection afforded by this security system. FWH_A can include a table identifying the type of security check to be performed, the BIOS objects on which to perform the security check, and the code for performing it. In addition to validation of the BIOS, code for validating and executing the option ROM can also be included in either FWH 12 or BIOS 14. The option ROM may be included within BIOS 14, or may be in a separate memory block. After validation of the BIOS, control is passed to the code of BIOS_A, located in sub-block 23 of BIOS 14. BIOS_A code is responsible for initializing main memory and determining the type of the CPU reset. The type of CPU reset that initiated the boot sequence may affect the specific functions that are performed during the boot sequence, but does not change the overall boot process. After performing these functions, control is passed to FWH_B, located in sub-block 25 of FWH 12. FWH_B code is responsible for locking down various blocks of flash memory in FWH 12 and/or BIOS 14. Lockdown is a process of stabilizing a block of code by preventing further write access to that code. This feature is dynamically available in the flash memory typically used for FWH 12.
Prior to being locked down by FWH_B, the affected blocks can be updated by the boot code. Subsequent to lockdown, the blocks cannot be further modified without restarting the boot procedure. FWH_B can also load additional security elements into system memory for later use. A limited amount of lockdown may also be performed by FWH_A.

Various blocks of code can be updated before being locked down. This is typically handled by having the updated blocks written into predetermined locations in main memory by the operating system before the system is shut down. When the system is shut down and restarted, the boot program detects these updated modules, validates them, and installs them into the non-volatile memory containing the boot program. After these updated modules are installed, they can be locked down to prevent further changes from being made to them. This process permits necessary updates to be made, but protects the boot sequence from unauthorized tampering after a certain point in the boot sequence has been reached.

After completing these tasks, control is passed to the code of BIOS_B contained in BIOS sub-block 27. The BIOS_B code can also have additional security interfaces available for its use as it continues execution of its standard power-on self-test (POST) sequence. BIOS_B can also branch to an option ROM in a conventional manner. BIOS_B contains the OS loader that begins the process that loads operating system 18 into memory. One purpose of the aforementioned validation procedure is to validate the integrity of this OS loader.

FIG. 2 shows how the contents of the firmware hub may be allocated. In one embodiment, FWH boot block 31 contains FWH_A code that is provided within the FWH at production. This may be a single 64 KB block that is protected by hardware, and cannot be updated by the system.
This code locates BIOS_A, and performs a validation procedure to verify that the BIOS code is the code that is expected.

BIOS startup block 32 provides code that executes within FWH_A but is typically provided by the BIOS vendor. It can include code for interfacing with BIOS_A and BIOS_B, and also for a BIOS recovery function (not shown). Both boot block 31 and BIOS startup block 32 contain code that executes only directly from FWH 12.

Lock-down partition 33 is a set of one or more blocks that are locked down early in the boot process so they will be protected during system operation. Each block can be locked down individually. A Flash Media Manager (FMM) can be used to update data within this partition. However, any access that attempts to write to this partition must do so during the boot sequence before lockdown. Once the blocks in this partition have been locked down, the FWH hardware prevents any writes to these memory locations. These blocks can be unlocked only by resetting the FWH, and this occurs only during a CPU reset. To update these blocks, an update image is placed in memory and a CPU reset is initiated. During the protected boot sequence, this image will be detected and used to update records within the lock-down partition before it has actually been locked down.

Unlocked partition 34 is a set of one or more blocks that can be unlocked and are therefore accessible for writing. These blocks can be of any size that space allows. All data within these blocks can be managed by the FMM.

The number of blocks allocated to each of the functions shown in FIG. 2 can be selected to meet the needs of a particular system type, and would normally be determined by the BIOS vendor during initial design of the FWH and BIOS code.

A more detailed description of one embodiment of the invention follows, describing the four major sections of FIG. 1: FWH_A, BIOS_A, FWH_B, and BIOS_B.

FWH_A

FWH_A code is the system-critical startup code.
It is the most trusted piece of code, and is assumed to be completely trustworthy (i.e., not subject to tampering or unauthorized changes). This code is hardware protected and is typically not programmable in the system. To increase the level of security, the system may use special hardware to protect this code. The device containing this code is generally built into the system's motherboard, and the code is responsible for validating the integrity of the next lower level of code. FWH_A code controls the security of all code within the bootstrap sequence, and may lock down or otherwise protect any lower level code if tampering is detected. Such lockdowns may include, but are not limited to, hardware block locking.

The FWH_A code is responsible for performing a processor mode switch and validating the code within the system. In particular, it must validate the entire BIOS startup block before passing control over to that code. Verification can be done through the use of a checksum or through more complicated verification means. All code within this section should run with limited or no memory resources.

When an update recovery block (reclaim block) is available, the other security software can back up the BIOS startup block before erasing that block and then writing the new BIOS startup block during the update sequence. If power should be lost before the update can complete, the backup is used for updating. The FWH_A code determines which block, the BIOS startup block or the reclaim block, contains the valid image.

The FWH_A can be the first piece of code executed after a processor reset. This code should execute without having to first initialize RAM resources. It is provided in the FWH boot block, which contains trusted code. FWH_A code can perform the following functions:

1) Boot vector-On processor reset, the first instruction is fetched at this location and executed.

2) Lock boot block-The boot block is set to a locked-down state.
Execution of the lockdown code takes place external to the boot block, since flash memory cannot be locked down while executing code from within itself.

3) Switch modes-Switch the CPU mode to a flat 32-bit environment.

4) Validate BIOS startup block-Perform validation through predetermined means, such as a checksum.

5) If validation fails-Either a) issue a warning signal and halt the system, or b) locate the backup BIOS startup block in the reclaim block, validate it, and jump to it if it passes, or issue a warning signal and halt the system if it fails.

6) Jump to BIOS_A-If validation step 4 passed, jump to the BIOS_A code entry point.

BIOS_A

BIOS_A code is generally responsible for system initialization, such as initializing main memory and enabling basic hardware resources. Like FWH_A code, BIOS_A code is typically built into the system motherboard, but unlike FWH_A code, BIOS_A code is typically re-programmable in the system. BIOS_A code can also perform validation of other code modules as needed.

BIOS_A code is the first piece of BIOS code executed. This code is responsible for determining the type of system reset that initiated the boot, and controls the security of current and lower level modules. For example, in the case of a Warm Boot, Cold Boot, or Awake, this code will typically lock down all the devices holding the firmware modules. In the case of an Update Boot, this code will typically perform any firmware updates and then reboot.

BIOS_A code may use interfaces provided to it by the FWH_A module. It can perform validation of the next lower level module and then pass control to that module upon successful validation.

BIOS_A code is responsible for enabling RAM resources and passing control back to FWH_B. BIOS_A code is located in the BIOS startup block and is called by FWH_A code. As previously stated, BIOS_A code can perform these functions:

1) Determine reboot-The processor can be reset for multiple reasons.
These reasons include waking from a power-saving sleep state, partial reboot, warm reboot, cold boot, and others. The boot sequence may be somewhat altered depending on which type of reboot is being executed.

2) Enable memory-Once the type of reboot has been determined, the BIOS_A code can restore the memory controller state (warm boot, wakeup from sleep state), or reinitialize and test the memory (cold boot).

3) Set up FWH_B parameters-The BIOS_A code indicates the execution path to perform based on the type of boot. It can determine the location of other protective software (warm boot), or the location to load the other protective software (cold boot).

4) Jump to FWH_B-After enabling memory, the BIOS_A code can return control back to the FWH boot block by jumping to the entry point of FWH_B.

FWH_B

FWH_B code generally involves system-critical patches and Option ROMs, such as CPU microcode patches and in-system Video BIOS. FWH_B code is typically stored on the motherboard, and is not responsible for its own security, which is provided by a higher-level module such as BIOS_A.

FWH_B code is responsible for initializing any related protective software from other managed blocks, for updating any blocks that need to be updated during the boot sequence, locking down those blocks, and passing control to BIOS_B. For example, this code might perform the following functions:

1) Initialize non-volatile storage-This code determines the total flash memory on the platform and initializes any associated registers.

2) Branch, based on type of boot-Based on the type of boot determined in BIOS_A, the code can branch to one of the following step 3's: Load OS, Return to OS, or Update.

3) Load OS-BIOS_A code indicated that the BIOS is reloading.
The interface for other associated protective software should be loaded at the location indicated by BIOS_A.

4) Initialize stack-Memory resources are available; therefore the stack should be initialized for use.

5) Load Flash Media Manager (FMM) to memory-The FMM should be copied from the boot block to a location in memory based on the load value specified by BIOS_A.

6) Perform restore if needed-At this point, memory resources are available, allowing restoration of a failed BIOS startup block update to occur. Calling the FMM's restore function can do this.

7) Lock down BIOS startup block-The BIOS startup block should be locked down.

8) Initialize FMM-Initializes the FMM, both the locked and unlocked partitions, and allows any power-loss recovery to be performed.

9) Load related protective software-Other protective code can be loaded at this point, using the interface loaded in step 3.

10) On failure jump to BIOS Recovery-In the event that the FMM locked partition fails to initialize or the related protective software is not located, control can be passed to BIOS Recovery code.

11) Lock down blocks-Lock down all blocks within the lock-down partition.

12) Jump to BIOS_B-Pass control to the BIOS_B loader within FWH.

3) Return to OS-BIOS_A code indicated that the BIOS is returning to the OS, such as from a sleep state.

4) Lock down blocks-Using no memory resources, all blocks within the lock-down partition and the BIOS startup block are locked down.

5) Switch to Real Mode-Before turning on the BIOS image in shadowed memory, the processor is returned to Real Mode.

6) Jump to BIOS compatibility boot vector-Return control back to the BIOS image shadowed in memory.

3) Update-BIOS_A code indicated that an update to the lock-down partition is occurring and that the trusted update application should be executed.

4) Initialize stack-Locate and set up a stack location.

5) Validate related protective code-Any related protective code must be validated to assure that the security software is itself
secure and valid.

6) Validate and load update application-The update application software is validated and loaded into memory.

7) Execute update application-Pass control to the update application. This application locates, checks, and loads the update images.

8) Perform cold boot-Initiate a full reboot.

BIOS_B

BIOS_B code generally involves the main BIOS, Option ROMs, and the operating system (OS) loader. The main BIOS is typically stored on the motherboard and is updateable. It provides all software interfaces and performs additional hardware detection and initialization. It can also be used to validate the integrity of the BIOS_B Option ROMs and OS loader, typically by using checksums. The BIOS_B Option ROMs are typically stored on daughter cards and manage the initialization of the add-in hardware devices. These Option ROMs have no internal provisions for security or understanding of the described higher layers. However, they may have their own proprietary update mechanisms.

The BIOS_B code is typically responsible for loading the standard BIOS, and can therefore be referred to as a BIOS loader. The actual order of events is typically left to the BIOS vendor to determine, but can be the following steps:

1) Load BIOS to memory-Once loaded into memory, BIOS_B code can decompress the BIOS code.

2) Initialize video-Outputting video to the screen as soon as possible is usually desirable.

3) Perform a full memory test-Only a portion of low memory might be initialized during the BIOS_A portion of the boot flow. This step can test and initialize the remaining memory.

4) Initialize remainder of the system.

5) Relocate related protective code-This code is also typically located at the high end of memory, so this step moves it to just below the SMM code of the previous step. Related tables can be located elsewhere.

6) POST-Complete the Power-On Self-Test.

In another embodiment, shown in FIG. 3, FWH boot block 43 contains BIOS_A code in sub-block 44 and FWH_B code in sub-block 41.
BIOS_A code may be provided by the BIOS creator, while FWH_B code may be provided by the BIOS creator or by a third party. BIOS_A code and FWH_B code are stored together in boot block 43 at the time of BIOS programming. BIOS_B code is stored in another portion of the non-volatile device 40 and may not be protected. The code stored in boot block 43 may be protected from modification during runtime by hardware and can be updated only through BIOS_A code. Thus boot block 43 is protected from unauthorized tampering while the system is running.

The system's reset vector 42 causes execution to start at a predetermined address in BIOS_A. BIOS_A code is responsible for starting the system, initializing memory, and locating FWH_B code. FWH_B code is responsible for locating and validating all or a portion of BIOS_B code to ensure it is the code that is expected. FWH_B code subsequently passes control to BIOS_B code, which continues the initialization of the system and loads operating system 46.

The foregoing description is intended to be illustrative and not limiting. Other variations will occur to those of skill in the art. Such variations are encompassed by the invention, which is limited only by the scope and spirit of the appended claims.
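The validate-or-fall-back behavior described for FWH_A (steps 4 through 6) can be sketched in a few lines. This is an illustrative model only: the function names, the choice of SHA-256 as the checksum, and the returned action strings are assumptions made for demonstration, since the specification deliberately leaves the validation means open (simple checksum, digital signature, etc.).

```python
import hashlib

# Hypothetical "expected checksum" that FWH_A would carry in its protected block.
EXPECTED_DIGEST = hashlib.sha256(b"BIOS_A image bytes").hexdigest()

def validate_block(image: bytes, expected_digest: str) -> bool:
    """Hash the candidate block and compare it to the value stored in FWH_A."""
    return hashlib.sha256(image).hexdigest() == expected_digest

def fwh_a_boot(startup_block: bytes, reclaim_block: bytes) -> str:
    """Mirror of FWH_A steps 4-6: validate the BIOS startup block, fall back
    to the backup image in the reclaim block, or issue a warning and halt."""
    if validate_block(startup_block, EXPECTED_DIGEST):
        return "jump to BIOS_A"          # step 6: validation passed
    if validate_block(reclaim_block, EXPECTED_DIGEST):
        return "jump to backup BIOS_A"   # step 5b: backup image is valid
    return "warn and halt"               # step 5a: no trusted image found
```

The same structure accommodates the digital-signature embodiment by swapping the body of `validate_block` for a signature check, without changing the fallback logic.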
A fill pattern for a semiconductor device such as a memory cell. The memory cell includes a plurality of first topographic structures comprising conductive lead lines deposited on a semiconductor substrate, and a plurality of second topographic structures comprising fill patterns such that the top surfaces of the second topographic structures are generally coplanar with the top surfaces of the plurality of first topographic structures. The plurality of first and second topographic structures are arranged in a generally repeating array on the substrate. A planarization layer is deposited on top of the substrate such that it fills the space between the plurality of first and second topographic structures, with its top surface generally coplanar with the top surfaces of the first and second topographic structures.
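The geometric rules the claims recite, namely that every interpeak space has substantially the same width and that no straight run of space is longer than the largest fill pattern, can be checked on a simplified model. The sketch below is hypothetical and not part of the disclosure; its data model (fill patterns as rectangles, spaces as length/width runs) is an assumption chosen for illustration.

```python
# Hypothetical model: fill patterns and lead lines reduced to rectangles of
# (x, y, width, height), and each straight run of interpeak space reduced to
# a (length, width) pair.
def max_dimension(rects):
    """Longest lateral dimension among the topographic rectangles."""
    return max(max(w, h) for (_x, _y, w, h) in rects)

def check_layout(fill_rects, space_runs):
    """Return True when both claimed grid rules hold:
    1) every interpeak space has substantially the same width, and
    2) no straight run of space exceeds the largest fill-pattern dimension."""
    widths = {w for (_length, w) in space_runs}
    equal_widths = len(widths) == 1
    longest_fill = max_dimension(fill_rects)
    no_long_runs = all(length <= longest_fill for (length, _w) in space_runs)
    return equal_widths and no_long_runs
```

A layout violating either rule (a wide stray channel, or an uninterrupted run longer than any fill pattern) is exactly the kind of topography that leads to uneven planarization, which is what the claimed grid constraint is meant to prevent.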
CLAIMS

1. A method of fabricating a semiconductor wafer, comprising: providing a generally planar semiconductor wafer substrate such that said substrate is defined by substantially orthogonal first and second in-plane dimensions; defining a topographic layer of conductive lead line material such that said topographic layer projects onto said substrate to occupy at least a portion of said substantially orthogonal first and second in-plane dimensions; depositing at least one said topographic layer of conductive lead line material on said substrate; depositing a plurality of topographic fill patterns adjacent either said topographic layer of conductive lead line material or another of said plurality of topographic fill patterns such that spaces defined therebetween possess a width substantially equal to that of any other space; arranging said plurality of topographic fill patterns and said at least one said topographic layer of conductive lead line material so that a grid defined by a plurality of crossings of said spaces contains no linear dimension longer than the longest dimension of any one of said plurality of topographic fill patterns, and that no intersection defined by any of said plurality of crossings includes uninterrupted linear dimensions; and depositing a planarization layer over said substrate such that it is disposed at least within said grid and laterally surrounds said at least one topographic layer of conductive lead line material and said plurality of topographic fill patterns.

2. A method according to claim 1, wherein said step of depositing a planarization layer includes depositing a layer of spin-on glass.

3. A method according to claim 1, wherein said step of depositing a planarization layer includes directly applying TEOS by chemical vapor deposition.

4.
A method according to claim 1, whereupon deposition of said planarization layer produces a top surface of said layer substantially co-planar with a top surface of said topographic layer of conductive lead line material and said plurality of topographic fill patterns.

5. A method according to claim 1, comprising the additional step of defining an array comprising at least one of said plurality of topographic fill patterns and topographic layers such that no portion of any of said plurality of topographic layers overhangs a boundary of said array.

6. A method according to claim 5, wherein the additional step of defining said array further includes defining said array boundary mostly with straight edges of said plurality of topographic fill patterns.

7. A semiconductor device comprising: a substrate; first topographic patterns deposited over said substrate; second topographic patterns deposited over said substrate, where said first and second topographic patterns define active lead lines and dummy fills, respectively; an array over said substrate, said array comprising a plurality of valleys circumscribing said first and second topographic patterns, said array configured such that: the periphery of said array is substantially bounded by straight edges of said plurality of dummy fills, said active lead lines, or a combination of both; and no portion of any of said plurality of dummy fills extends laterally beyond said periphery; a grid disposed within said array such that: the longest linear dimension of each of said plurality of valleys making up said grid is no longer than the longest lateral dimension of any of said dummy fills; and no intersection defined by a crossing between any two of said plurality of valleys includes uninterrupted linear dimensions; and a substantially planar layer of insulative material deposited over said plurality of valleys, said planar layer having a thickness sufficient to render a top surface thereof substantially
co-planar with a top surface of said first and second topographic patterns.

8. A semiconductor device comprising: a substrate with a plurality of peaks and valleys, where said plurality of peaks are defined by at least one topographic conductive line spaced apart from a plurality of topographic dummy patterns, and said plurality of valleys are defined by interpeak spaces; a repeating array defined by at least a portion of said plurality of peaks and valleys, wherein: the periphery of said array is substantially bounded by straight edges of said plurality of topographic dummy patterns; and no portion of any of said plurality of topographic dummy patterns within said array extends laterally beyond said periphery of said array; a substantially planar grid disposed within said array, said substantially planar grid made up of said interpeak spaces that extend in substantially orthogonal directions from one another within said substantially planar grid such that: the longest linear dimension of each of said interpeak spaces making up said grid is no longer than the longest lateral dimension of any of said dummy patterns; and no intersection defined by a crossing between any two of said interpeak spaces includes uninterrupted linear dimensions; and a substantially planar layer of insulative material deposited over said valleys, said planar layer having a thickness selected to render a top surface thereof substantially co-planar with a top surface of said peaks.

9. A semiconductor device according to claim 8, wherein a lateral dimension defining a width of any one of said interpeak spaces is substantially the same as the width of all other said interpeak spaces.

10. A semiconductor device according to claim 9, wherein said insulative material is an oxide-based ceramic.

11.
A semiconductor device comprising: a substantially planar substrate; a plurality of first topographic structures comprising conductive lead lines deposited over said substantially planar substrate, said plurality of first topographic structures including a top surface; a plurality of second topographic structures comprising fill patterns with top surfaces thereon, said top surfaces of said plurality of second topographic structures generally co-planar with said top surfaces of said plurality of first topographic structures; at least one geometrically simple array comprising at least a portion of said plurality of first and second topographic structures such that: the periphery of said at least one geometrically simple array is substantially bounded by straight edges of said plurality of second topographic structures; and no portion of said plurality of said second topographic structures within said array extends laterally beyond said periphery; a gridded valley disposed within said array, said gridded valley comprising an interconnected series of spaces between adjacent ones of said first and second topographic structures, such that: the width of each of said interconnected series of spaces is substantially equal; the longest linear dimension of each of said series of spaces is no longer than the longest dimension of any of said second topographic structures; and no intersection defined by a crossing between any two of said interconnected series of spaces includes uninterrupted linear dimensions; and a planarization layer deposited over said substantially planar substrate such that it is disposed at least within said gridded valley and laterally surrounds said plurality of first and second topographic structures.

12.
A memory cell comprising: a substantially planar semiconductor substrate; a switching device disposed over said semiconductor substrate; a charge storage device in electrical communication with said switching device; a plurality of topographic structures comprising: at least one first topographic structure including conductive lead lines deposited over said semiconductor substrate and in electrical communication with said switching device, said at least one first topographic structure including a top surface; and a plurality of second topographic structures with top surfaces thereon, said top surfaces of said second topographic structures generally co-planar with said top surfaces of said at least one first topographic structure; at least one geometrically simple array comprising at least a portion of said plurality of first and second topographic structures such that: the periphery of said at least one geometrically simple array is substantially bounded by straight edges of said plurality of second topographic structures; and no portion of said plurality of second topographic structures within said array extends laterally beyond said periphery; a gridded valley disposed within said array and including an interconnected series of spaces between adjacent topographic structures, wherein: a lateral distance defining a width of any one of said series of spaces is substantially equal to that of another of said series of spaces within said gridded valley; the longest linear dimension of each of said series of spaces is no longer than the longest dimension of any of said second topographic structures; and no intersection defined by a crossing between any two of said interconnected series of spaces includes uninterrupted linear dimensions; and a planarization layer deposited over said substrate such that it is disposed at least within said gridded valley and laterally surrounds said plurality of topographic structures.

13.
A memory cell according to claim 12, wherein said width of each of said interconnected series of spaces is between 0.25 and 0.5 micron.

14. A memory cell according to claim 12, wherein an arrangement of said plurality of second topographic structures defines a first orthogonal in-plane dimension and a second orthogonal in-plane dimension.

15. A memory cell according to claim 14, wherein at least one of said fill patterns overlaps with at least one adjacent fill pattern along at least one of said first and second in-plane dimensions.

16. A memory cell according to claim 12, wherein said planarization layer comprises TEOS.

17. A memory cell according to claim 12, wherein said planarization layer comprises spin-on glass.

18. A memory cell according to claim 12, wherein said fill pattern is T-shaped.

19. A memory cell according to claim 18, further comprising a second set of said fill patterns disposed between said T-shaped fill patterns.

20. A memory cell according to claim 19, wherein said second set of said fill patterns are square-shaped.

21. A memory cell according to claim 12, wherein said fill patterns are made of the same material as said conductive lead lines.

22. A memory cell according to claim 12, wherein a first set of said interconnected series of spaces extends in a first orthogonal in-plane dimension, while a second set of said interconnected series of spaces extends in a second orthogonal in-plane dimension.

23.
A memory cell comprising: a substrate with a plurality of peaks and valleys, where said peaks are defined by at least one topographic conductive line spaced apart from a plurality of topographic dummy patterns, and said valleys are defined by interpeak spaces that are formed between said peaks; a switching device disposed over said substrate; a charge storage device in electrical communication with said switching device; a repeating array defined by at least a portion of said plurality of peaks and valleys, wherein: the periphery of said array is substantially bounded by straight edges of said plurality of dummy patterns; and no portion of any of said plurality of said dummy patterns within said array extends laterally beyond said periphery of said array; a grid disposed within said array, said grid defined by said interpeak spaces such that the longest linear dimension of each of said valleys is no longer than the longest lateral dimension of any of said dummy patterns, and no intersection defined by a crossing between any two of said interpeak spaces includes uninterrupted linear dimensions; and a substantially planar layer of insulative material deposited over said valleys, said planar layer having a thickness selected to render a top surface of said substantially planar layer substantially co-planar with a top surface of said peaks.

24.
A memory cell comprising: a substantially planar semiconductor substrate defining first and second orthogonal in-plane dimensions; a switching device disposed over said semiconductor substrate; a charge storage device in electrical communication with said switching device; a plurality of first topographic structures comprising conductive lead lines deposited over said semiconductor substrate and in electrical communication with said switching device, said topographic structures including a top surface; a plurality of second topographic structures with top surfaces thereon, said plurality of second topographic structures comprising the same material as said plurality of first topographic structures and defining first and second in-plane dimensions, and at least one of said fill patterns overlaps with at least one adjacent fill pattern along at least one of said first and second in-plane dimensions, wherein at least a portion of said second topographic structures are T-shaped, said top surfaces of said second topographic structures generally co-planar with said top surfaces of said plurality of first topographic structures; at least one geometrically simple array comprising at least a portion of said plurality of first and second topographic structures arranged over said semiconductor substrate such that: the periphery of said at least one geometrically simple array is substantially bounded by straight edges of said plurality of second topographic structures; and no portion of said plurality of second topographic structures within said array extends laterally beyond said periphery; a gridded valley disposed within said array, said gridded valley comprising a first set of interconnected series of spaces that extend in said first orthogonal in-plane dimension, and a second set of said interconnected series of spaces that extend in said second orthogonal in-plane dimension such that: said first and second set of interconnected series of
spaces between adjacent ones of said first and second topographic structures define a width of any one of said interconnected series of spaces between 0.25 and 0.5 micron; the longest linear dimension of each of said interconnected series of spaces is no longer than the longest dimension of any of said second topographic structures; and no intersection defined by a crossing between any two of said interconnected series of spaces includes uninterrupted linear dimensions; and a TEOS planarization layer deposited over said substrate such that it is disposed at least within said gridded valley and laterally surrounds said plurality of first and second topographic structures.

25. A reticle used to make memory cells, said reticle comprising: at least one generally planar surface; a plurality of lead line cutouts in said surface; and a plurality of fill pattern cutouts in said surface, said plurality of fill pattern cutouts interspersed between said plurality of lead line cutouts, and spaced apart from each of said plurality of lead line cutouts by an amount sufficient to avoid capacitive communication between a metal lead line and a metal fill pattern formed on a memory cell by said reticle, wherein said plurality of lead line and fill pattern cutouts are disposed in an array within a surface of said reticle such that: the periphery of said array is substantially bounded by straight edges; and no portion of any of said plurality of fill pattern cutouts within said array extends laterally beyond said periphery; a grid defined by at least a portion of said surface, said grid comprising an interconnected series of spaces between each adjacent said plurality of lead line and fill pattern cutouts such that: a lateral distance defining the width of any one of said series of spaces is substantially equal to that of any other of said series of spaces within said grid; the longest linear dimension between each of said series of spaces is no longer than the
longest dimension of any of said plurality of fill pattern cutouts; and no intersection defined by a crossing between any two of said interconnected series of spaces includes uninterrupted linear dimensions.

26. A reticle according to claim 25, wherein at least a portion of said fill pattern cutouts are T-shaped.

27. A reticle according to claim 25, wherein at least one of said plurality of fill pattern cutouts further defines a first in-plane dimension and a second in-plane dimension substantially orthogonal to said first in-plane dimension such that at least one of said plurality of fill pattern cutouts overlaps with at least one adjacent fill pattern cutout along at least one of said first or second in-plane dimensions.

28. A reticle according to claim 25, wherein a lateral dimension defining a width of any one of said interconnected series of spaces is substantially the same as that of all other said series of spaces.

29. A semiconductor fabrication system comprising: a photoresist application mechanism to deposit photoresist onto a semiconductor substrate; an electromagnetic radiation source to illuminate at least a portion of said photoresist; a solvent dispensing mechanism to wash away unexposed photoresist; an etching mechanism to selectively remove at least one layer of insulative coating; and a reticle with a generally planar body that occupies first and second substantially orthogonal dimensions; said reticle comprising: a first segment of said generally planar body defined by a plurality of cutouts therethrough, said cutouts adapted to define topographic peaks on a semiconductor, where said cutouts are shaped to further define at least one lead line and a plurality of dummy patterns spaced apart from one another; a second segment of said generally planar body comprising the remainder thereof such that a pattern formed by said remainder extends in said first and second substantially orthogonal dimensions, said remainder adapted to
define a plurality of interpeak valleys on said semiconductor; a geometrically simple array defined by said plurality of cutouts, wherein: the periphery of said array is substantially bounded by straight edges of said first segment; and no portion of any of said plurality of said dummy patterns within said first segment extends laterally beyond said periphery of said array; and a grid defined by at least a part of said second segment such that: the longest linear dimension in the portion of said second segment bounded by said periphery is no longer than the longest linear dimension of any part of said first segment; and no intersection formed in said second segment includes uninterrupted linear dimensions. 30. A motherboard assembly including: a generally planar board; a mount for a microprocessor, said mount secured to said generally planar board; a mount for a plurality of memory devices, said mount secured to said generally planar board; a mount for a plurality of controller sets, said mount secured to said generally planar board; a plurality of interconnect devices to provide electrical communication between said motherboard and various input, output and memory devices; and at least one semiconductor device from the group consisting of said microprocessors, memory devices and controllers, said at least one semiconductor device mounted to said generally planar board and including: a substrate with a plurality of peaks and valleys, where said peaks are defined by at least one topographic conductive line spaced apart from at least one topographic dummy pattern, and said valleys are defined by interpeak spaces; an array defined by an arrangement of at least a portion of said peaks and valleys such that the periphery of said array is substantially bounded by dummy pattern straight edges, and that no portion of any of said plurality of said second topographic structures within said array extends laterally beyond said periphery; a grid
disposed within said array such that the longest dimension of each of said interpeak spaces is no longer than the longest dimension of any of said fill patterns, and that no intersection defined by a crossing between any two of said interpeak spaces includes uninterrupted linear dimensions; and a planar layer of insulative material deposited in said valleys and of such thickness as to render a top surface of said planar layer substantially co-planar with a top surface of said peaks. 31. A computer system incorporating a memory cell, said computer system comprising: a microprocessor; at least one input electrically coupled to said microprocessor; a mass storage unit electrically coupled to said microprocessor; an output electrically coupled to said microprocessor; at least one memory device electrically coupled to said microprocessor, said at least one memory device adapted to store computer programs for use by said microprocessor, wherein said at least one memory device is defined by: a semiconductor substrate; a switching device disposed on said semiconductor substrate; a charge storage device in electrical communication with said switching device; a plurality of first topographic structures comprising conductive lead lines deposited on said semiconductor substrate and in electrical communication with said switching device, said plurality of first topographic structures including a top surface; a plurality of second topographic structures comprising fill patterns with top surfaces thereon, said top surfaces of said second topographic structures generally co-planar with said top surfaces of said plurality of first topographic structures; at least one geometrically simple array comprising at least a portion of said plurality of first and second topographic structures arranged such that: the periphery of said array is substantially bounded by straight edges of said plurality of second topographic structures; and no portion of any of said
plurality of said second topographic structures within said array extends laterally beyond said periphery; a gridded valley with an interconnected series of spaces, said gridded valley disposed within said array, said interconnected series of spaces disposed between each adjacent said first and second topographic structures such that: a lateral distance defining a width of any one of said series of spaces is substantially equal to that of any other of said series of spaces within said gridded valley; each of said series of spaces contain no linear dimension longer than the longest dimension of any of said fill patterns; and no intersection defined by a crossing between any two of said series of spaces includes uninterrupted linear dimensions; and a planarization layer deposited on top of said substrate such that it is disposed at least within said gridded valley and laterally surrounds said plurality of first and second topographic structures. 32. A computer system according to claim 31, wherein said width of each of said series of spaces is between 0.25 and 0.5 micron. 33. A computer system according to claim 31, wherein an arrangement of each of said fill patterns define a first orthogonal in-plane dimension and a second orthogonal in-plane dimension. 34. A computer system according to claim 33, wherein said fill patterns overlap with at least one adjacent said fill pattern along at least one of said first and second in-plane dimensions. 35. A computer system according to claim 31, wherein a first set of said interconnected series of spaces extends in substantially said first in-plane dimension, while a second set of said interconnected series of spaces extends in substantially said second in-plane dimension. 36. A computer system according to claim 31, wherein said planarization layer comprises TEOS. 37. A computer system according to claim 31, wherein said planarization layer comprises spin-on glass. 38.
A computer system according to claim 31, wherein at least a portion of said fill pattern is T-shaped. 39. A computer system according to claim 31, wherein said fill patterns are made of the same material as said conductive lead lines. 40. A method for fabricating a reticle, said method comprising: producing a plurality of lead line cutouts in a reticle body; producing a plurality of fill pattern cutouts interspersed between said plurality of lead line cutouts, and spaced apart from each of said plurality of lead line cutouts by an amount sufficient to avoid capacitive communication between a metal lead line and a metal fill pattern formed on a memory cell by said reticle, wherein said plurality of lead line and fill pattern cutouts are disposed in an array within a surface of said reticle such that: the periphery of said array is substantially bounded by straight edges; and no portion of any of said plurality of fill pattern cutouts within said array extends laterally beyond said periphery; and forming a grid comprising an interconnected series of spaces between each adjacent said plurality of lead line and fill pattern cutouts, where a lateral distance defining a width of any one of said series of spaces is substantially equal to that of any other of said series of spaces within said grid, such that: the longest linear dimension between each of said series of spaces is no longer than the longest dimension of any of said plurality of fill pattern cutouts; and no intersection defined by a crossing between any two of said interconnected series of spaces includes uninterrupted linear dimensions. 41. A method according to claim 40, wherein at least a portion of said fill pattern cutouts are T-shaped. 42. A method according to claim 40, wherein at least one of said plurality of fill pattern cutouts overlaps with at least one adjacent fill pattern cutout.
FILL PATTERN GENERATION FOR SPIN-ON GLASS AND RELATED SELF-PLANARIZATION DEPOSITION The present invention relates generally to improved fill patterns for semiconductor devices, and more particularly to geometrically simple arrays of fill patterns interspersed among conductive elements to promote the formation of an insulating planarization layer. The deposition of numerous layers is one of the key steps in the fabrication of semiconductor devices, where typically alternating patterns of conductive and nonconductive materials are topographically formed on a semiconductor substrate. In a typical photolithographic process, a patterned reticle is employed to provide masking of selected sections of a resist layer on both the semiconductor substrate and subsequent layers, repeated through numerous steps to build a three-dimensional network of connectors. However, the addition of multiple layers causes the topographic projection to become more and more nonplanar; these surface undulations can lead to a loss of resolution in the lithographic masking process. It is therefore highly desirable from a process and quality control perspective to have as little surface undulation as possible on the built-up semiconductor device. One way to minimize the surface undulation is to planarize each exposed surface with one or more insulative layers using known procedures, such as spin-on glass (SOG) or chemical vapor deposition (CVD) methods. One commonly used material in this CVD process is tetraethylorthosilicate (TEOS). When either of these approaches is used to deposit a layer over large tracts of non-built-up area, it tends to produce tapered layer thickness variations near the topographic regions, in a manner similar to that of a meniscus formed near a container wall due to surface tension in a liquid.
To achieve the desired level of planarization, it is precisely this conformal behavior, prevalent in wide-open areas, that substrate designers have been trying to avoid. Similarly, when the spacing width between the rigid upstanding structures varies, the aforementioned layer fill techniques are less than wholly effective at achieving the desired planarization, as spaces of varying size permit disparate amounts of SOG or TEOS to flow into them, and at different rates. Additional methods have been employed to improve the planarity of insulative layers. One well-known approach involves the placement of "dummy" or fill patterns in between the topographic conductive elements to reduce the incidence of conformal dips in the insulative layer. These fill patterns, by interrupting otherwise large tracts of unsupported fill area, subdivide the surface and create smaller valley- or grid-like regions for SOG or TEOS layers to fill. However, the addition of fill patterns adds complexity, as additional steps must be included to ensure their mechanical and electrical compatibility. For example, since many fill patterns are metal (often deposited simultaneously with the conductive element steps), they can be a source of unwanted conductivity or capacitance. Similarly, a lack of uniformity of spacing between the patterns making up the fill pattern array hampers the even distribution of the layers. Relatively non-uniform spacing between adjacent topographic structures also militates against lower processing costs, and these considerations dictate that fill patterns and the arrays made therefrom be as simple as possible.
The cost of depositing customized, non-uniform fill patterns can have a significant impact on fabrication cost; on the other hand, improper attention to a grid or valley layout between fill patterns can lead to spaces that, if inclusive of long straight paths and high-throughflow intersections, will exhibit uneven planarization layer flow and subsequent undulated layer deposition. Accordingly, fill pattern size and spacing become critical design considerations for the person responsible for the circuit layout. The need therefore exists for devices in which fill patterns yield a planarization layer that is consistently and substantially planar across the entire upper surface of the semiconductor device, providing inexpensive, compact and reliable structures. The present invention satisfies the aforementioned need by providing a planarized semiconductor device and a system which utilizes a reticle configuration that promotes the formation of a planarized landscape on the surface of a semiconductor device. The various layers, regions, and structures of the embodiments of the device according to the present invention may be formed by utilizing conventional semiconductor device fabrication techniques. The selection of these specific techniques may vary from application to application and, with the exception of the fabrication steps outlined herein, is not the subject of the present invention.
According to an aspect of the present invention, a method of fabricating a semiconductor device is disclosed, where the steps include: providing a generally planar semiconductor wafer substrate made up of substantially orthogonal first and second in-plane dimensions; defining a topographic layer of conductive lead line material such that it comprises at least first and second sides that extend coplanar with the wafer substrate; depositing one or more topographic layers of conductive lead line material on the substrate; depositing a plurality of topographic fill patterns adjacent either the conductive lead line material or another fill pattern such that the spaces defined between the topographic structures possess substantially equal width to any other space; and arranging the topographic fill patterns and the topographic layers of conductive lead line material so that a grid defined by a plurality of crossings of the spaces contains no linear dimension longer than the longest dimension of any one of the topographic fill patterns, and that no intersection defined by any of the plurality of crossings includes uninterrupted linear dimensions. An additional step includes depositing a planarization layer over the substrate such that it fills up the grid pattern, laterally surrounding the topographic structures of conductive lead line material and fill patterns. Optionally, the step of depositing the insulative layer includes depositing either a layer of spin-on glass or TEOS. In addition, the deposition of the insulative layer produces a top surface substantially co-planar with a top surface of the layers of conductive lead line material and the fill patterns. An additional step may include defining an array comprising at least one of the fill patterns and conductive lead line layers such that no portion of any of the fill patterns overhangs the array boundary.
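The run-length rule just described — no space in the grid may extend straight for longer than the longest dimension of any fill pattern — can be checked mechanically on a rasterized layout. The sketch below is illustrative only: the 0/1 grid encoding, the cell units, the toy layout, and the helper names are assumptions for this example, not part of the disclosed method.

```python
# Illustrative check of the run-length rule on a rasterized layout.
# 1 = topographic structure (lead line or fill pattern); 0 = space (valley)
# to be filled by SOG or TEOS. Encoding and layout are assumptions.

def longest_run(cells):
    """Longest consecutive run of 0 (space) cells in a 1-D sequence."""
    best = run = 0
    for c in cells:
        run = run + 1 if c == 0 else 0
        best = max(best, run)
    return best

def max_space_run(layout):
    """Longest straight space run in either in-plane dimension."""
    rows = [longest_run(row) for row in layout]
    cols = [longest_run(col) for col in zip(*layout)]
    return max(rows + cols)

# A toy layout with small fill shapes separated by narrow spaces.
layout = [
    [1, 1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [1, 0, 1, 1, 1, 0, 1],
]

longest_fill_dimension = 3  # longest side of any fill pattern, in cells
print(max_space_run(layout) <= longest_fill_dimension)  # True: rule satisfied
```

A layout generator could run such a check after each fill-pattern placement and subdivide any space whose straight run exceeds the limit.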
The array can be thought of as containing numerous topographic structures repeated in a fairly regular geometric pattern such that it takes on a relatively uniform appearance. One way to achieve a regular geometric pattern is to have the periphery of the array be mostly bounded by the straight-edged sides of the fill patterns. According to another aspect of the present invention, a semiconductor is disclosed. The semiconductor includes a substantially planar substrate with first and second topographic patterns, or structures, defined by active lead lines and dummy fills (both also referred to as peaks), respectively deposited on the substrate. A repeating array, which itself includes a substantially planar grid comprising a plurality of interconnected valleys circumscribing the first and second topographic patterns, is disposed over the substrate, and is configured such that the array periphery is substantially bounded by straight edges of the dummy fills, active lead lines, or a combination of both. Furthermore, no portion of any of the dummy fills extends laterally beyond the periphery. Within the grid, the longest linear dimension of each of the valleys is no longer than the longest lateral dimension of any of the dummy fills, and no intersection defined by a crossing between any two valleys includes uninterrupted linear dimensions. In the alternative, a plurality of first and second topographic structures is deposited over the planar substrate, where the first are conductive lead lines and the second are fill/dummy patterns, both including top surfaces thereon that are generally co-planar with one another. In addition, a planarization layer is deposited over the substantially planar substrate such that it is disposed at least within the gridded valley and laterally surrounds the first and second topographic structures.
Optionally, the semiconductor may further include a substantially planar layer of insulative material deposited over the valleys, with a thickness selected to render a top surface of the substantially planar layer substantially co-planar with a top surface of the peaks. In addition, the semiconductor device further includes a lateral dimension defining a width of any one of the interpeak spaces such that it is substantially as wide as all other interpeak spaces. This ensures a relatively constant spacing between adjacent peaks, whether the peaks be topographic conductive lead lines or topographic dummy patterns. Additionally, the insulative material on the semiconductor is an oxide-based ceramic. In still another aspect of the present invention, a memory cell is disclosed. The device includes, in addition to the semiconductor configuration of the previous embodiment, a switching device (such as a transistor) and a charge storage device (such as a capacitor) in electrical communication with the switching device. The substrate defines first and second orthogonal in-plane dimensions. The first topographic structures are made up of conductive lead lines in electrical communication with the switching device. The second topographic structures include a top surface generally co-planar with the top surfaces of the first topographic structures. The gridded valley is made up of a first set of interconnected series of spaces that extend in the first orthogonal in-plane dimension, and a second set of interconnected series of spaces that extend in the second orthogonal in-plane dimension. Optionally, the memory cell includes a width of each of the interconnected series of spaces that is between 0.25 and 0.5 micron, and the second topographic structures define first and second in-plane dimensions extending in first and second orthogonal in-plane dimensions.
At least one of the fill patterns may overlap with at least one adjacent fill pattern along at least one of the first and second in-plane dimensions. Also, the second topographic structures may be any of a variety of geometric shapes. Additionally, the first and second topographic structures may be made of the same material. In still another aspect of the invention, a reticle used to make a memory cell is disclosed. The reticle comprises a surface into which a plurality of lead line cutouts and a plurality of fill pattern cutouts are made. The cutouts are adapted to define topographic peaks on the surface of a semiconductor, where the lead line cutouts are shaped to further define at least one lead line, and the fill pattern cutouts define a plurality of dummy patterns spaced apart from one another. The fill pattern cutouts are interspersed between the lead line cutouts, and are spaced apart from each of the lead line cutouts by an amount sufficient to avoid capacitive communication between a metal lead line and a metal fill pattern formed on a memory cell by the reticle. The lead line and fill pattern cutouts are disposed in an array within a surface of the reticle such that the periphery of the array is substantially bounded by straight edges, and that no portion of any of the fill pattern cutouts within the array extends laterally beyond the periphery. A grid, which is the part of the reticle surface remaining after the fill pattern and lead line cutouts have been created, includes an interconnected series of spaces between adjacent cutouts. A lateral distance defining a width of any one of the series of spaces is substantially equal to that of any other of the series of spaces within the grid, while the longest linear dimension between each of the series of spaces is no longer than the longest dimension of any of the fill pattern cutouts.
Furthermore, no intersection defined by a crossing between any two of the interconnected series of spaces includes uninterrupted linear dimensions. Optionally, the fill pattern cutouts are any of a variety of geometric shapes. In addition, at least one of the fill pattern cutouts further defines a first in-plane dimension and a second in-plane dimension substantially orthogonal to the first in-plane dimension such that at least one of the fill pattern cutouts overlaps with at least one adjacent fill pattern cutout along at least one of the first or second in-plane dimensions. Also, a lateral dimension defining a width of any one of the interconnected series of spaces is substantially the same as that of all other of the series of spaces. In yet another aspect of the invention, a semiconductor fabrication system is disclosed. The semiconductor fabrication system includes: a photoresist application mechanism to deposit photoresist onto a semiconductor substrate; an electromagnetic radiation source to illuminate at least a portion of the photoresist; a solvent dispensing mechanism to wash away unexposed photoresist; an etching mechanism to selectively remove at least one layer of insulative coating; and a reticle with a generally planar body similar to that of the previous embodiment. In yet another aspect of the present invention, a motherboard assembly employing memory cells is disclosed. The motherboard includes a generally planar board, a plurality of interconnect devices to provide electrical communication between the motherboard and various input, output and memory devices, and mounts for a microprocessor, a plurality of memory devices and a plurality of controller sets, all of which are mounted to the generally planar board. The motherboard also includes at least one semiconductor mounted to the generally planar board, where the semiconductor is from the group consisting of the microprocessors, memory devices and controllers.
The semiconductor is similar to that of the previously discussed embodiments. In yet another aspect of the present invention, a computer system employing memory cells is disclosed. The computer system includes a microprocessor, at least one input electrically coupled to the microprocessor, a mass storage unit electrically coupled to the microprocessor, an output electrically coupled to the microprocessor and at least one memory device adapted to store computer programs for use by the microprocessor such that it is electrically coupled to the microprocessor. The memory device is similar to that of the previously discussed embodiments. In still another aspect of the present invention, a method of fabricating a reticle is disclosed, the method including the steps of producing a plurality of lead line cutouts in a reticle body; producing a plurality of fill pattern cutouts interspersed between the plurality of lead line cutouts; and forming a grid comprising an interconnected series of spaces. The structure of the reticle is similar to that of the previous reticle embodiment. These and other objects and advantages of the invention will be apparent from the following description, the accompanying drawings, and the appended claims. FIG. 1A is an elevation view of a semiconductor device without fill patterns according to the prior art; FIG. 1B is an elevation view of a semiconductor device with fill patterns according to the prior art; FIG. 2 is a top view of a fill pattern according to the prior art; FIG. 3 is a top view of an alternate fill pattern according to the prior art; FIG. 4 is a top view of still another fill pattern according to the prior art; FIG. 5A is a top view of a single fill pattern according to one embodiment of the present invention; FIG. 5B is a top view of a pair of fill patterns overlapping in one dimension according to one embodiment of the present invention; FIG.
5C is a top view of a simple repeating array of fill patterns according to the present invention; FIG. 5D is a top view of an extension of the embodiment of FIG. 5C; FIG. 6A is a top view of a fill pattern extending horizontally, vertically and in a horizontal-vertical plane, in all cases where the pitch is less than the lateral spacing of the pattern; FIG. 6B is a top view of a fill pattern extending horizontally, vertically and in a horizontal-vertical plane, in all cases where the pitch is equal to the lateral spacing of the pattern; FIG. 6C is a top view of a fill pattern extending horizontally, vertically and in a horizontal-vertical plane, in all cases where the pitch is greater than the lateral spacing of the pattern; FIG. 7A is a top view of a reticle with cutouts representative of the embodiment shown in FIG. 6A; FIG. 7B is a top view of a variation of the cutout pattern shown in FIG. 7A, highlighting a single pattern as well as horizontal, vertical and planar extensions of the pattern where the pitch is less than the lateral spacing of the pattern; FIG. 7C is a top view of a variation of the cutout pattern using different geometric shapes, as well as horizontal, vertical and planar extensions of the pattern where the pitch is less than the lateral spacing of the pattern; FIG. 7D is a top view of a variation of the pattern in FIG. 7C using different geometric shapes; FIG. 8 is an elevation view of the fill pattern according to the present invention; FIG. 9 is a top view of a motherboard including semiconductor devices according to an embodiment of the present invention; and FIG. 10 is a block diagram showing the various parts of a computer system according to an embodiment of the present invention. Referring to FIGS. 1A and 1B, the prior art semiconductor devices include a substrate 1 with an upper surface 2 onto which electrically conductive leads 5, 6 and 7 are deposited.
Typically, a low dielectric insulation layer 10 is placed over the leads and remaining exposed substrate upper surface 2. A planarization layer 20 is then deposited on top of the dielectric layer 10 to smooth out the surface undulations caused by the conductive lead lines 5, 6 and 7. Well-known approaches, such as SOG and CVD of TEOS, are used to deposit and disperse the planarization layer 20 while still in its liquid (albeit viscous) state. While the planarization layer 20 is generally effective at filling relatively tight spaces 30 between lead lines, the outward-pushing force caused by the spinning motion of the SOG process tends to leave semi-conformal troughs 35 in larger spaces, such as space 40. The addition of dummy patterns 50 (alternatively referred to as fill patterns), as specifically shown in FIG. 1B, tends to ameliorate most of the trough problem, although uneven fill pattern spacing can result in a remaining wide space 60, still leaving an uneven distribution of planarization layer 65. In some situations, the placement of dummy patterns 50 is such that they can capacitively react with conductive lead lines 5, 6 and 7 if placed too close. This can corrupt the electrical signals passing through the lead. Referring now to FIGS. 2-4, examples of prior art fill patterns are shown. In FIG. 2, dummy patterns 50 are arranged in a repeating array 70. A repeating, two-dimensional grid pattern 80 disposed within the array 70 is made up of horizontal spaces (alternatively referred to as gaps) 82, vertical spaces 84 and intersections 86 comprising vertical and horizontal space crossings. Note that an intersection requires more than a mere meeting of spaces in two different dimensions: each of the spaces must actually cross such that both extend on both sides beyond the intersection point. As such, a corner or a T-shaped junction does not qualify as an intersection in the present context.
These spaces and intersections of spaces provide pathways through which the insulative material, whether it be SOG, TEOS or a related compound, flows to form the planarization layer. It is noted that the intersections 86 of the device shown in FIG. 2 include uninterrupted linear dimensions 86A and 86B. In the present context, the term "uninterrupted linear dimension" refers to one of the space or gap dimensions that contains no breaks, discontinuities or changes in direction between adjacent intersections. Stated another way, an uninterrupted linear dimension describes structure that extends in a generally straight fashion such that it can coincide with a single coordinate in a conventional Cartesian layout 90, with no changes in direction. By way of contrast with the device shown in FIG. 2, neither of the intersections of FIGS. 3 and 4 evidences uninterrupted linear dimensions, as the vertical dimension 186B of FIG. 3, and both the horizontal and vertical dimensions 286A and 286B of FIG. 4, deviate from the required linearity between adjacent intersections. It is also noted that both the horizontal and vertical spaces 82, 84 of FIG. 2 are of linear dimensions longer than that of the longest dimension 50A of dummy pattern 50, while in FIG. 3, the horizontal space 182 is longer, although the maximum vertical space 184 is not, being approximately the same height as dummy pattern 150A is long. The present inventors have discovered that both of these fill pattern features, long linear spacings and uninterrupted linear intersections of spacings, contribute to the conformal "troughing" of the deposited planarization layer, and thus need to be eliminated or minimized. Thus, while each of the fill patterns shown in FIGS. 2-4 individually includes desirable fill features, such as straight edges around the periphery defined mostly by the alignment 51-54 of the straight edges of dummy patterns 50 such that no portions of the dummy patterns 50 project over an array periphery (FIG.
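The distinction drawn above between a true intersection (two spaces crossing, each extending on both sides) and a corner or T-junction can be made concrete on a rasterized grid. The sketch below is a simplified illustration; the 0/1 encoding, the example grid, and the function names are assumptions introduced for this example only.

```python
# Illustrative classification of space junctions in a rasterized grid.
# 0 = space (valley), 1 = topographic structure (lead line / fill pattern).

def junction_type(grid, r, c):
    """Classify the cell at (r, c) by which neighbouring cells are open space."""
    if grid[r][c] != 0:
        return "structure"

    def is_open(rr, cc):
        return 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc] == 0

    up, down = is_open(r - 1, c), is_open(r + 1, c)
    left, right = is_open(r, c - 1), is_open(r, c + 1)
    horiz, vert = left and right, up and down
    if horiz and vert:
        return "crossing"      # true intersection: both spaces pass through
    if (horiz and (up or down)) or (vert and (left or right)):
        return "T-junction"    # one space ends here; not an intersection
    if (up or down) and (left or right):
        return "corner"        # both spaces turn here; not an intersection
    return "segment"           # an ordinary stretch of space

grid = [
    [1, 0, 1],
    [0, 0, 0],
    [1, 0, 1],
]
print(junction_type(grid, 1, 1))  # prints "crossing"
```

In such a model, the design rule of FIGS. 3 and 4 amounts to requiring that every "crossing" cell sit on at least one space that bends before reaching the next crossing.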
2), simple arrays (FIGS. 2 and 3), no long linear space dimensions (FIG. 4) and no uninterrupted linear space crossings at the intersections (FIGS. 3 and 4), none provides all of the features needed to ensure smooth planar insulative layers. Referring now to FIGS. 5A-5D, a pair of fill patterns 350, 351 have been combined to form a composite fill pattern 355. The fill (or dummy) patterns, as well as the conductive lead lines (not shown), are built up from a generally planar surface into a three-dimensional topographic structure, such that the footprint of the structures projects a two-dimensional image onto the substrate. Typically, the fill patterns are of geometrically simple designs, such as rectangles, or various shapes resembling a cross, or the letters "T" or "L". This promotes ease of integration into the interstitial areas between conductive lead lines (not shown) deposited on a semiconductor substrate, as well as lower fabrication costs due to simple cutouts on the mask or reticle. Moreover, the fill patterns are made of an electrically conductive material, such as metal. In addition, they are typically deposited on a semiconductor substrate (not shown) at the same time, and as part of the same process, as the conductive lead lines. Referring now to FIG. 5B, a small portion of a repeating array 370 of fill patterns 350, 351 is shown. The repeating nature of array 370 is such that the one or more fill pattern shapes are placed in an orderly geometric way so as to be as simple as possible through the creation of relatively uniform spacing between the fill patterns. In addition, the array 370 defines a periphery 375 such that none of the projections of the fill patterns 350, 351 extend beyond the boundary of the array 370 set up by periphery 375. This, too, facilitates low-cost fabrication, as repeating array profiles are easier to set up and produce.
Preferably, an alignment of the outer edges of the fill patterns 350 creates the straight, even boundary defined by each array 370. Disposed within array 370 is a grid 380, also known as a gridded valley, specifically shown in FIG. 5D. Unlike array 370, grid 380 need not have a straight periphery 375. Instead, the grid 380 can, and preferably does, include jagged, tortuous paths of spaces interspersed among the fill patterns 350, 351 and conductive lead lines (not shown). The spaces 385 are bounded on the sides by these upstanding topographic structures, such as the fill patterns 350, 351 and conductive lead lines (not shown), and on the bottom by the substantially planar surface of the layer below, such as the substrate 388 of the semiconductor. Preferably, spacing of the topographic structures is such that the width of the spaces 385 is uniform throughout the array 370, thus promoting ease of deposition and consistent quality of the planarization layer (such as SOG or TEOS, shown representatively as 20 in FIGS. 1A and 1B, or any related ceramic or similar insulator). The spaces 385, in conjunction with the side walls of the fill patterns and conductive lead lines, make up three-dimensional valleys as part of the grid into which the planarization layer may be deposited. These valleys circumscribe the topographic "peaks" of the fill patterns 350, 351 and conductive lead lines. The planarization layer is preferably deposited to a thickness that ensures that the top surface of the planarization layer is generally coplanar with the top surfaces of the fill patterns 350, 351 and conductive lead lines.
As an analogous way to visualize the interrelationship between the topographic fill patterns, topographic conductive lead lines, peaks, valleys, spaces, gaps, grids and arrays, it is helpful to think of the array as an overhead view of a few blocks of the downtown section of a metropolitan area, where the topographic structures (fill patterns and conductive lead lines) are three-dimensional buildings and skyscrapers, while the spaces (or valleys) are the two-dimensional criss-crossing streets that separate the buildings and skyscrapers. The grid (or gridded valley) can be thought of as portions of the array with an overhead outline traced by the various streets and their intersections. Within the grid 380, the spaces 385 and valleys 395 (discussed in more detail in conjunction with FIG. 8 below) are arranged such that the deposition of the planarization layer is not permitted to accelerate too rapidly in the in-plane directions of the substrate, thereby causing the aforementioned troughing of the top surface. To accomplish this, the longest linear dimension that the spaces and valleys are permitted to assume is that of the longest dimension of the longest fill pattern. In other words, the longest continuous linear extension of a space or valley in either the x or y direction is limited to the longest x or y direction projection of the longer of the fill patterns 350, 351. The tortuous paths taken by the planarization layer militate against its rapid acceleration during deposition, a phenomenon especially prevalent with SOG techniques. In a similar fashion (and with a similar purpose), the places defining intersections between the numerous spaces (or valleys) have offset features built in.
Thus, rather than having a straight-through extension of one of the crossing spaces as it passes through the intersection, the interspersed fill patterns 350, 351 are staggered, thus forcing interruptions, breaks and discontinuities in the otherwise linear extensions of the spaces. The substrate itself defines two generally orthogonal in-plane dimensions (x, y) that coincide with the Cartesian coordinate system 390. Accordingly, any projection in an in-plane direction is one that extends only within that plane. One way to define the spacing relationship between the fill patterns is by the pitch P of the fill pattern. Pitch P (as shown in FIG. 5D) is typically the distance between like fill pattern points in an array of fill patterns. Referring now to FIGS. 6A-6C in conjunction with FIGS. 5A and 5D, when the ratio of the pitch P to the correspondingly aligned linear dimension L of the fill pattern is less than one, there exists a negative spacing such that the individual fill patterns overlap by the difference in length between L and P (as shown in FIG. 6A); when the ratio equals one, as shown in FIG. 6B, the individual fill patterns are aligned such that there is neither an overlap nor a gap between adjacent fill patterns; when the ratio is greater than one, as shown in FIG. 6C, there is a gap G that forms between adjacent fill patterns 350, 351. Referring now to FIG. 5B, a portion of each of the composite fill patterns 355 is shown as overlapping one another along the horizontal (x) direction shown at coordinate system 390, while FIG. 5C shows the overlap in both the horizontal and vertical dimensions. This overlap (where P is less than L) permits the uniform lateral spacing of the composite fill patterns 355. The term "lateral" denotes dimensions generally aligned with one of the two major coordinate axes (x, y) in coordinate system 390.
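The pitch-to-length relationship described for FIGS. 6A-6C can be sketched as a short calculation. This is an illustrative example only, not part of the patent; the function name and sample values are invented for the sketch.

```python
# Classify fill-pattern spacing from the ratio of pitch P to the
# correspondingly aligned linear dimension L, per FIGS. 6A-6C.
def classify_spacing(pitch: float, length: float) -> tuple:
    """Return the spacing regime and the magnitude of overlap or gap.

    pitch  -- distance P between like points of adjacent fill patterns
    length -- correspondingly aligned linear dimension L of the pattern
    """
    ratio = pitch / length
    if ratio < 1.0:
        # FIG. 6A: negative spacing -- adjacent patterns overlap by L - P
        return ("overlap", length - pitch)
    if ratio == 1.0:
        # FIG. 6B: patterns abut with neither overlap nor gap
        return ("flush", 0.0)
    # FIG. 6C: a gap G = P - L forms between adjacent patterns
    return ("gap", pitch - length)

print(classify_spacing(2.0, 3.0))   # ('overlap', 1.0)
print(classify_spacing(3.0, 3.0))   # ('flush', 0.0)
print(classify_spacing(4.0, 3.0))   # ('gap', 1.0)
```

The first case (P less than L) is the regime the reticle of FIG. 7A is configured to produce.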
Accordingly, neither a diagonal dimension, nor a discontinuous, broken path would constitute a lateral dimension. Similarly, the terms "generally", "substantially" and related variants refer to an arrangement of elements or features that, while in theory would be expected to exhibit exact correspondence or behavior, may, in practice, embody something slightly less than exact. Accordingly, for example, when something is "substantially aligned" or "generally planar" in the present context, its qualities, while tending toward exact or absolute, need not be. By appropriate consideration of fill pattern lateral dimensions, and spacing between them, an even distribution of planarization layer (not shown) throughout the array 370 (best shown in FIG. 5D) is effected. This can also affect the grid configuration, in that the effect on the important linear and lateral dimensions, as well as intersection dimensions, needs to be considered. In contrast with each of the devices shown in FIGS. 2-4, the arrangement of the topographic fill patterns 350, 351 in FIGS. 5A-5D includes all of the aforementioned features needed to promote smooth, level planarization layers, such as: geometrically simple features that repeat in regular arrays that are simple to fabricate; no portion of the fill patterns projects over the array periphery 375; the longest linear dimension of the valleys or spaces is no longer than the longest lateral dimension of any of the larger fill patterns 350; and no intersection between any of the spaces includes an uninterrupted linear dimension. As shown in FIG. 7A, a reticle 500 with body 510 is shown. Body 510 includes a surface 520 into which an array 570 of cutouts 550, 551, 552 is disposed. These cutouts are configured such that the cutout pitch is less than the lateral spacing. A reticle with this configuration will lead to a fill pattern spacing similar to that of FIG. 6A.
Typically, the reticle 500 (or mask) is placed between a semiconductor substrate (not shown) and an electromagnetic radiation source, such as a light (not shown). The cutouts 550, 551, 552 permit light to pass through discrete locations on reticle 500, thus illuminating corresponding spots on the photoresist-coated substrate, which causes the photoresist to harden and remain in place while the unexposed photoresist is removed, typically with the help of a solvent. Reticle 500, or another with a different cutout configuration, can be used again at a later stage in the build-up of topographic structures. Representative grid 580 is part of the reticle body 510 remaining after cutouts 550, 551 and 552 have been established, and is made up of first and second sets of interconnected series of spaces 585, which extend in the x-y dimensions of the surface 520. Preferably, the spaces 585 are between 0.25 and 0.5 microns wide in a lateral direction. As previously described, the longest linear dimension of the interconnected series of spaces 585 is no longer than the longest dimension of any of the fill pattern cutouts 550, 551, 552. Referring now to FIG. 7B, a variation on the cutout pattern of FIG. 7A is shown, as well as the individual cutouts 560, 561 and 562 that make up composite cutout 555, and their horizontal, vertical and planar extensions 565, 566 and 567, respectively. Referring now to FIGS. 7C and 7D, additional variations on the reticle cutout configurations are shown, where the geometric shapes of the cutout patterns 571, 572, 573 and 574, making up the composite cutout pattern 570 (shown in FIG. 7C), and cutout patterns 581, 582, 583, 584, 586 and 587, making up the composite cutout pattern 588 (shown in FIG. 7D), include modified rectangles and related shapes. Similarly, horizontal, vertical and planar extensions 576, 577 and 578 of FIG. 7C and 596, 597 and 598 of FIG. 7D may be constructed. Referring now to FIG.
8, a view showing the even spacing of fill patterns 350 and conductive lead lines 305, 306 and 307 shows how an even planarization layer 320 is produced. The distance between adjacent fill patterns 350 and conductive lead lines 305, 306, 307, or any combination thereof, defines space 385. In addition, the space 385, the upper surface of substrate 392 and the upstanding sidewalls 350W, 305W, 306W and 307W together define valleys 395. With a substantially uniform spacing of fill patterns 350 and conductive lead lines 305, 306 and 307, the lateral dimension of space 385 should be substantially the same throughout the entire array. Referring now to FIGS. 9 and 10, a computer motherboard 600 (FIG. 9) and a block diagram of the layout of a typical computer system 700 are shown. In FIG. 9, the motherboard 600 includes various components to connect the various functions of the central processor, controls, input, output and memory, such as a generally planar board 610, mount 620 for a microprocessor, mount 630 for expansion slots, mount 640 for memory, and connectors to establish signal links with other components. FIG. 10 depicts the basic interconnections of the major elements of a computer system. The structures discussed herein are typically associated with the microprocessor 710, memory 750, and to some extent the controllers, which may include, among other things, chip sets (not shown). While the embodiments and systems discussed herein have been directed to a particular fill pattern, it is within the scope of the present invention to include similar simple, repeating arrangements to achieve the same end. Thus, having described the present invention in detail and by reference to the embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the invention in the following claims.
The present invention relates to wear leveling of a non-volatile memory by using data write counters. A memory system has a controller (e.g., a CPU, an FPGA, or a GPU) and recording segments in a non-volatile memory (e.g., a flash memory device) that are used by the controller to store data. The controller is configured to: maintain a data write counter for each of the recording segments; select a first segment of the recording segments for recording data from a host system, wherein selecting the first segment comprises scanning the data write counters to identify a first data write counter corresponding to the first segment; receive, from the host system, the data to be recorded by the non-volatile memory; and write the received data to the selected first segment.
1. A method for a controller, the method comprising: maintaining data write counters for recording sections of a non-volatile memory, wherein each of the data write counters corresponds to a respective recording section; selecting a first section of the recording sections for recording data from a host system, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receiving, from the host system, data to be recorded by the non-volatile memory; and writing the received data to the first section. 2. The method of claim 1, wherein the host system is a black box recorder for an autonomous vehicle. 3. The method of claim 2, wherein the received data is a data stream from the black box recorder. 4. The method of claim 1, further comprising dividing the non-volatile memory into sets of physical blocks to provide the recording sections, each set of physical blocks corresponding to a recording section. 5. The method of claim 4, wherein the non-volatile memory is divided to provide recording sections of equal size. 6. The method of claim 1, wherein identifying the first data write counter includes comparing values of the data write counters. 7. The method of claim 6, wherein a value of the first data write counter is lower than a value of at least one other of the data write counters. 8. The method of claim 7, wherein the value of the first data write counter is lower than the values of all other of the data write counters. 9. The method of claim 1, further comprising incrementing the first data write counter when writing the received data to the first section. 10.
The method of claim 9, wherein the first data write counter is incremented based on an amount of data written to the first section. 11. A system comprising: at least one processing device; and a memory storing instructions configured to instruct the at least one processing device to: maintain data write counters for recording sections of a non-volatile memory, wherein each of the data write counters corresponds to a respective recording section; select a first section of the recording sections for recording data from a host system, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receive, from the host system, data to be recorded by the non-volatile memory; and write the received data to the first section. 12. The system of claim 11, wherein each of the data write counters is incremented based on an amount of data written to the recording section corresponding to that data write counter. 13. The system of claim 11, wherein the host system is a black box recorder for an autonomous vehicle, and the received data is a data stream from the black box recorder. 14. The system of claim 13, wherein writing the received data to the first section includes writing the data stream to the first section in a circular mode. 15. The system of claim 13, wherein, when the black box recorder receives a command to start recording, the host system starts to send data to be recorded, wherein the command to start recording is provided by a computing device of the autonomous vehicle, the computing device controlling an autonomous navigation system. 16. The system of claim 15, wherein the instructions are further configured to instruct the at least one processing device to: in response to the black box recorder receiving a command to stop recording, select a second section of the recording sections for recording subsequent data received
from the host system, wherein the second section is selected based on comparing values of the data write counters; receive the subsequent data; and write the subsequent data to the second section. 17. A non-transitory computer-readable storage medium storing instructions that, when executed by at least one processing device, cause the at least one processing device to perform a method comprising: maintaining data write counters for recording sections of a memory, wherein each of the data write counters corresponds to a respective recording section; selecting a first section of the recording sections for recording data, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receiving data to be recorded by the memory; and writing the received data to the first section. 18. The non-transitory computer-readable storage medium of claim 17, wherein writing the received data to the first section includes writing the data to the first section in a circular mode. 19. The non-transitory computer-readable storage medium of claim 17, wherein the received data is a continuous data stream. 20. The non-transitory computer-readable storage medium of claim 17, wherein identifying the first data write counter includes comparing values of the data write counters.
Using Data Write Counters to Balance the Wear of Non-Volatile Memory

Technical Field

At least some embodiments disclosed herein relate generally to memory systems, and more specifically to wear leveling of non-volatile memory devices.

Background

Autonomous vehicles usually contain many sensors to assist in controlling the vehicle. In the case of an accident, collision, or near collision involving a vehicle, it may be beneficial to review the sensor data recorded before and/or during the accident to help determine the cause of the accident and/or whether there may have been a vehicle malfunction. In the event of a power outage during an accident, vehicle sensor data stored in volatile memory may be lost. An event data recorder (EDR) for motor vehicles, sometimes informally referred to as a car "black box", is a device installed in some vehicles to record information related to vehicle crashes or accidents. In one example, automotive original equipment manufacturers (OEMs) that manufacture autonomous vehicles are legally required to install a black box data recorder that records the last 30 seconds before an accident. The expectation is that this data can be used to reconstruct the root cause of the accident. In another example, in diesel trucks, the EDR is triggered by electronically sensed conditions in specific vehicle components (e.g., engine or brake components). Some of these conditions may occur due to accidents. Data from these devices can be collected and analyzed after a collision to help determine what the vehicle was doing before and during the collision or event. Some EDRs continuously record data, overwriting the previous few minutes, until recording stops due to an accident (for example, due to a power failure). Other EDRs are activated by collision-like events (for example, a sudden change in speed) and can continue to record until the accident ends.
An EDR can record a wide range of data, such as whether the brakes were applied and the speed at the time of impact. Existing EDRs store the information internally in an EEPROM until it is retrieved from the EDR module.

Summary of the Invention

One aspect of the present disclosure provides a method for a controller, the method comprising: maintaining data write counters for recording sections of a non-volatile memory, wherein each of the data write counters corresponds to a respective recording section; selecting a first section of the recording sections for recording data from a host system, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receiving, from the host system, data to be recorded by the non-volatile memory; and writing the received data to the first section. Another aspect of the present disclosure provides a system that includes: at least one processing device; and a memory storing instructions configured to instruct the at least one processing device to: maintain data write counters for recording sections of a non-volatile memory, wherein each of the data write counters corresponds to a respective recording section; select a first section of the recording sections for recording data from a host system, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receive, from the host system, data to be recorded by the non-volatile memory; and write the received data to the first section. Another aspect of the present disclosure provides a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processing device, cause the at least one processing device to perform a method comprising: maintaining data write counters for recording sections of a memory, wherein
each of the data write counters corresponds to a respective recording section; selecting a first section of the recording sections for recording data, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receiving data to be recorded by the memory; and writing the received data to the first section.

Brief Description of the Drawings

The present disclosure will be more fully understood from the detailed description provided below and the accompanying drawings of various embodiments of the present disclosure. FIG. 1 illustrates an example computing system with a write counter component according to some embodiments of the present disclosure. FIG. 2 illustrates an example memory system that selects a recording section to receive new data for recording based on the values of data write counters, according to some embodiments of the present disclosure. FIG. 3 illustrates an example autonomous vehicle with non-volatile memory that uses a circular mode to store data from a data stream into a selected recording section, according to some embodiments of the present disclosure. FIG. 4 is a flowchart of an example method for selecting a recording section based on a scan of data write counters, according to some embodiments of the present disclosure. FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure can operate.

Detailed Description

At least some aspects of the present disclosure are directed to wear leveling in non-volatile memory devices. In one example, the memory device is a flash memory. In one example, the memory device is a black box recorder used in an autonomous vehicle. For example, the autonomous vehicle is a car, truck, boat, airplane, helicopter, or unmanned aerial vehicle (for example, a drone). Previous controllers for flash memory devices (e.g., using NAND flash memory) used conversion tables to perform wear leveling.
For example, wear leveling is performed to distribute the number of program/erase cycles more evenly across the memory storage medium and thereby extend the service life of the memory device. A conversion table between logical blocks (for example, logical blocks visible to an application program) and physical blocks of a memory device is often used, either dynamically or statically. The conversion table is managed by the controller. Wear leveling is used because the life of each flash memory cell decreases every time data is written to (used to program) the cell. When the number of programming cycles reaches a certain limit, which varies for each type of cell (for example, 3,000 programming cycles), the cell has reached the end of its useful life. Previous wear leveling methods tried to distribute data evenly over physical locations (for example, flash memory cells) so that the cells wear evenly. This is done to prevent a small number of cells from failing prematurely before the others and thus ending the service life of the memory storage device. For example, a previous flash controller writes to a different physical location of the memory for each write operation in order to distribute data evenly. In addition, previous flash controllers are designed to handle random data. It is recognized that the use of a conversion table, as in the previous methods, can present several technical problems. For example, previous flash controllers used conversion tables to convert between the logical blocks of an application program (e.g., executed on a host system) and the physical blocks of a flash memory device. The controller must spend significant processing effort and time maintaining the conversion table in real time. If the memory device must handle an uninterrupted data stream, the processing overhead of the conversion table can sometimes reduce the speed at which new data can be recorded.
For example, when using a conversion table, the performance of a black box recorder that records a continuous data stream will be reduced. In one example, when a previous flash controller tried to handle a stream of continuous data over a long period, the wear leveling background operations placed a processing burden on the controller. For example, the memory device has a buffer for receiving new data. In some cases, when the controller becomes overloaded, at least in part due to wear leveling processing, the controller will not be able to handle new incoming data. In this case, the controller sends a signal to the host device to stop sending new data (for example, because the controller is busy with maintenance processing and the buffer is full). In one instance, the controller may become busy copying blocks from one physical location of the memory device to another. This greatly reduces the performance of continuous write operations (for example, when processing data from an S-Video recorder over a long period). At least some aspects of the present disclosure address the above and other deficiencies by improving the wear leveling mechanism. Specifically, for memory storage devices that record sequentially written data streams, it has been realized that the wear leveling mechanism does not require a conversion table. This significantly reduces the processing overhead required by the controller during operation. Instead of using a conversion table, various embodiments of the present disclosure use data write counters to select a recording section to record new data. A data write counter is used to track the amount of data written to each recording section of the memory device. A recording section to which a small amount of data has previously been written is selected for writing new data. In one example, the data stream from the black box recorder is written to the selected recording section.
Data is written to the recording section in a circular mode. In one embodiment, instead of a flash memory controller that uses a logical-to-physical conversion table, the controller divides a number of physical flash blocks into recording sections (as described above). For example, if divided equally, the recording section size in units of blocks is the total number of blocks of the memory device divided by the number of recording sections used. In other examples, the recording sections may have different sizes. For example, different sizes can be used for different types of flash memory cells (for example, when the memory device uses several types of cells to record data). In one example, a block may have a size of hundreds to thousands of bits. In another example, the size of a block may be 2K to 16K bytes or more. The controller maintains a data write counter for each recording section. When data is written to a particular recording section, its data write counter is incremented so that the program/erase cycles endured by the section can be tracked. In one example, the memory device is part of a black box recorder. When the black box recorder receives a recording command (for example, when the autonomous mode of the vehicle is activated or the vehicle is turned on), the data stream to the flash memory device is written to the currently active recording section in a circular mode. The data is written directly to the physical blocks in the active recording section. When the recording section is full, the previously recorded data is replaced with new data from the incoming data stream. The data write counter for the active recording section is incremented accordingly, so that the counter indicates the amount of data that has been written to the section so far (for example, a cumulative count of the data written to the section over its lifetime).
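The sectioning, circular-mode writing, and per-section counting described above can be sketched as follows. This is a simplified illustration of the scheme, not the patent's implementation; all names (RecordingMemory, write_stream, and so on) are invented for the example, and each list element stands in for one physical block.

```python
# Sketch: divide physical blocks into equal recording sections, keep one
# cumulative data write counter per section, and write the active section
# in a circular mode (oldest blocks are overwritten when the section fills).
class RecordingMemory:
    def __init__(self, total_blocks: int, num_sections: int):
        # Equal division: section size in blocks = total blocks / section count
        self.section_size = total_blocks // num_sections
        self.sections = [[None] * self.section_size for _ in range(num_sections)]
        # One lifetime data write counter per recording section
        self.write_counters = [0] * num_sections
        self.active = 0   # index of the currently active recording section
        self.cursor = 0   # next block position to write within the active section

    def write_stream(self, blocks):
        """Write incoming blocks to the active section in circular mode."""
        section = self.sections[self.active]
        for block in blocks:
            section[self.cursor] = block               # overwrites oldest data once full
            self.cursor = (self.cursor + 1) % self.section_size
            self.write_counters[self.active] += 1      # track cumulative writes

mem = RecordingMemory(total_blocks=8, num_sections=2)
mem.write_stream(["a", "b", "c", "d", "e"])   # 5 writes into a 4-block section
print(mem.sections[0])                        # ['e', 'b', 'c', 'd'] -- 'a' overwritten
print(mem.write_counters)                     # [5, 0]
```

Here the counter is incremented once per block written, matching the notion of incrementing based on the amount of data written to the section.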
In one example, the circular mode is similar to a circular buffer for writing data. In various embodiments, a new active recording section can be selected based on the occurrence of various events. For example, a recording stop command may be received by the controller of the flash memory device. The stop command may be triggered, for example, by an accident involving the vehicle. In another example, the stop command may be triggered when the vehicle is turned off (e.g., by the user) or when the vehicle exits the autonomous navigation mode. When one or more of these events occur, a new active recording section is selected. In one embodiment, the new active recording section is selected based on the controller's evaluation of the data write counters. For example, the controller may scan the data write counter of each of the recording sections in the memory device. The controller can thereby determine which recording section has had the smallest amount of data programmed to it. The recording section with the lowest amount of programmed data is selected as the next active recording section to store data subsequently received by the memory device. In another example, if the data write counters of several recording sections share the same smallest value, the controller may select the active section randomly from among those sections. Other selection criteria may also be used (e.g., based on the type of storage cell, the type of previously stored data, and/or the type of subsequent data to be stored). Therefore, the various embodiments of the present disclosure as described above provide a write counter component that uses data write counters to select a recording section, and this provides several advantages. For example, using a write counter component can allow a lower-cost and simpler controller.
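The counter scan described above can be sketched as a small selection routine: find the lowest counter value, collect all sections that share it, and break ties randomly. This is an illustrative sketch only; the function name and sample counter values are invented for the example.

```python
# Sketch: select the next active recording section by scanning the data
# write counters for the least-written section, with random tie-breaking.
import random

def select_next_section(write_counters, rng=random):
    """Return the index of the least-written recording section."""
    lowest = min(write_counters)
    # Every section whose counter equals the minimum is a candidate
    candidates = [i for i, count in enumerate(write_counters) if count == lowest]
    # A random choice among ties spreads wear over equally worn sections
    return rng.choice(candidates)

counters = [120, 45, 45, 300]
nxt = select_next_section(counters)
print(nxt)   # 1 or 2 -- both share the lowest counter value
```

Note that no logical-to-physical conversion table appears anywhere in the selection: only the counters are consulted, which is the source of the reduced processing overhead claimed above.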
The power consumption of the controller is reduced, and the wear leveling mechanism is easier to implement. In addition, since the controller no longer needs to maintain a logical-to-physical conversion table, the memory device exhibits improved continuous write performance. More generally, erasable computer storage media, such as rewritable optical discs, recordable DVDs, DVD-RAM, electrically erasable programmable read-only memory (EEPROM), and/or flash memory, have useful service lives that are limited by the number of cycles of programming and erasing stored data. In other embodiments, the write counter component can be used for these types of memory devices. The program/erase (P/E) budget represents a predetermined number of program and erase cycles that can be reliably performed to record data in an erasable medium. After the predetermined number of erase cycles, the P/E budget of such erasable media is used up; as a result, the media may become statistically unreliable and is therefore deemed to be at the end of its useful life. For example, flash memory devices usually have many blocks of memory cells. Each of the memory blocks can be individually programmed and erased. The degree of wear of each memory block is proportional to the number of erase operations performed on the memory block. By using the data write counters, wear leveling can be performed in the flash memory so that erase operations are distributed among the memory blocks in the memory device. US Patent No. 6,850,443 discloses some wear leveling techniques in mass storage systems, and the entire disclosure of that patent is hereby incorporated by reference. Different types of NAND flash memory have been developed.
For example, single-level cell (SLC) flash memory has a cell structure that stores a single bit in a reprogrammable cell; multi-level cell (MLC) flash memory has a cell structure that stores multiple bits of data (for example, two bits) in a reprogrammable cell; triple-level cell (TLC) flash memory has a cell structure that stores three bits of data in a programmable cell; and quad-level cell (QLC) flash memory has a cell structure that stores four bits of data in a programmable cell.

Different types of flash memory have different characteristics in terms of performance, production cost, reliability, and durability. For example, the P/E budget of SLC flash memory is between 90,000 and 100,000 cycles; the P/E budget of MLC flash memory ranges from 10,000 to 30,000 cycles; and the P/E budget of TLC flash memory is between 3,000 and 5,000 cycles.

Examples of other data that can be stored in a recording section according to the present disclosure include data associated with operating systems, software, software stacks, program variables, and the like. Some of this data (such as program variables) is generated at runtime by one or more software processes executing on one or more processing devices. Examples of other data that can be stored include graphics and video buffers, camera input buffers, and artificial intelligence and deep learning temporary calculations. Such data is usually generated by one or more software processes at runtime during normal computer operation.

The write counter component of the present disclosure can be implemented in various computing systems. In an example system, a processing device of the host system (e.g., a system on chip (SOC), FPGA, CPU, or GPU) stores runtime data in non-volatile memory (e.g., cross-point memory such as 3DXP memory, or an SSD).

Figure 1 illustrates an example computing system with a write counter component 107 according to some embodiments of the present disclosure.
The host system 101 communicates with the memory system 105 via the bus 103. The processing device 111 of the memory system 105 has read/write access to the storage areas 111, 113, ..., 119 of the non-volatile memory 121. In one example, the host system 101 also reads data from, and writes data to, the volatile memory 123.

In one example, the processing device 111 and the storage areas 111, 113, ..., 119 are on the same chip or die. In some embodiments, the storage areas store data used by the host system 101 and/or the processing device 111 during machine learning processing, or other runtime data generated by a software process executed on the host system 101 or the processing device 111.

The computing system includes a write counter component 107 in the memory system 105, and the write counter component selects a storage area 111 (for example, a recording section of a flash memory) for recording new data from the host system 101. The storage area 111 is selected by scanning the data write counters, as described herein. The computing system 100 may further include a write counter component 107 in the host system 120 that coordinates with the write counter component 107 in the memory system 105 to at least facilitate scanning of the write counters and/or selection of the storage area 111.

In one example, the volatile memory 123 is used as the system memory of a processing device (not shown) of the host system 101. In one embodiment, a process of the host system 101 selects the storage area by evaluating values from the data write counters. In one example, the data write counters may be stored in memory of the memory system 105 and/or the host system 101. In one example, the host system 101 may select the storage area based in part on data from sensors and/or software processes executing on an autonomous vehicle.
In one example, the aforementioned data is provided by the host system 101 to the processing device 111, which selects the storage area.

In some embodiments, the host system 101 or the processing device 111 includes at least a part of the write counter component 107. In other embodiments, or in combination, the processing device 111 and/or a processing device in the host system 101 includes at least a part of the write counter component 107. For example, the processing device 111 and/or the processing device of the host system 101 may include a logic circuit that implements the write counter component 107. For example, a controller or processing device (e.g., CPU, FPGA, or GPU) of the host system 101 may be configured to execute instructions stored in memory to perform the operations of the write counter component 107 described herein.

In some embodiments, the write counter component 107 is implemented in an integrated circuit chip provided in the memory system 105. In other embodiments, the write counter component 107 in the host system 120 is a part of the operating system of the host system 120, a device driver, or an application.

An example of the memory system 105 is a memory module connected to a central processing unit (CPU) via a memory bus. Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), non-volatile dual in-line memory modules (NVDIMMs), and the like. In some embodiments, the memory system may be a hybrid memory/storage system that provides both memory functions and storage functions. Generally, a host system can utilize a memory system that includes one or more storage areas. The host system can provide data to be stored in the memory system, and can request data to be retrieved from the memory system.
In one example, the host can access various types of memory, including volatile and non-volatile memory.

The host system 101 may be a computing device, such as a controller in a vehicle, a web server, a mobile device, a cellular phone, an embedded system (for example, an embedded system with a system on a chip (SOC) and internal or external memory), or any computing device that includes a memory and a processing device. The host system 101 may include or be coupled to the memory system 105 such that the host system 101 can read data from, or write data to, the memory system 105. The host system 101 may be coupled to the memory system 105 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communication connection or a direct communication connection (for example, without intervening components), whether wired or wireless, including, for example, electrical, optical, or magnetic connections. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, serial attached SCSI (SAS), a double data rate (DDR) memory bus, and so forth. The physical host interface can be used to transfer data between the host system 101 and the memory system 105. The physical host interface may provide an interface for transferring control, address, data, and other signals between the memory system 105 and the host system 101.

FIG. 1 illustrates the memory system 105 as an example. Generally, the host system 101 can access multiple memory systems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.

The host system 101 may include a processing device and a controller.
The processing device of the host system 101 may be, for example, a microprocessor, a central processing unit (CPU), a processing core of a processor, an execution unit, and the like. In some cases, the controller of the host system may be referred to as a memory controller, a memory management unit, and/or an initiator. In one example, the controller controls communication between the host system 101 and the memory system 105 via the bus 103.

The controller of the host system 101 may communicate with the controller of the memory system 105 to perform operations such as reading data, writing data, or erasing data in the storage areas of the non-volatile memory 121. In some cases, the controller is integrated in the same package as the processing device 111. In other cases, the controller and the processing device 111 are packaged separately. The controller and/or the processing device may include hardware, such as one or more integrated circuits and/or discrete components, buffer memory, cache memory, or a combination thereof. The controller and/or the processing device may be a microcontroller, dedicated logic circuitry (for example, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

In one embodiment, the storage areas 111, 113, ..., 119 may include any combination of different types of non-volatile storage components. In addition, the memory cells of the storage areas may be grouped into memory pages or data blocks, which refer to units of the memory cells used to store data. In some embodiments, the volatile memory 123 may be, but is not limited to, random access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM).

In one embodiment, one or more controllers of the memory system 105 may communicate with the storage areas 111, 113, ..., 119 to perform operations such as reading data, writing data, or erasing data.
Each controller may include hardware, such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. Each controller can be a microcontroller, dedicated logic circuitry (for example, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. A controller may include a processing device (processor) configured to execute instructions stored in a local memory. In one example, the local memory of the controller includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control the operation of the memory system 105, including handling communication between the memory system 105 and the host system 101. In some embodiments, the local memory may include memory registers that store memory pointers, fetched data, etc. The local memory may also include read-only memory (ROM) for storing microcode.

Generally, the controller of the memory system 105 can receive commands or operations from the host system 101 and/or the processing device 111, and can convert the commands or operations into instructions or appropriate commands to implement storage area selection based on the data write counters. The controller can also be responsible for other operations, such as wear leveling, garbage collection operations, error detection and error correction code (ECC) operations, encryption operations, caching operations, and address translation between logical block addresses and physical block addresses associated with the storage areas. The controller may further include host interface circuitry to communicate with the host system 101 via the physical host interface.
The host interface circuitry can convert commands received from the host system into command instructions to access one or more storage areas, and convert responses associated with the storage areas into information for the host system 101.

The memory system 105 may also include additional circuits or components that are not illustrated. In some embodiments, the memory system 105 may include a cache or buffer (e.g., DRAM or SRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from one or more controllers and decode the address to access the storage areas.

In some embodiments, the controller and/or the processing device 111 in the host system 101 or the memory system 105 includes at least a part of the write counter component 107. For example, the controller and/or the processing device 111 may include a logic circuit that implements the write counter component 107. For example, a processing device (processor) may be configured to execute instructions stored in memory to perform the operations of the write counter component 107 that provide read/write access to the storage areas, as described herein. In some embodiments, the write counter component 107 is part of an operating system, a device driver, or an application program.

FIG. 2 illustrates an example non-volatile memory 254 according to some embodiments of the present disclosure, which selects a recording section to receive new data for recording (e.g., a continuous data stream) based on the values of data write counters 1-n. The non-volatile memory 254 is an example of the memory system 105. In one example, the non-volatile memory 254 is a flash memory device and/or a solid state drive (SSD).

The controller 252 maintains data write counters 1-n for recording sections 1-n. Each data write counter corresponds to one of the recording sections.
The controller 252 is an example of the processing device 111.

The controller 252 scans the data write counters to select one of the recording sections for receiving a data stream from the host system 250. The data write counters are scanned by determining their values and comparing the values, so as to identify a data write counter having a lower value. In one example, the identified data write counter (e.g., data write counter 4) has the minimum value compared to all other data write counters. In another example, the identified write counter has a lower value than at least one other data write counter.

The recording section selected to serve for recording new data is the recording section corresponding to the identified data write counter. When the non-volatile memory 254 initiates recording of data from the host system 250, the data will be stored in the selected recording section.

In one embodiment, the controller 252 divides the physical storage space of the non-volatile memory 254 into recording sections of equal size. In other embodiments, the recording sections may have different sizes. For example, the size of a recording section may correspond to the type of data being recorded, and/or the size may be determined based on context data received from the host system 250. In one example, the context data is based on sensor data received from sensors of an autonomous vehicle.

When data is received from the host system 250, the controller 252 writes the received data to the active recording section (for example, recording section 4). As the received data is written into the active recording section, the data write counter corresponding to the active recording section is incremented. In one example, the data write counter is incremented based on the amount of data written to the active recording section.
In one example, the value of the data write counter is incremented based on the number of bits or bytes of data written.

In one embodiment, the host system 250 may collect data from sensors of an embedded system. For example, the sensors may be located on an autonomous vehicle and collect image data for vehicle navigation. In one embodiment, sensor data is input to a neural network, and the output is used to control the vehicle. In one embodiment, processing by the neural network is used to provide result data to the controller 252 for selecting the recording section.

In one embodiment, the controller 252 is used to train or operate a neural network. During training or other operations of the neural network, data is read from or written to volatile memory (e.g., volatile memory 123).

In one embodiment, the controller 252 may include hardware, such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The controller may be a microcontroller, dedicated logic circuitry (for example, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The controller may include one or more processors (processing devices) configured to execute instructions stored in local memory.

The local memory of the controller may include embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control aspects of the operation of the memory system. The local memory of the controller may include read-only memory (ROM) for storing microcode and/or memory registers for storing, for example, memory pointers, fetched data, etc.

FIG. 3 illustrates an example autonomous vehicle 306 according to some embodiments of the present disclosure, which has a non-volatile memory 302 that uses a circular mode to store data from a data stream into a selected recording section 320.
The data stream is received from the black box recorder 304. The black box recorder 304 selects data from the sensors 308 of the autonomous vehicle 306 and/or data provided by the processing device 310 to be included in the data stream sent to the non-volatile memory 302.

In one embodiment, the data stream includes runtime data generated by one or more software processes of the processing device 310. In one example, a software process collects sensor data from the sensors 318. In one example, a software process controls the navigation system 316 and/or provides data from sensors associated with the navigation system 316.

The non-volatile memory 302 is an example of the memory system 105. The recording section 320 is an example of recording section 4 of FIG. 2 or of the storage area 111. The black box recorder 304 is an example of the host system 250.

The controller 312 controls writing of data to the recording section 320. The controller 312 has made the recording section 320 the active recording section based on scanning the data write counters 314.

The controller 312 divides the non-volatile memory 302 into sets of physical blocks to provide recording sections (for example, recording section 320). For example, the recording section 320 includes physical blocks 1-n.

In one example, two or more types of flash memory can generally be used to implement the physical blocks, such as SLC, MLC, TLC, and/or QLC flash memory. SLC flash memory is reliable and has a large P/E budget, but it is expensive (for example, calculated per bit when manufactured on an integrated circuit die of a given size); MLC flash memory has a medium P/E budget and a lower price (for example, calculated per bit when manufactured on an integrated circuit die of a given size); and TLC and QLC flash memory are inexpensive (for example, calculated per bit when manufactured on an integrated circuit die of a given size), but their P/E budgets are smaller.
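Using the cycle figures quoted earlier, an end-of-life check for a block could be sketched as follows (illustrative Python; the lower-bound budget values follow the ranges given above, and the function name is hypothetical):

```python
# Conservative (lower-bound) P/E budgets, in cycles, from the ranges
# quoted earlier in this disclosure.
PE_BUDGET = {"SLC": 90_000, "MLC": 10_000, "TLC": 3_000}

def is_worn_out(cell_type, erase_count):
    """Treat a block as having reached the end of its useful life once
    its erase count reaches the P/E budget for its cell type."""
    return erase_count >= PE_BUDGET[cell_type]
```

Such a check is one way wear information derived from the data write counters could feed into wear leveling decisions.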
In a statistical sense, using customized ratios of the different types of physical blocks allows a tailored trade-off between cost and benefit. In one example, every recording section uses the same type of flash memory cell. In another example, each recording section (or each of several groups of sections) may use a different type of flash memory cell.

Data is recorded to the recording section 320 in a circular mode. For example, when the recording section 320 becomes full, the oldest recorded data is replaced by new data received from the black box recorder 304.

In one embodiment, when the black box recorder 304 receives a command to start recording, the black box recorder 304 starts to send data to the non-volatile memory 302. In one example, the command to start recording is provided by the processing device 310. In one example, the processing device 310 controls the navigation system 316. In one example, the command to start recording is provided when the autonomous vehicle 306 enters an autonomous navigation mode.

In one embodiment, the controller 312 selects a new recording section for recording subsequent data. For example, in response to the black box recorder 304 receiving a command to stop recording, the controller 312 selects a new recording section. As described herein, the new recording section is selected based on scanning the data write counters 314. When the black box recorder 304 resumes sending data, the controller 312 writes the data to the newly selected recording section.

FIG. 4 is a flowchart of an example method of selecting a recording section based on scanning of data write counters, according to some embodiments of the present disclosure. For example, the method of FIG. 4 can be implemented in the systems of FIGS. 1 to 3. The method of FIG.
4 may be executed by processing logic, which may include hardware (for example, a processing device, circuitry, dedicated logic, programmable logic, microcode, device hardware, integrated circuits, etc.), software (for example, instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method of FIG. 4 may be performed at least in part by the write counter component 107 of FIG. 1.

Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, it should be understood that the illustrated embodiments are only examples, the illustrated processes can be performed in a different order, and some processes can be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are also possible.

At block 401, data write counters for the recording sections of a non-volatile memory are maintained. For example, the controller 312 maintains the data write counters 314 of the non-volatile memory 302.

At block 403, one of the recording sections is selected for recording data from the host system. Selecting the section includes scanning the data write counters to identify one of the data write counters (for example, one having a minimum or lower value for the amount of data written). For example, the data write counters 314 are scanned by the controller 312 to select the data write counter corresponding to the recording section 320.

At block 405, data to be recorded by the non-volatile memory is received from the host system. For example, the host system 250 provides a data stream to the non-volatile memory 254.

At block 407, the data received from the host system is written to the selected section.
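The circular ("loop") writing mode and the counter increment described above might be sketched as follows (illustrative Python; the class and its fields are hypothetical, not from the disclosure):

```python
class RecordingSection:
    """Fixed-size recording section written in circular mode: once the
    section is full, the oldest bytes are overwritten by new data, and
    the section's data write counter is incremented by the amount of
    data written."""

    def __init__(self, size):
        self.buffer = bytearray(size)
        self.write_pos = 0
        self.write_counter = 0  # amount of data written, in bytes

    def write(self, data):
        for byte in data:
            self.buffer[self.write_pos] = byte
            # wrap around to the start of the section when it fills up
            self.write_pos = (self.write_pos + 1) % len(self.buffer)
        self.write_counter += len(data)
```

Writing six bytes into a four-byte section overwrites the two oldest bytes, while the counter still records all six bytes written.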
For example, the continuous data stream from the black box recorder 304 is written to the recording section 320 in a circular mode.

In one aspect, the present disclosure includes a computing device that performs any of the methods, and a non-transitory computer-readable storage medium storing instructions that, when executed by a processing device, cause the processing device to perform any of the methods.

In one embodiment, a method for a controller includes: maintaining data write counters (e.g., 314) for recording sections of a non-volatile memory (e.g., 302), wherein each of the data write counters corresponds to a respective recording section; selecting a first section (for example, 320) of the recording sections for recording data from a host system (for example, 250), wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receiving, from the host system, data to be recorded by the non-volatile memory; and writing the received data to the first section.

In one example, the host system is a black box recorder (e.g., 304) for an autonomous vehicle.

In one embodiment, the received data is a data stream from the black box recorder.

In one embodiment, the method further includes dividing the non-volatile memory into sets of physical blocks (for example, physical blocks 1-n of FIG.
3) to provide the recording sections, each set of physical blocks corresponding to a recording section.

In one embodiment, the non-volatile memory is divided to provide recording sections of equal size.

In one embodiment, identifying the first data write counter includes comparing the values of the data write counters.

In one embodiment, the value of the first data write counter is lower than the value of at least one other data write counter of the data write counters.

In one embodiment, the value of the first data write counter is lower than the values of all other data write counters of the data write counters.

In one embodiment, the method further includes incrementing the first data write counter when writing the received data to the first section.

In one embodiment, the first data write counter is incremented based on the amount of data written to the first section.

In one embodiment, a system includes: at least one processing device (for example, the controller 312 or the processing device 111); and memory storing instructions configured to instruct the at least one processing device to: maintain data write counters for recording sections of a non-volatile memory, wherein each of the data write counters corresponds to a respective recording section; select a first section of the recording sections for recording data from a host system, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding to the first section; receive, from the host system, data to be recorded by the non-volatile memory; and write the received data to the first section.

In one embodiment, each of the data write counters is incremented based on the amount of data written to the recording section corresponding to that data write counter.

In one embodiment, the host system is a black box recorder for an autonomous vehicle, and the received data is a data stream from the black box recorder.
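The overall flow claimed here (maintain per-section counters, select the section with the lowest counter, write and increment, then reselect when recording stops) can be sketched end to end (illustrative Python; this sketch breaks ties by taking the first lowest counter, whereas the disclosure also mentions random tie-breaking, and all names are hypothetical):

```python
class WriteCounterController:
    """Sketch of the described controller behavior for a memory with
    several recording sections and one data write counter per section."""

    def __init__(self, num_sections):
        self.counters = [0] * num_sections
        self.sections = [bytearray() for _ in range(num_sections)]
        self.active = self._select()

    def _select(self):
        # Scan the counters; pick the section with the least data written.
        return min(range(len(self.counters)), key=self.counters.__getitem__)

    def write(self, data):
        # Write incoming data to the active section; increment its counter
        # by the amount of data written.
        self.sections[self.active].extend(data)
        self.counters[self.active] += len(data)

    def stop_recording(self):
        # E.g., a collision, the vehicle being turned off, or exiting
        # autonomous navigation mode triggers selection of a new section.
        self.active = self._select()
```

After recording ten bytes and receiving a stop command, the controller moves to the least-written section, so subsequent data lands in a different section and wear is spread across the device.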
For example, car manufacturers often want to always record raw vehicle sensor data for autonomous vehicles. However, extended recording of raw data may be considered too expensive. The black box recorder records vehicle sensor data generated immediately before an event, and possibly during the event (for example, a collision or near-collision involving the corresponding vehicle or nearby vehicles), and/or can record vehicle sensor data in the event of a power failure.

In one embodiment, writing the received data to the first section includes writing the data stream to the first section in a circular mode.

In one embodiment, when the black box recorder receives a command to start recording, the host system starts to send the data to be recorded; the command to start recording is provided by a computing device of the autonomous vehicle, and the computing device controls the autonomous navigation system.

In one embodiment, the instructions are further configured to instruct the at least one processing device to: in response to the black box recorder receiving a command to stop recording, select a second section of the recording sections for recording subsequent data received from the host system, wherein the second section is selected based on comparing the values of the data write counters; receive the subsequent data; and write the subsequent data to the second section.

In one embodiment, a non-transitory computer-readable storage medium stores instructions that, when executed by at least one processing device, cause the at least one processing device to perform a method comprising: maintaining data write counters for recording sections of a memory, wherein each of the data write counters corresponds to a respective recording section; selecting a first section of the recording sections for recording data, wherein selecting the first section includes scanning the data write counters to identify a first data write counter corresponding
to the first section; receiving data to be recorded by the memory; and writing the received data to the first section.

In one embodiment, writing the received data to the first section includes writing the data to the first section in a circular mode.

In one embodiment, the received data is a continuous data stream.

In one embodiment, identifying the first data write counter includes comparing the values of the data write counters.

Figure 5 is a block diagram of an example computer system 200 within which embodiments of the present disclosure can operate. In one embodiment, a set of instructions for causing a machine to perform any one or more of the methods discussed herein can be executed in the computer system 200. In some embodiments, the computer system 200 may correspond to a memory system, or to a host system that includes, is coupled to, or utilizes a memory system (for example, the memory system 105 of FIG. 1), or may be used to perform the operations of the write counter component 107 (for example, to execute instructions to perform operations corresponding to the write counter component 107 described with reference to FIGS. 1 to 4). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a network appliance, a server, a network router, a switch or a bridge, or any machine capable of executing a set of instructions (sequentially or otherwise) that specify actions to be taken by that machine.
In addition, although only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.

The example computer system 200 includes a processing device 202, a main memory 204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), etc.), and a data storage system 218, which communicate with each other via a bus 230 (which may include multiple buses).

The processing device 202 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, and so on. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 202 may also be one or more dedicated processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, and so on. The processing device 202 is configured to execute instructions 226 for performing the operations and steps discussed herein. The computer system 200 may further include a network interface device 208 to communicate over a network 220.

The data storage system 218 may include a machine-readable storage medium 224 (also referred to as a computer-readable medium), on which are stored one or more sets of instructions 226, or software, embodying any one or more of the methods or functions described herein.
The instructions 226 may also reside, completely or at least partially, within the main memory 204 and/or the processing device 202 during execution thereof by the computer system 200, the main memory 204 and the processing device 202 also constituting machine-readable storage media. The machine-readable storage medium 224, the data storage system 218, and/or the main memory 204 may correspond to the memory system 105 of FIG. 1.

In one embodiment, the instructions 226 include instructions that implement functionality corresponding to a write counter component (e.g., the write counter component 107 described with reference to FIGS. 1 to 4). Although the machine-readable storage medium 224 is shown as a single medium in the example embodiment, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium capable of storing or encoding a set of instructions for execution by a machine and causing the machine to perform any one or more of the methods of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.
Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below.
In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium such as a read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, and so forth.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
Methods and apparatus for changing the timing of memory requests in a graphics system. Reading data from memory in a graphics system causes ground bounce and other electrical noise. The resulting ground bounce may be undesirably synchronized with a video retrace signal sent to a display, and may therefore cause visible artifacts. Embodiments of the present invention shift requests made by one or more clients by a duration or durations that vary with time, thereby changing the timing of the data reads from memory. The requests may be shifted by a different duration for each memory request, for each frame, or for multiples of requests or frames. The durations may be random, pseudo-random, or determined by another algorithm, and they may advance or delay the requests. By making the ground bounce and other noise asynchronous with the video retrace signal, these artifacts are reduced or eliminated.
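The timing shift summarized above can be illustrated with a minimal sketch. Python is used here purely for illustration; the function name and parameters are hypothetical and not part of the patent.

```python
import random

def jittered_request_times(base_times, max_shift, seed=0):
    """Shift each nominal request time by a pseudo-random number of
    pixel clocks so that memory reads are no longer synchronous with
    the video retrace signal.

    base_times: nominal request times (in pixel clocks) for one frame.
    max_shift:  largest shift, in pixel clocks, applied to any request;
                shifts may advance (negative) or delay (positive) a request.
    """
    rng = random.Random(seed)  # pseudo-random source; any algorithm could be used
    return [t + rng.randint(-max_shift, max_shift) for t in base_times]

# Two consecutive frames use different seeds, so the same nominal
# request lands at a different absolute time in each frame.
frame0 = jittered_request_times([100, 200, 300], max_shift=8, seed=0)
frame1 = jittered_request_times([100, 200, 300], max_shift=8, seed=1)
```

Because the shifts differ from request to request and from frame to frame, the ground bounce caused by each read no longer repeats at the same pixel position every refresh.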
What is claimed is:

1. A video graphics system comprising:
a graphics memory;
a memory interface coupled to the graphics memory; and
a scanout engine coupled to the memory interface and including a FIFO, wherein the FIFO requests data when a low water mark is reached,
wherein the low water mark has a first value when a first request is made by the FIFO, and the low water mark has a second value when a second request is made by the FIFO, the first value different from the second value, and
wherein the value of the low water mark changes at least once each frame.

2. The video graphics system of claim 1 wherein each time the FIFO makes a request, the low water mark is changed in value.

3. The video graphics system of claim 1 wherein the first value and the second value are pseudo-randomly generated.

4. The video graphics system of claim 1 wherein the first value and the second value are generated by a random number generator.

5. The video graphics system of claim 1 wherein the low water mark is changed in value for each frame in a video stream.

6. The video graphics system of claim 1 wherein the FIFO is coupled to the memory interface.

7. A video graphics system comprising:
a graphics memory;
a memory interface coupled to the graphics memory;
a scanout engine coupled to the memory interface and including a FIFO having a request output configured to provide requests for data when a low water mark is reached; and
a delay block coupled to the request output of the FIFO,
wherein the delay block delays a request for data by a first duration before a first memory access and by a second duration before a second memory access, the first duration different from the second duration.

8. The video graphics system of claim 7 wherein the delay block delays each request for data by a duration, and the duration changes for each request for data.

9. The video graphics system of claim 7 wherein the first memory access and the second memory access are consecutive memory accesses.

10.
The video graphics system of claim 7 wherein the first duration is a first number of pixel clock cycles, the second duration is a second number of pixel clock cycles, and the first number and the second number are pseudo-randomly generated.

11. A video graphics system comprising:
a graphics memory;
a memory interface coupled to the graphics memory;
a scanout engine coupled to the memory interface and having a request output configured to provide requests for data; and
a delay block coupled to the request output of the scanout engine,
wherein the delay block delays a request for data by a first duration before a first memory access and by a second duration before a second memory access, the first duration different from the second duration.

12. The video graphics system of claim 11 wherein the delay block is further coupled to the memory interface.

13. The video graphics system of claim 11 wherein the delay block delays each request for data by a duration, and the duration changes for each request for data.

14. The video graphics system of claim 11 wherein the first memory access and the second memory access are consecutive memory accesses.

15. The video graphics system of claim 11 wherein the first duration and the second duration are determined by a random number generator.

16. A video graphics system comprising:
a graphics memory;
a memory interface coupled to the graphics memory; and
a scanout engine coupled to the memory interface,
wherein requests for data are provided by the scanout engine to the memory interface, and the memory interface delays the request before passing it to the graphics memory, and
wherein the memory interface delays a request for data by a first duration before a first memory access and by a second duration before a second memory access, the first duration different from the second duration.

17.
The video graphics system of claim 16 wherein the memory interface delays each request for data by a duration, and the duration changes for each request for data.

18. The video graphics system of claim 16 wherein the first memory access and the second memory access are consecutive memory accesses.

19. The video graphics system of claim 16 wherein the first duration and the second duration are determined by a random number generator.

20. A video graphics system comprising:
a graphics memory;
a memory interface;
a delay circuit coupled between the graphics memory and memory interface; and
a scanout engine coupled to the memory interface,
wherein requests for data are provided by the scanout engine to the memory interface, by the memory interface to the delay circuit, and by the delay circuit to the graphics memory, and
wherein the delay circuit delays a request for data by a first duration before a first memory access and by a second duration before a second memory access, the first duration different from the second duration.

21. The video graphics system of claim 20 wherein the delay circuit delays each request for data by a duration, and the duration changes for each request for data.

22. The video graphics system of claim 20 wherein the first memory access and the second memory access are consecutive memory accesses.

23. The video graphics system of claim 20 wherein the first duration and the second duration are determined by a random number generator.

24. A method of delaying a memory access in a video graphics system, the video graphics system comprising:
a graphics memory;
a memory interface coupled to the graphics memory; and
a logic circuit coupled to the memory interface,
the method comprising:
generating a first number;
generating a request for data with the logic circuit; and
delaying the request for data by a duration proportional to the first number,
wherein a new first number is generated each frame.

25.
The method of claim 24 wherein the delayed request for data is provided to the memory interface.

26. The method of claim 24 wherein the logic circuit is a scanout engine.

27. The method of claim 26 wherein each request for data is delayed by a duration proportional to a number, and the number is a pseudo-random number.
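The method of claims 24 through 27, in which a new number is generated each frame and every request in that frame is delayed by a duration proportional to it, can be sketched as follows. The class and method names are hypothetical, for illustration only.

```python
import random

class FrameDelayedRequester:
    """Sketch of the claimed method: a new number is generated each
    frame, and each request in that frame is delayed by a duration
    proportional to that number."""

    def __init__(self, scale=1, seed=0):
        self.scale = scale                     # proportionality constant
        self.rng = random.Random(seed)         # pseudo-random number source
        self.number = self.rng.randint(0, 15)  # first number for frame 0

    def start_frame(self):
        # "a new first number is generated each frame"
        self.number = self.rng.randint(0, 15)

    def delayed_time(self, request_time):
        # delay the request by a duration proportional to the number
        return request_time + self.scale * self.number

r = FrameDelayedRequester(scale=2, seed=42)
t_frame0 = r.delayed_time(1000)   # request issued during frame 0
r.start_frame()
t_frame1 = r.delayed_time(1000)   # same nominal request during frame 1
```

With `scale=2` and numbers drawn from 0 to 15, each request is delayed by between 0 and 30 pixel clocks, and the delay generally differs between frames.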
CROSS-REFERENCES TO RELATED APPLICATIONS

This application claims the benefit of U.S. provisional application 60/406,514, filed Aug. 27, 2002, titled CRTC Fetch Randomizer, by Rao et al., which is incorporated by reference.

BACKGROUND OF THE INVENTION

The present invention relates to reducing the effects of noise in a video graphics system, and more particularly to methods and apparatus for reducing the effects of noise caused by reading data from a memory in a video graphics system.

In a conventional video graphics system, data is provided by a graphics pipeline to a digital-to-analog converter (DAC), the output of which drives the input of a display monitor. Accordingly, noise at the DAC output creates video noise on the display and degrades its performance. Thus, it is desirable to reduce noise at the DAC output.

One source of noise is ground bounce caused by circuit switching and other voltage transients in the video graphics system. These transients often contain high-frequency components that may couple to the DAC output. If more circuits switch simultaneously, the resulting ground bounce is exacerbated. Of particular concern is ground bounce caused by reading data from a graphics memory, since data 64, 128, or more bits wide may be read from memory simultaneously. As memory outputs change state during a read, capacitances on the output lines are charged or discharged. This results in large, short-duration current pulses into and out of the ground supply, thereby causing the ground bounce.

If the ground bounce is random, spread in time, or has a low amplitude, the video noise generated is not necessarily apparent to an observer viewing the display. But if the ground bounce is synchronous, that is, periodic such that it occurs each time a particular pixel on the display is being updated, the resulting change in that particular pixel may become noticeable.
Moreover, if many adjacent pixels are affected, such as those forming a horizontal or vertical line, an undesirable artifact may result.

Accordingly, prior art solutions have been developed to reduce ground bounce noise. For example, analog design techniques such as filtering or ground plane separation have been used. Unfortunately, these solutions require the use of costly electrical components that consume board space and often require one or more board revisions or spins.

Thus, what is needed are low-cost, easily integrated methods and apparatus for reducing the effects of ground bounce and other electrical switching noise on a video signal.

SUMMARY

Accordingly, embodiments of the present invention provide methods and apparatus for changing the timing of memory requests in a graphics system, such that ground bounce and the resulting video noise are asynchronous with a video stream retrace signal. Embodiments of the present invention shift requests made by one or more clients by a duration or durations that vary with time. The requests may be shifted by a different duration for each memory request, for each frame, or for multiples of requests or frames. The durations may be random, pseudo-random, or determined by another algorithm, and they may advance or delay the requests. By making the ground bounce and other noise asynchronous with the video retrace signal, these artifacts are reduced or eliminated.

One exemplary embodiment of the present invention provides a method of delaying memory accesses in a video graphics system. The method includes generating a first memory access request, generating a first delay, and delaying the first memory access request by the first delay. The method further includes generating a second memory access request, generating a second delay, and delaying the second memory access request by the second delay.

Another exemplary embodiment of the present invention provides a video graphics system.
The system includes a graphics memory, a memory interface coupled to the graphics memory, and a scanout engine coupled to the memory interface. The scanout engine includes a FIFO, and the FIFO requests data when a low water mark is reached. The low water mark has a first value when a first request is made by the FIFO, and the low water mark has a second value when a second request is made by the FIFO.

A further exemplary embodiment of the present invention provides a video graphics system. This system includes a graphics memory, a memory interface coupled to the graphics memory, a scanout engine coupled to the memory interface and including a FIFO having a request output configured to provide a request for data when a low water mark is reached, and a delay block coupled to the request output of the FIFO. The delay block delays the request for data by a first duration before a first memory access and by a second duration before a second memory access.

Yet another exemplary embodiment of the present invention provides another video graphics system. This system includes a graphics memory, a memory interface coupled to the graphics memory, a scanout engine coupled to the memory interface and having a request output configured to provide requests for data, and a delay block coupled to the request output of the scanout engine. The delay block delays a request for data by a first duration before a first memory access and by a second duration before a second memory access.

Still a further exemplary embodiment of the present invention provides another video graphics system. This system includes a graphics memory, a memory interface coupled to the graphics memory, and a scanout engine coupled to the memory interface. Requests for data are provided by the scanout engine to the memory interface, and the memory interface delays the request before passing it to the graphics memory.
The memory interface delays a request for data by a first duration before a first memory access and by a second duration before a second memory access.

Yet a further exemplary embodiment of the present invention provides another video graphics system. This video graphics system includes a graphics memory, a memory interface, a delay circuit coupled between the graphics memory and the memory interface, and a scanout engine coupled to the memory interface. Requests for data are provided by the scanout engine to the memory interface, by the memory interface to the delay circuit, and by the delay circuit to the graphics memory. The delay circuit delays a request for data by a first duration before a first memory access and by a second duration before a second memory access.

Another exemplary embodiment of the present invention provides a method of delaying memory accesses in a video graphics system. The video graphics system includes a graphics memory, a memory interface coupled to the graphics memory, and a logic circuit coupled to the memory interface. The method includes generating a first number, generating a request for data with the logic circuit, and delaying the request for data by a duration proportional to the first number.

A better understanding of the nature and advantages of the present invention may be gained with reference to the following detailed description and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a portion of a graphics system that may benefit by the incorporation of embodiments of the present invention;
FIG. 2 is a conceptual representation of a FIFO that may form a portion of a client, such as a scanout engine, for buffering data from a memory interface;
FIG. 3 is also a conceptual representation of a FIFO that may be used in a client, such as a scanout engine, for buffering data from the memory interface;
FIG. 4 is a block diagram of a circuit implementation of an embodiment of the present invention that modifies the low water mark of a FIFO such that memory accesses are varied in order to disperse ground noise;
FIG. 5 is a block diagram of a circuit that may be used to change the timing of memory requests by a client;
FIG. 6 is a block diagram of another specific circuit that may be used to change the timing of memory requests by a client; and
FIG. 7 is a block diagram of another specific circuit that may be used to change the timing of memory requests by a client.

DESCRIPTION OF EXEMPLARY EMBODIMENTS

FIG. 1 is a block diagram of a portion of a graphics system that may benefit by the incorporation of embodiments of the present invention. This figure, as with all the figures, is included for exemplary purposes only, and does not limit either the possible embodiments of the present invention or the claims.

Included are a graphics memory 110, memory interface 120, and various clients including client0 130, client1 140, and clientN 150. As indicated, there may be one or more clients. The memory interface 120 writes and reads data to and from the graphics memory 110. This data may include color, depth, texture, or other graphical information. Also, the data stored in the graphics memory 110 may include program instructions and other types of data. In this specific example, the memory interface 120 sends read and write instructions on lines 112 and 114 to the graphics memory 110, which provides and receives data to and from the memory interface on lines 116. The read and write requests on lines 112 and 114 may include read and write signals, memory address locations, and other information such as instructions regarding burst or page mode reads from the graphics memory 110.

Each of these clients may be a graphics engine or other circuit. For example, these clients may include a scanout, rasterizer, shader, or other engine.
Each client makes requests to the memory interface 120 to read or write data from or to the graphics memory 110. The memory interface 120 arbitrates requests from the various clients and grants the requests at appropriate times. Specifically, client0 130 makes requests to the memory interface 120 on lines 132. Lines 132 may include a request signal, one or more signals indicating whether the request is for a read or a write, as well as the addresses of locations, either physical or virtual, in the graphics memory 110. The memory interface 120 grants requests to client0 130 on line 134, and data is transferred on lines 136. Similarly, client1 140 communicates with the memory interface 120 over request lines 142, grant lines 144, and data lines 146, while clientN 150 communicates with the memory interface 120 over request lines 152, grant lines 154, and data lines 156.

Again, ground bounce and other coupling problems are exacerbated when one client interfaces with the memory on a periodic basis, particularly when the accesses are synchronized with the scanning of the video on a display, that is, when they occur at the same time (or times) every frame refresh, or at a harmonic of the frame rate of the display. Of notable concern is when data is provided to a scanout engine each time the video trace being provided to a CRT monitor is at a particular location or pixel. The resulting synchronized ground bounce may cause visible artifacts on the display. This is particularly a problem when the other clients or engines in the graphics pipeline are not accessing the memory during frame refreshes.

One or more of the clients may store or buffer data received from the graphics memory in a FIFO. Accordingly, when a request by such a client for data is granted, the client's FIFO is at least partially filled. The client then uses or drains data from the FIFO.
When the amount of data in the FIFO reaches a threshold referred to as a low water mark, a request for more data is made to the memory interface 120. To prevent the scanout engine from accessing the graphics memory 110 on a periodic or synchronized basis, this low water mark may be changed or varied. The amount of change may be random or pseudo-random, may follow a predetermined algorithm, or may be determined in some other way. The low water mark may be changed after one or more frames, one or more memory accesses, or at other appropriate times.

This portion of a graphics system may be included on an integrated circuit manufactured by nVidia corporation, located at 2701 San Tomas Expressway, Santa Clara, Calif. 95050.

FIG. 2 is a conceptual representation of a FIFO that may form a portion of a client, such as a scanout engine, for buffering data from the memory interface 120. Included are a memory 210 having a datain port 212, a dataout port 214, a low water mark 230, a write data pointer 220, and a read data pointer 225. Data is input to the memory 210 on lines 212. The write data pointer 220 indicates the location where new data received on datain lines 212 should be written. Incoming data fills memory locations above the write pointer 220 in the order that it is received. Data is output on lines 214, and data shifts downward each time data is output. For example, data in location 270 is shifted to location 272, and the write pointer moves down one location, when data at location 262 is output on dataout lines 214.

When the write data pointer indicating the last valid data stored in the memory reaches the low water mark, a request is sent to the memory interface 120. To vary the time at which a request is made, an embodiment of the present invention changes the low water mark from position 230 to position 250. These positions are separated by X 240.
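The varying low water mark described above can be modeled with a short sketch, assuming occupancy is tracked as the difference between the write and read pointers. All names here are illustrative, not taken from the patent.

```python
import random

class ScanoutFifo:
    """Minimal FIFO occupancy model: a refill request is raised when
    the amount of buffered data falls below (low_water_mark + X),
    where the offset X is re-drawn pseudo-randomly so that requests
    are not made at a fixed point in every frame."""

    def __init__(self, depth, low_water_mark, max_offset, seed=0):
        self.low_water_mark = low_water_mark
        self.max_offset = max_offset
        self.rng = random.Random(seed)   # pseudo-random source for X
        self.occupancy = depth           # FIFO starts full
        self.new_offset()

    def new_offset(self):
        # X may be positive or negative; here it is re-drawn per frame
        self.offset = self.rng.randint(-self.max_offset, self.max_offset)

    def drain(self, n=1):
        """The scanout consumes n entries; returns True when a refill
        request should be made to the memory interface."""
        self.occupancy = max(0, self.occupancy - n)
        return self.occupancy < self.low_water_mark + self.offset

fifo = ScanoutFifo(depth=64, low_water_mark=16, max_offset=4, seed=0)
drains_until_request = 0
while not fifo.drain():
    drains_until_request += 1
```

Because X changes from frame to frame, the request is issued after a different number of entries have been scanned out in each frame, dispersing the resulting ground bounce in time.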
Again, the value of X may be random or pseudo-random, or determined by some other algorithm; it may be positive or negative in value, and it may change after one or more frames, or one or more memory requests. The value of X may be generated or determined by a random number generator. Alternately, the value of the low water mark itself may be generated or determined by a random number generator.

In a practical implementation, the data is not shifted through the memory for each read. Rather, data written to a location remains at that location until it is overwritten. The write pointer indicates the location where new data received on datain lines 212 should be written, and the read pointer 225 indicates the last location that data was read from (or the next location to read data from). In this implementation, the low water mark is not an absolute location, but rather a difference between the write pointer 220 and read pointer 225 locations. It will be appreciated by one skilled in the art that other specific implementations may be used consistent with embodiments of the present invention. For example, an implementation similar to the conceptual implementation above may be made using shift registers.

FIG. 3 is also a conceptual representation of a FIFO that may be used in a client, such as a scanout engine, for buffering data from the memory interface 120. Included are a FIFO 310 having a data input port 312, a data output port 314, a write data pointer 320, a low water mark 330, and a new low water mark 350. In this example, the new low water mark 350 has been moved below the previous low water mark 330 by an amount X 340. As before, the value of X may be random or pseudo-random, or determined by some other algorithm; it may be positive or negative in value, and it may change after one or more frames, or one or more memory requests.

FIG. 4 is a block diagram of a specific circuit implementation of an embodiment of the present invention that modifies the low water mark of a FIFO such that memory accesses are varied in order to disperse ground noise. Included are a memory interface 420, a scanout engine 430, and an additional clientN 490. As indicated, there may be one or more additional clients. The scanout engine 430 includes a FIFO made up of a memory 445, a write pointer 450, a read pointer 455, a low water mark 460, a number generator 465, summing nodes 470 and 475, and a comparator 480. The scanout engine 430 also includes additional scanout circuitry 485. One skilled in the art will appreciate that other specific circuits may be used to incorporate embodiments of the present invention. For example, the low water mark itself may be varied or randomized in some manner; for instance, it may be generated by a random number generator.

Data is received by the memory 445 on the datain line 446 and provided by the memory to the additional scanout circuitry 485 on dataout line 447. As data is read out of the memory 445, the amount of valid data in memory is diminished and the write pointer 450 and read pointer 455 approach each other in value, that is, the difference between the two is reduced. This difference is provided on line 472 to the comparator 480.

The low water mark 460 and difference amount X 465 are summed and provided on line 474 to the comparator 480. The comparator 480 compares the modified low water mark with the amount of data remaining in the memory 445. When the amount of valid data remaining in the memory 445 falls below the modified low water mark, the comparator provides a need data signal on line 482 to the additional scanout circuitry 485. The additional scanout circuitry 485 requests data from the memory interface 420 over request line 486. At an appropriate time, the memory interface grants a request by sending a signal back on line 488.
Thus, the memory 445 drains to the modified low water mark provided on lines 474, and is then at least partially refilled.

Again, in a specific situation, the scanout engine may be providing data while the other clients or engines are idling. Varying or modifying the low water mark shifts data requests to the memory interface, and thus data reads from the memory. This prevents the scanout engine from accessing the memory in a periodic or synchronous fashion that might cause ground bounce that would consistently distort one or more specific pixels during each screen retrace. Though the specific example shown is a scanout engine, embodiments of the present invention may be used in other circuits in a graphics system.

In this specific example, summing node 475 is shown as adding the low water mark 460 to the difference X 465. In other embodiments, the difference X 465 may be subtracted from the low water mark 460.

FIG. 5 is a block diagram of a circuit that may be used to change the timing of memory requests by a client, such as a scanout engine, such that ground bounce and electrical coupling problems caused by reading data from a graphics memory are at least less visible on a display monitor. Included are a memory interface 520, a scanout engine 530, and a clientN 590. As indicated, there may be one or more other clients. The scanout engine 530 includes a FIFO, which includes a memory 545, a write pointer 550, a read pointer 555, a summing node 570, a comparator 580, and a delay 560. Also included in the scanout engine is the additional scanout circuitry block 585.

Data is received by the memory 545 on the datain lines 546 and provided by the memory to the additional scanout circuitry 585 on dataout lines 547. Again, as data is read out of the memory 545, the amount of valid data in memory is diminished and the write pointer 550 and read pointer 555 approach each other in value, that is, the difference between the two is reduced.
This difference is provided on line 572 to the comparator 580. The low water mark is provided on line 574 to the comparator 580. The comparator 580 compares the low water mark with the amount of data remaining in the memory 545. When the amount of valid data remaining in the memory 545 falls below the low water mark on line 574, the comparator provides a need data signal on line 582 to the delay block 560. This delay block delays the need data signal and provides it to the additional scanout circuitry 585.

As with all the included examples and other embodiments of the present invention, this delay may be for a number of pixel or other clock cycles, or another measuring unit may be used. The value of the delay may be random or pseudo-random, or determined by some other algorithm, and it may change after one or more frames, or one or more memory requests. The duration of the delay may be determined by a random number generator. For example, a random number generator may generate a number, and the delay may be approximately that number of pixel clocks in duration.

The additional scanout circuitry 585 requests data from the memory interface 520 over request line 586. At an appropriate time, the memory interface grants a request by sending a signal back on line 588. Thus, the memory 545 drains to the low water mark provided on lines 574, and is then at least partially refilled.

By varying or modifying the delay time in the signal path from the FIFO to the remainder of the scanout engine, data requests to the memory interface, and thus data reads from the memory, are shifted. This prevents the scanout engine from accessing the memory in a periodic or synchronized manner that might cause ground bounce that would consistently distort one or more specific pixels during each screen retrace. Again, though the specific example shown is a scanout engine, this and other embodiments of the present invention may be used in other circuits in a graphics system.

FIG. 6 is a block diagram of another specific circuit that may be used to change the timing of memory requests by a client, such as a scanout engine, such that ground bounce and electrical coupling problems caused by reading data from a graphics memory are at least less visible on a display monitor. Included are a memory interface 620, a scanout engine 630, and a clientN 690. As indicated, there may be one or more other clients. The scanout engine 630 includes a FIFO, which includes a memory 645, a write pointer 650, a read pointer 655, a summing node 670, a comparator 680, and a delay 660. Also included in the scanout engine is the additional scanout circuitry block 685.

Data is received by the memory 645 on the datain lines 646 and provided by the memory to the additional scanout circuitry 685 on dataout lines 647. Again, as data is read out of the memory 645, the amount of valid data in memory is diminished and the write pointer 650 and read pointer 655 approach each other in value, that is, the difference between the two is reduced. This difference is provided on line 672 to the comparator 680.

The low water mark is provided on line 674 to the comparator 680. The comparator 680 compares the low water mark with the amount of data remaining in the memory 645. When the amount of valid data remaining in the memory 645 falls below the low water mark on line 674, the comparator provides a need data signal on line 682 to the additional scanout circuitry 685.

The additional scanout circuitry 685 requests data from the memory interface 620 over request line 686. This request is delayed by the delay block 660, which provides it to the memory interface 620. As before, this delay may be for a number of pixel or other clock cycles, or another measuring unit may be used. Again, the value of the delay may be random or pseudo-random, or determined by some other algorithm, and it may change after one or more frames, or one or more memory requests.
At an appropriate time, the memory interface grants the request by sending a signal back on line 688. Thus, the memory 645 drains to the low water mark provided on line 674, and is then at least partially refilled. By varying or modifying the delay time in the signal path from the scanout engine to the memory interface, data requests to the memory interface, and thus data reads from the memory, are shifted in time. This prevents the scanout engine from accessing the memory in a periodic fashion that might cause ground bounce that would consistently distort one or more specific pixels during each screen retrace. Again, though this specific example is a scanout engine, this and other embodiments of the present invention may be used in other circuits in a graphics system. FIG. 7 is a block diagram of another specific circuit that may be used to change the timing of memory requests by a client such as a scanout engine, such that ground bounce and electrical coupling problems caused by reading data from a graphics memory are at least less visible on a display monitor. Included are a graphics memory 710, a memory interface 720, and various clients including a client0 730, a client1 740, and a clientN 750. As indicated, there may be one or more other clients. The memory interface 720 writes and reads data to and from the graphics memory 710. In this specific example, the memory interface 720 sends read requests on lines 712 to the delay block 760, which delays each request before providing it to the graphics memory 710. This delay may be for a number of pixel or other clock cycles, or another measuring unit may be used. The value of the delay may be random or pseudo-random, or determined by some other algorithm, and it may change after one or more frames, or after one or more memory requests. The memory interface provides write requests on lines 714 to the graphics memory 710, which provides and receives data to and from the memory interface on lines 716.
The read and write requests on lines 712 and 714 may include read and write signals, memory address locations, and other information, such as instructions regarding burst or page-mode reads from the graphics memory 710. By varying or modifying the delay time in the read signal path from the memory interface to the graphics memory, data reads from the memory are shifted in time. This prevents the scanout or other engine from accessing the memory in a periodic fashion that might cause ground bounce that would consistently distort one or more specific pixels during each screen retrace. In another embodiment of the present invention, the memory interface itself delays the read requests sent on lines 712, and a separate delay block 760 is not required. The foregoing description of specific embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teachings above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.
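The randomized request-delay mechanism described in connection with FIGS. 5 through 7 can be sketched in software. The following is a minimal, hypothetical model (the class and parameter names are illustrative, not from the description): a FIFO drains, a comparator detects the low-water-mark crossing, and the resulting refill request is held for a freshly drawn random number of pixel clocks, so successive refills do not recur with a fixed period.

```python
import random

class ScanoutFifo:
    """Hypothetical sketch of the low-water-mark/delay logic of FIGS. 5-7.

    The delay is re-randomized for each refill request, so requests to the
    memory interface are not issued on a fixed period (the ground-bounce
    mitigation described in the text).
    """

    def __init__(self, depth=64, low_water_mark=16, max_delay=8):
        self.depth = depth
        self.low_water_mark = low_water_mark
        self.max_delay = max_delay
        self.valid = depth          # amount of valid data currently in the FIFO
        self.pending_delay = None   # pixel clocks left before the request is issued

    def drain(self, words=1):
        """Scanout circuitry reads data out; the read/write pointers converge."""
        self.valid = max(0, self.valid - words)
        if self.valid < self.low_water_mark and self.pending_delay is None:
            # Comparator asserts "need data"; the delay block holds it for a
            # random number of pixel clocks before it reaches the memory interface.
            self.pending_delay = random.randint(0, self.max_delay)

    def tick(self):
        """Advance one pixel clock. Returns True when the delayed request issues."""
        if self.pending_delay is None:
            return False
        if self.pending_delay > 0:
            self.pending_delay -= 1
            return False
        self.pending_delay = None
        self.valid = self.depth     # memory interface grants; FIFO is refilled
        return True
```

Because the delay is drawn anew for every refill, consecutive refills land on different clock offsets, which is the de-synchronization of memory accesses that the text describes.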
Magnetic memory cells, methods of fabrication, semiconductor device structures, and memory systems are disclosed. A magnetic cell core includes at least one magnetic region (e.g., a free region or a fixed region) configured to exhibit a vertical magnetic orientation, at least one oxide-based region, which may be a tunnel junction region or an oxide capping region, and at least one magnetic interface region, which may comprise or consist of iron (Fe). In some embodiments, the magnetic interface region is spaced from the at least one oxide-based region by a magnetic region. The presence of the magnetic interface region enhances the perpendicular magnetic anisotropy (PMA) strength of the magnetic cell core. In some embodiments, the PMA strength may be enhanced by more than 50% compared to that of the same magnetic cell core structure lacking the magnetic interface region.
1. A method of forming a memory cell, the method comprising: forming an oxide material over a substrate; forming a magnetic material on the oxide material, wherein forming the magnetic material comprises: forming a magnetic sub-region over the oxide material; and forming another magnetic sub-region over the magnetic sub-region; forming another oxide material over the magnetic material; forming an iron-based material between the magnetic sub-region and the another magnetic sub-region of the magnetic material, the iron-based material being located directly between the magnetic sub-region and the another magnetic sub-region; and patterning the oxide material, the magnetic material, the another oxide material, and the iron-based material to form a magnetic cell core, the magnetic cell core comprising a tunnel junction region from the oxide material, one of a free region and a fixed region from the magnetic material, a magnetic interface region from the iron-based material, and an oxide capping region from the another oxide material, the magnetic material exhibiting a vertical magnetic orientation.

2. The method of claim 1, further comprising annealing the oxide material, the magnetic material, the another oxide material, and the iron-based material.

3. The method of claim 1, wherein forming the iron-based material comprises forming the iron-based material by magnetron sputtering.

4. The method of claim 1, further comprising forming another iron-based material in contact with the oxide material.

5. The method of claim 1, wherein forming the iron-based material comprises forming a single layer of the iron-based material.

6. The method of claim 1, wherein forming the iron-based material comprises forming the iron-based material to have the same crystal orientation as the magnetic sub-region.

7. A method of forming a semiconductor device structure, the method comprising: forming a material structure, comprising: forming a magnetic material over a substrate, the magnetic material exhibiting a switchable magnetic orientation; forming another magnetic material over the substrate, the another magnetic material exhibiting a fixed magnetic orientation; forming a non-magnetic material vertically between the magnetic material and the another magnetic material; forming an oxide-based non-magnetic material isolated from the non-magnetic material by the magnetic material; and forming a single layer of iron-based material directly between a magnetic sub-region and another magnetic sub-region of the magnetic material; and patterning the material structure to form the semiconductor device structure.

8. The method of claim 7, wherein forming the magnetic material, forming the another magnetic material, and forming the non-magnetic material comprise: forming the another magnetic material before forming the magnetic material and before forming the non-magnetic material; forming the non-magnetic material over the another magnetic material; and forming the magnetic material over the non-magnetic material.

9. The method of claim 7, wherein forming the material structure further comprises forming another single layer of iron-based material in contact with the another magnetic sub-region, the single layer of iron-based material being spaced apart from the another single layer of iron-based material.

10. The method of claim 7, wherein forming the material structure further comprises forming another single layer of iron-based material in contact with the another magnetic material.

11. The method of claim 7, wherein forming the single layer of iron-based material comprises forming a single layer of cobalt iron (CoFe).

12. The method of claim 7, wherein forming the single layer of iron-based material comprises forming a single layer of iron.

13. A method of forming a semiconductor device, the method comprising: forming a material structure, comprising: forming a magnetic material over a non-magnetic oxide material, the magnetic material exhibiting a switchable magnetic orientation; forming another non-magnetic oxide material over the magnetic material; forming another magnetic material isolated from the magnetic material by one of the non-magnetic oxide material and the another non-magnetic oxide material, the another magnetic material exhibiting a fixed magnetic orientation; and forming, by magnetron sputtering, an iron-based material directly between a magnetic sub-region of the magnetic material and another magnetic sub-region of the magnetic material, the iron-based material defining a thickness of less than about 10 angstroms; and patterning the material structure to form the semiconductor device.

14. The method of claim 13, wherein forming the material structure further comprises forming another iron-based material defining a thickness of less than about 10 angstroms, the iron-based material being isolated from the another iron-based material.

15. The method of claim 13, wherein forming the iron-based material between the sub-regions of the magnetic material comprises forming the iron-based material directly adjacent to a spacer material between the sub-regions of the magnetic material.

16. The method of claim 13, wherein forming the magnetic material over the non-magnetic oxide material comprises forming the magnetic material to comprise a ferromagnetic material.

17. The method of claim 13, wherein forming the iron-based material by magnetron sputtering comprises forming the iron-based material to a thickness of less than about 5 angstroms.

18. A magnetoresistive structure, comprising: a free region configured to exhibit a switchable vertical magnetic orientation; a fixed region configured to exhibit a fixed perpendicular magnetic orientation; a non-magnetic region between the free region and the fixed region; and a magnetic interface region between magnetic sub-regions of the free region.

19. The magnetoresistive structure of claim 18, wherein the magnetic interface region comprises cobalt iron.

20. The magnetoresistive structure of claim 18, wherein the magnetic interface region has a thickness of less than about 10 angstroms.

21. The magnetoresistive structure of claim 18, further comprising another magnetic interface region disposed on one side of the free region.

22. The magnetoresistive structure of claim 18, further comprising another magnetic interface region adjacent to the fixed region.

23. A semiconductor device, comprising: a free region exhibiting a switchable magnetic orientation, the free region being between an oxide region and another oxide region; a fixed region exhibiting a fixed vertical magnetic orientation on one side of the oxide region; and a magnetic interface region between magnetic sub-regions of the free region, the magnetic interface region having a thickness of less than about 10 angstroms.

24. The semiconductor device of claim 23, wherein the magnetic interface region comprises iron.

25. The semiconductor device of claim 23, wherein the oxide region comprises an oxide of magnesium, aluminum, or titanium.

26. The semiconductor device of claim 23, further comprising another magnetic interface region contacting at least one of the free region and the fixed region.

27. The semiconductor device of claim 23, wherein the free region comprises iron and at least one of cobalt and boron.

28. A memory system, comprising: at least one semiconductor structure, the at least one semiconductor structure comprising: a free region configured to exhibit a switchable vertical magnetic orientation; an oxide region over the free region; a fixed region over the oxide region and configured to exhibit a fixed perpendicular magnetic orientation; and a magnetic interface region between magnetic sub-regions of the free region; and a processor coupled to the at least one semiconductor structure.

29. The system of claim 28, wherein the magnetic interface region comprises iron.

30. The system of claim 28, wherein the magnetic interface region has a thickness of less than about 10 angstroms.

31. The system of claim 28, further comprising another magnetic interface region contacting at least one of the free region and the fixed region.
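As a rough aid to reading claim 28, the recited vertical stack can be sketched as an ordered list. This is a hypothetical bottom-to-top ordering chosen for illustration; the claim itself only recites the relative relationships (magnetic interface region between the sub-regions of the free region, oxide region over the free region, fixed region over the oxide region).

```python
# Hypothetical bottom-to-top ordering of the stack recited in claim 28.
# Region names follow the claim; their absolute positions are an assumption.
stack = [
    "substrate",
    "free region / magnetic sub-region",
    "magnetic interface region",              # between the sub-regions of the free region
    "free region / another magnetic sub-region",
    "oxide region",                           # over the free region
    "fixed region",                           # over the oxide region
]

def index(name):
    """Return the position of the first region whose name starts with `name`."""
    return next(i for i, region in enumerate(stack) if region.startswith(name))

# The claim's spatial relationships hold in this ordering:
assert index("free region") < index("oxide region") < index("fixed region")
assert (index("free region / magnetic sub-region")
        < index("magnetic interface region")
        < index("free region / another magnetic sub-region"))
```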
Memory cells, methods of fabrication, semiconductor device structures, and memory systems

Information about divisional application: This case is a divisional application. The parent case of this division is an invention patent application with a filing date of March 10, 2014, an application number of 201480013988.3, and an invention title of "Memory cells, methods of fabrication, semiconductor device structures, and memory systems."

Priority claim: This application claims the benefit of the filing date of U.S. Patent Application No. 13/797,185, "Memory Cells, Methods of Fabrication, Semiconductor Device Structures, and Memory Systems," filed March 12, 2013.

Technical field: The present invention, in various embodiments, relates generally to the field of memory device design and fabrication. More specifically, the present invention relates to the design and fabrication of memory cells characterized as spin torque transfer magnetic random access memory (STT-MRAM) cells.

Background: Magnetic random access memory (MRAM) is a non-volatile computer memory technology based on magnetoresistance. One type of MRAM cell is the spin torque transfer MRAM (STT-MRAM) cell, which contains a magnetic cell core supported by a substrate. The magnetic cell core includes at least two magnetic regions, for example a "fixed region" and a "free region," with a non-magnetic region between them.
The fixed region contains a magnetic material with a fixed (e.g., non-switchable) magnetic orientation, while the free region contains a magnetic material whose orientation can be switched, during operation of the cell, between a "parallel" configuration, in which the magnetic orientation of the fixed region and the magnetic orientation of the free region are directed in the same direction (for example, north and north, east and east, south and south, or west and west, respectively), and an "anti-parallel" configuration, in which the magnetic orientation of the fixed region and the magnetic orientation of the free region are directed in opposite directions (for example, north and south, east and west, south and north, or west and east, respectively). In the parallel configuration, the STT-MRAM cell exhibits a lower electrical resistance across the magnetoresistive elements (i.e., the fixed and free regions). This relatively low resistance state can be defined as the "0" state of the MRAM cell. In the anti-parallel configuration, the STT-MRAM cell exhibits a higher electrical resistance across the magnetoresistive elements (i.e., the regions of magnetic material, such as the fixed and free regions). This relatively high resistance state can be defined as the "1" state of the MRAM cell. Switching of the magnetic orientation of the free region, and of the resulting high or low resistance state across the magnetoresistive elements, enables the conventional write and read operations of the MRAM cell. Ideally, the amount of programming current required to switch the free region from the parallel configuration to the anti-parallel configuration is substantially the same as the amount of programming current required to switch from the anti-parallel configuration to the parallel configuration.
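The parallel/anti-parallel state mapping described above can be sketched as follows. The resistance values here are arbitrary placeholders; the description fixes only their relative order (parallel is the lower resistance, read as "0", and anti-parallel is the higher resistance, read as "1").

```python
# Illustrative placeholder resistances; only the relative order is specified
# in the description (parallel -> lower R -> "0", anti-parallel -> higher R -> "1").
R_PARALLEL = 1.0
R_ANTIPARALLEL = 2.0

def read_state(fixed_orientation, free_orientation):
    """Map the relative orientation of the fixed and free regions to a bit.

    Returns (bit, resistance): ("0", low R) when the orientations are
    parallel, ("1", high R) when they are anti-parallel.
    """
    parallel = fixed_orientation == free_orientation
    resistance = R_PARALLEL if parallel else R_ANTIPARALLEL
    return ("0" if parallel else "1"), resistance

assert read_state("north", "north") == ("0", R_PARALLEL)       # parallel -> "0"
assert read_state("north", "south") == ("1", R_ANTIPARALLEL)   # anti-parallel -> "1"
```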
Such equal programming currents for switching are referred to herein as "symmetrical switching." The free region and the fixed region of an STT-MRAM cell may exhibit magnetic orientations oriented either horizontally ("in-plane") or vertically ("out-of-plane") with respect to the width of the regions. In an STT-MRAM cell with perpendicularly oriented magnetic regions, a magnetic material exhibiting a perpendicular magnetic orientation may be characterized by the strength of its perpendicular magnetic anisotropy ("PMA"). This strength (also referred to herein as "magnetic strength" or "PMA strength") is an indication of the magnetic material's resistance to alteration of its magnetic orientation. A magnetic material exhibiting a perpendicular magnetic orientation with high PMA strength may be less inclined to change its magnetic orientation away from the perpendicular orientation than a magnetic material exhibiting a perpendicular magnetic orientation with lower magnetic strength. However, achieving high PMA strength may not, by itself, be sufficient for successful STT-MRAM cell operation. For example, a low resistance-area product (RA), a low switching current, a low switching voltage, and symmetrical switching also contribute to the successful operation of STT-MRAM cells. However, finding materials and designs that exhibit high PMA strength without adversely affecting the other operational characteristics of the STT-MRAM cell (especially the RA of the cell) can present challenges.

Summary of the invention: The present invention discloses a memory cell. The memory cell includes a magnetic cell core on a substrate. The magnetic cell core includes a magnetic region between an oxide region and another oxide region. The magnetic region exhibits a perpendicular magnetic orientation.
The magnetic cell core also includes a magnetic interface region between the oxide region and the another oxide region. The present invention also discloses a memory cell including a magnetic cell core, the magnetic cell core including a free region configured to exhibit a switchable vertical magnetic orientation and a fixed region configured to exhibit a fixed vertical magnetic orientation. A non-magnetic region is between the free region and the fixed region. A magnetic interface region is separated from the non-magnetic region by one of the free region and the fixed region. The present invention also discloses a method of forming a memory cell. The method includes forming an oxide material over a substrate. A magnetic material is formed over the oxide material. Another oxide material is formed over the magnetic material. An iron-based material is formed between the magnetic material and one of the oxide material and the another oxide material. The oxide material, the magnetic material, the another oxide material, and the iron-based material are patterned to form a magnetic cell core. The magnetic cell core includes a tunnel junction region from the oxide material, one of a free region and a fixed region from the magnetic material, a magnetic interface region from the iron-based material, and an oxide capping region from the another oxide material. The magnetic material exhibits a perpendicular magnetic orientation. The present invention also discloses a semiconductor device structure including a spin torque transfer magnetic random access memory (STT-MRAM) array. The STT-MRAM array includes a plurality of STT-MRAM cells. Each of the plurality of STT-MRAM cells includes a cell core, and the cell core includes a non-magnetic region between a magnetic region and another magnetic region. Each of the magnetic region and the another magnetic region is configured to exhibit a perpendicular magnetic orientation.
The cell core also includes an oxide region separated from the non-magnetic region by one of the magnetic region and the another magnetic region. The cell core also includes a magnetic interface region between the oxide region and the non-magnetic region. The present invention also discloses a spin torque transfer magnetic random access memory (STT-MRAM) system. The STT-MRAM system includes a magnetic cell core and a plurality of conductive materials in communication with the magnetic cell core. The magnetic cell core includes a magnetic interface region on or in a magnetic region. The magnetic region is configured to exhibit a perpendicular magnetic orientation. The magnetic cell core also includes an oxide region separated from the magnetic interface region.

Brief description of the drawings:

FIG. 1 is a schematic cross-sectional front view of a magnetic cell core of an STT-MRAM cell including a magnetic interface region disposed directly between a free region and a magnetic tunnel junction region.

FIG. 2 is a schematic cross-sectional front view of a magnetic cell core of an STT-MRAM cell including a magnetic interface region disposed directly between a free region and an oxide capping region.

FIG. 3 is a schematic cross-sectional front view of a magnetic cell core of an STT-MRAM cell including a magnetic interface region disposed directly between a magnetic sub-region of the free region and the oxide capping region.

FIG. 4 is a schematic cross-sectional front view of a magnetic cell core of an STT-MRAM cell including a magnetic interface region disposed within a free region.

FIG. 5 is a schematic cross-sectional front view of the magnetic cell core of an STT-MRAM cell containing two magnetic interface regions, one of which is disposed directly between the free region and the oxide capping region and the other of which is disposed directly between the free region and the magnetic tunnel junction region.

FIG. 6 is a schematic cross-sectional front view of the magnetic cell core of an STT-MRAM cell containing four magnetic
interface regions, one pair of which is disposed on the top and bottom of the free region and the other pair of which is disposed on the top and bottom of the fixed region.

FIG. 7 is a schematic cross-sectional front view of the magnetic cell core of an STT-MRAM cell including one magnetic interface region within the free region and another magnetic interface region on top of the fixed region.

FIG. 8 is a schematic diagram of an STT-MRAM system with memory cells according to an embodiment of the present invention.

FIG. 9 is a simplified block diagram of a semiconductor device structure including memory cells of an embodiment of the present invention.

FIG. 10 is a simplified block diagram of a system implemented in accordance with one or more embodiments of the present invention.

FIG. 11 is a graph comparing measured PMA strength of a magnetic cell core incorporating a magnetic interface region with that of a magnetic cell core lacking a magnetic interface region.

Detailed description: The present invention discloses memory cells, semiconductor device structures including such memory cells, memory systems, and methods of forming such memory cells. The memory cell includes a magnetic region that exhibits a perpendicular magnetic orientation, such as a free region or a fixed region. The memory cell also includes at least one oxide region, such as one or more of an oxide-based magnetic tunnel junction ("MTJ") region and an oxide capping region. Disposed directly or indirectly between the magnetic region and the oxide region is a magnetic interface region. The magnetic interface region is configured to increase the PMA strength of the memory cell, compared with a memory cell lacking the magnetic interface region, without significantly adversely affecting other characteristics of the memory cell, such as the resistance-area product of the memory cell.
For example, even with enhanced PMA strength (for example, a uniaxial anisotropy magnetic field (Hk) exceeding about 4,000 Oe (oersteds) (about 318.3 kA/m)), a low RA (for example, less than about 20 Ω·μm² (ohm-square-micrometers)) may be maintained. Therefore, the magnetic interface region can enhance the operating performance of the magnetic region (for example, a free region or a fixed region) in a magnetic memory cell structure suited to high data retention times and low-power operation.

As used herein, the term "substrate" means and includes a base material or other construction upon which components, such as those within a memory cell, are formed. The substrate may be a semiconductor substrate, a base semiconductor material on a supporting structure, a metal electrode, or a semiconductor substrate having one or more materials, structures, or regions formed thereon. The substrate may be a conventional silicon substrate or another bulk substrate comprising semiconductive material. As used herein, the term "bulk substrate" means and includes not only silicon wafers, but also silicon-on-insulator ("SOI") substrates (e.g., silicon-on-sapphire ("SOS") substrates or silicon-on-glass ("SOG") substrates), epitaxial layers of silicon on a base semiconductor foundation, and other semiconductor or optoelectronic materials (e.g., silicon-germanium (Si1−xGex, where x is, for example, a mole fraction between 0.2 and 0.8), germanium (Ge), gallium arsenide (GaAs), gallium nitride (GaN), or indium phosphide (InP), among others). In addition, when reference is made to a "substrate" in the following description, previous process stages may have been used to form materials, regions, or junctions in the base semiconductor structure or foundation.

As used herein, the term "STT-MRAM cell" means and includes a magnetic cell structure including a non-magnetic region disposed between a free region and a fixed region.
The non-magnetic region may be an electrically insulating (e.g., dielectric) region, in a magnetic tunnel junction ("MTJ") configuration. Alternatively, the non-magnetic region may be an electrically conductive region, in a spin-valve configuration.

As used herein, the term "cell core" means and includes a memory cell structure comprising the free region and the fixed region, through which, during use and operation of the memory cell, current is passed to effect a parallel or anti-parallel magnetic orientation within the free region.

As used herein, the term "vertical" means and includes a direction that is perpendicular to the width and length of the respective region. "Vertical" may also mean and include a direction that is perpendicular to a primary surface of the substrate on which the STT-MRAM cell is located.

As used herein, the term "horizontal" means and includes a direction that is parallel to at least one of the width and length of the respective region. "Horizontal" may also mean and include a direction that is parallel to a primary surface of the substrate on which the STT-MRAM cell is located.

As used herein, the term "magnetic material" means and includes ferromagnetic materials, ferrimagnetic materials, and antiferromagnetic materials.

As used herein, the term "iron-based material" means and includes a material containing iron. For example, and without limitation, iron-based materials include pure iron, iron alloys, and materials including cobalt and iron. The composition of an iron-based material may be altered by annealing of the iron-based material during fabrication of the magnetic memory cell, but such a material may nonetheless be referred to herein as an iron-based material.

As used herein, the term "magnetic region" means and includes a region that exhibits magnetism. A magnetic region includes a magnetic material and may also include one or more non-magnetic materials.

As used herein, the term "sub-region" means and includes a region included in another region.
Therefore, a magnetic region may include one or more magnetic sub-regions (i.e., sub-regions of magnetic material) as well as non-magnetic sub-regions (i.e., sub-regions of non-magnetic material).

As used herein, the term "fixed region" means and includes a magnetic region within the STT-MRAM cell that includes a magnetic material and has a fixed magnetic orientation during use and operation of the STT-MRAM cell, in that a current or applied field effecting a change in the magnetization direction of one magnetic region (e.g., the free region) of the cell core may not effect a change in the magnetization direction of the fixed region. The fixed region may include one or more magnetic materials and, optionally, one or more non-magnetic materials. For example, the fixed region may be configured as a synthetic antiferromagnet (SAF) including a sub-region of ruthenium (Ru) adjoined by magnetic sub-regions. Each of the magnetic sub-regions may include one or more materials and one or more regions therein. As another example, the fixed region may be configured as a single, homogeneous magnetic material. Accordingly, the fixed region may have a uniform magnetization, or sub-regions of differing magnetizations that, overall, effect a fixed magnetic orientation during use and operation of the STT-MRAM cell.

As used herein, the term "free region" means and includes a magnetic region within the STT-MRAM cell that includes a magnetic material and has a switchable magnetic orientation during use and operation of the STT-MRAM cell. The magnetic orientation may be switched between a "parallel" configuration, in which the magnetic orientation exhibited by the free region and the magnetic orientation exhibited by the fixed region are directed in the same direction, and an "anti-parallel" configuration, in which the magnetic orientations exhibited by the free region and the fixed region are directed in opposite directions.

As used herein, the term "oxide region" means and includes a region within the STT-MRAM cell that includes an oxide material. For example, and without limitation, the oxide region may include an oxide-based MTJ region, an oxide capping region, or both.

As used herein, the term "between" is a spatially relative term used to describe the relative disposition of one material, region, or sub-region with respect to at least two other materials, regions, or sub-regions. The term "between" can encompass both a disposition of one material, region, or sub-region directly adjacent to the other materials, regions, or sub-regions, and a disposition of one material, region, or sub-region not directly adjacent to the other materials, regions, or sub-regions.

As used herein, an element referred to as being "on" or "over" another element means and includes the element being directly on top of, adjacent to, underneath, or in direct contact with the other element. It also includes the element being indirectly on top of, adjacent to, underneath, or near the other element, with another element present between them. In contrast, when an element is referred to as being "directly on" or "directly adjacent to" another element, no intervening elements are present.

As used herein, spatially relative terms, such as "below," "lower," "under," "bottom," "above," "upper," "top," "front," "rear," "left," "right," and the like, may be used for ease of description to describe one element's or feature's relationship to another element or feature as illustrated in the figures. Unless otherwise specified, the spatially relative terms are intended to encompass different orientations of the materials in addition to the orientation depicted in the figures.
For example, if materials in the figures are inverted, elements described as "below" or "under" or "on the bottom of" other elements or features would then be oriented "above" or "on top of" the other elements or features. Thus, the term "below" can encompass both an orientation of above and below, depending on the context in which the term is used, which will be evident to one of ordinary skill in the art. The materials may be otherwise oriented (rotated 90 degrees, inverted, etc.), and the spatially relative descriptors used herein are to be interpreted accordingly.

As used herein, the terms "comprises" ("comprises," "comprising") and/or "includes" ("includes," "including") specify the presence of stated features, regions, integers, stages, operations, elements, materials, components, and/or groups, but do not preclude the presence or addition of one or more other features, regions, integers, stages, operations, elements, materials, components, and/or groups thereof.

As used herein, "and/or" includes any and all combinations of one or more of the associated listed items.

As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

Any illustrations presented herein are not meant to be actual views of any particular component, structure, device, or system, but are merely idealized representations employed to describe embodiments of the present invention.

Embodiments are described herein with reference to cross-sectional illustrations that are schematic in nature. Accordingly, variations from the shapes depicted in the illustrations as a result, for example, of manufacturing techniques and/or tolerances are to be expected. Thus, embodiments described herein should not be construed as being limited to the particular shapes or regions as illustrated, but may include deviations in shapes that result, for example, from manufacturing.
For example, a region illustrated or described as box-shaped may have rough and/or non-linear features, and an illustrated sharp corner may be rounded. Thus, the materials, features, and regions illustrated in the figures are schematic in nature; their shapes are not intended to depict the precise shape of a material, feature, or region and do not limit the scope of the claims.

The following description provides specific details, such as material types and processing conditions, in order to provide a thorough description of embodiments of the disclosed devices and methods. However, one of ordinary skill in the art will understand that embodiments of the devices and methods may be practiced without employing these specific details. Indeed, the embodiments may be practiced in conjunction with conventional semiconductor fabrication techniques employed in the industry.

The fabrication processes described herein do not form a complete process flow for processing semiconductor device structures. The remainder of the process flow is known to one of ordinary skill in the art. Accordingly, only the methods and semiconductor device structures necessary to understand the embodiments of the devices and methods are described herein.

Unless the context indicates otherwise, the materials described herein may be formed by any suitable technique including, but not limited to, spin coating, blanket coating, chemical vapor deposition ("CVD"), atomic layer deposition ("ALD"), plasma-enhanced ALD, or physical vapor deposition ("PVD"). Alternatively, the materials may be grown in situ.
Depending on the specific material to be formed, the technique for depositing or growing the material may be selected by one of ordinary skill in the art.

Unless the context indicates otherwise, the removal of the materials described herein may be accomplished by any suitable technique including, but not limited to, etching, ion milling, abrasive planarization, or other known methods.

Reference will now be made to the drawings, in which like numerals refer to like components throughout. The drawings are not necessarily drawn to scale.

A memory cell is disclosed. The memory cell includes at least one magnetic region exhibiting a perpendicular magnetic orientation (e.g., a free region or a fixed region) and an oxide region (e.g., an MTJ region or an oxide cap region), with a magnetic interface region disposed, directly or indirectly, between the two. The magnetic interface region may enhance the PMA strength of the magnetic memory cell. The magnetic interface region may be located near or within its corresponding magnetic region. In some embodiments, the memory cell may include only one magnetic interface region; in other embodiments, the memory cell may include more than one magnetic interface region.

FIG. 1 illustrates a magnetic cell core 100 of an STT-MRAM cell according to an embodiment of the present invention. The magnetic cell core 100 is supported by a substrate 102. The magnetic cell core 100 includes at least two magnetic regions (e.g., a "fixed region" 110 and a "free region" 120) with a non-magnetic region 130 therebetween. One or more lower intermediate regions 140 and one or more upper intermediate regions 150 may, optionally, be disposed under and over, respectively, the magnetic regions (the fixed region 110 and the free region 120) of the magnetic cell core 100 structure.

In some embodiments, as illustrated in FIG. 1, the magnetic cell core 100 may include an optional conductive material forming a seed region 160 on the substrate 102. The seed region 160 (if present), or the lower intermediate region 140 (if the seed region 160 is absent), may be formed over a bottom conductive material (not shown), which may include, for example and without limitation, copper, tungsten, titanium, or a combination thereof. The seed region 160 (if present) may include, for example and without limitation, a nickel-based material and may be configured to control the crystal structure of the overlying materials or regions. The lower intermediate region 140 (if present) may include a material configured to ensure a desired crystal structure of the overlying materials in the magnetic cell core 100.

The STT-MRAM cell may be configured to exhibit a perpendicular magnetic orientation in at least one of the magnetic regions (e.g., the fixed region 110 and the free region 120). The perpendicular magnetic orientation may be characterized by the strength of the perpendicular magnetic anisotropy ("PMA"). As indicated by arrows 112 and 122 in FIG. 1, in some embodiments each of the fixed region 110 and the free region 120 may exhibit a perpendicular magnetic orientation. The magnetic orientation of the fixed region 110 may remain directed in substantially the same direction throughout operation of the STT-MRAM cell, for example, in the direction indicated by arrow 112 of FIG. 1. The magnetic orientation of the free region 120, on the other hand, may be switched during operation of the cell between a "parallel" configuration and an "anti-parallel" configuration, as indicated by the double-headed arrow 122 in FIG. 1.
In the parallel configuration, the magnetic orientation 122 of the free region 120 is directed in substantially the same direction (e.g., north) as the magnetic orientation 112 (e.g., north) of the fixed region 110, producing a lower electrical resistance across the magnetoresistive elements (i.e., the fixed region 110 and the free region 120). This state may be defined as the "0" state of the STT-MRAM cell. In the anti-parallel configuration, the magnetic orientation 122 of the free region 120 is directed in substantially the opposite direction (e.g., south) to the magnetic orientation 112 (e.g., north) of the fixed region 110, producing a higher electrical resistance across the magnetoresistive elements (i.e., the fixed region 110 and the free region 120). This state may be defined as the "1" state of the STT-MRAM cell.

In use and operation, a programming current may be passed through an access transistor (not shown) and the magnetic cell core 100. The fixed region 110 in the magnetic cell core 100 polarizes the electron spin of the programming current. The spin-polarized electron current then interacts with the free region 120 by exerting a torque on the free region 120. When the spin-polarized electron current passing through the free region 120 exceeds the critical switching current density (Jc) of the free region 120, the torque exerted by the spin-polarized electron current is sufficient to switch the magnetization direction of the free region 120, for example, between a north-directed and a south-directed magnetic orientation.
Thus, the programming current can be used to align the magnetic orientation 122 of the free region 120 either parallel or anti-parallel to the magnetic orientation 112 of the fixed region 110.

The free region 120 and the fixed region 110 may be formed of or include ferromagnetic materials, for example, Co, Fe, Ni, or alloys thereof (e.g., NiFe, CoFe, CoNiFe), doped alloys CoX, CoFeX, or CoNiFeX (X = B, Cu, Re, Ru, Rh, Hf, Pd, Pt, C), or other half-metallic ferromagnetic materials, for example, NiMnSb and PtMnSb. In some embodiments, the free region 120, the fixed region 110, or both may be formed of CoxFeyBz, where x = 10 to 80, y = 10 to 80, and z = 0 to 50. In other embodiments, the free region 120, the fixed region 110, or both may be formed of iron (Fe) and boron (B) without cobalt (Co). The compositions and structures (e.g., thicknesses and other physical dimensions) of the free region 120 and the fixed region 110 may be the same as or different from one another.

Alternatively or additionally, in some embodiments the free region 120, the fixed region 110, or both may be formed of or include multiple materials, some of which may be magnetic materials and some of which may be non-magnetic materials. For example, such a multi-material free region, fixed region, or both may include multiple sub-regions. For example, and without limitation, the free region 120, the fixed region 110, or both may be formed of or include repeating sub-regions of cobalt and platinum, wherein the sub-regions of platinum may be disposed between the sub-regions of cobalt.
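The state logic described above can be sketched as a small illustrative model. The resistance values and the critical switching current density below are hypothetical placeholders, not values taken from this disclosure:

```python
# Illustrative model of STT-MRAM cell states: a parallel free/fixed
# orientation gives a lower resistance ("0" state); an anti-parallel
# orientation gives a higher resistance ("1" state). All numeric values
# are assumed placeholders.

R_PARALLEL = 1_000.0      # ohms, lower resistance -> "0" state (assumed)
R_ANTIPARALLEL = 2_000.0  # ohms, higher resistance -> "1" state (assumed)
J_CRITICAL = 1e6          # A/cm^2, critical switching current density Jc (assumed)

class FreeRegion:
    """Free region whose perpendicular magnetic orientation can be switched."""

    def __init__(self, orientation="north"):
        self.orientation = orientation

    def apply_programming_current(self, density, polarity):
        # The spin-polarized current switches the free region only when
        # its density exceeds the critical switching current density Jc.
        if density > J_CRITICAL:
            self.orientation = polarity

    def state(self, fixed_orientation="north"):
        # Parallel -> low resistance, "0"; anti-parallel -> high resistance, "1".
        parallel = self.orientation == fixed_orientation
        return ("0", R_PARALLEL) if parallel else ("1", R_ANTIPARALLEL)

cell = FreeRegion()
cell.apply_programming_current(5e5, "south")  # below Jc: no switch, still "0"
cell.apply_programming_current(2e6, "south")  # above Jc: switches to "1"
print(cell.state())
```

The tuple returned by `state` mirrors the read operation described later in this disclosure, in which the resistance across the cell core distinguishes the two programmed states.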
As another example, and without limitation, the free region 120, the fixed region 110, or both may include repeating sub-regions of cobalt and nickel, wherein the sub-regions of nickel may be disposed between the sub-regions of cobalt.

The non-magnetic region 130 disposed between the fixed region 110 and the free region 120 may include a non-magnetic material, for example, a non-magnetic oxide material such as magnesium oxide (MgO), aluminum oxide (Al2O3), titanium oxide (TiO2), or another oxide material of a conventional MTJ region. Such an oxide-containing MTJ region may therefore be referred to herein as an "oxide-based MTJ region" or an "oxide-based non-magnetic region." The non-magnetic region 130 may include one or more such non-magnetic materials. Alternatively or additionally, the non-magnetic region 130 may include one or more sub-regions of non-magnetic materials.

As illustrated in FIG. 1, in some embodiments the magnetic cell core 100 may include an oxide cap region 170, which may include an oxide, for example, MgO, TiO2, tantalum pentoxide (Ta2O5), or a combination thereof. Such an oxide-containing cap region may therefore also be referred to herein as an "oxide-based non-magnetic region." In some embodiments, the oxide cap region 170 may include the same material, structure, or both as the non-magnetic region 130; for example, both the oxide cap region 170 and the non-magnetic region 130 may include magnesium oxide (e.g., MgO), aluminum oxide, titanium oxide, zinc oxide, hafnium oxide, ruthenium oxide, or tantalum oxide.

Optionally, the upper intermediate region 150 (if present) may include materials configured to ensure a desired crystal structure in adjacent materials of the magnetic cell core 100. The upper intermediate region 150 may alternatively or additionally include metal materials, barrier materials, or other materials of conventional STT-MRAM cell core structures that are configured to aid the patterning process during fabrication of the magnetic cell core 100.
In some embodiments, such as the embodiment illustrated in FIG. 1, the upper intermediate region 150 may include a conductive cap region, which may include one or more materials such as copper, tantalum, titanium, tungsten, ruthenium, tantalum nitride, or titanium nitride.

The magnetic cell core 100 according to the present invention further includes a magnetic interface region 180 between one of the magnetic regions or magnetic sub-regions (e.g., the fixed region 110, a magnetic sub-region of the fixed region 110, the free region 120, or a magnetic sub-region of the free region 120) and one of the oxide regions (e.g., the non-magnetic region 130 and the oxide cap region 170). As illustrated in FIG. 1, the magnetic interface region 180 may be disposed directly adjacent to one of the magnetic regions or magnetic sub-regions and one of the oxide regions. In the embodiment illustrated in FIG. 1, the magnetic interface region 180 is disposed directly on top of the non-magnetic region 130 and directly under the free region 120. As such, the magnetic interface region 180 is disposed between two oxide regions (i.e., between the oxide-based MTJ (e.g., the non-magnetic region 130) and the oxide cap region 170).

The magnetic interface region 180 may be configured to enhance the PMA strength of the magnetic cell core 100 or, more particularly, of its neighboring magnetic region (e.g., the free region 120 in the embodiment illustrated in FIG. 1). The increased PMA may be achieved while maintaining a low resistance-area product of the magnetic cell core 100, for example, less than about 20 Ω·µm² (ohm × square micrometer).
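For a sense of scale, a resistance-area bound of this kind implies a nominal cell resistance of RA/A for a junction of area A. A minimal sketch follows; the 50 nm cell diameter is an assumed example, not a dimension from this disclosure:

```python
import math

# A resistance-area (RA) product bound of about 20 ohm*um^2, as discussed
# above, implies a nominal cell resistance of RA / A for junction area A.
# The 50 nm cell diameter is an assumed example value.

RA_MAX_OHM_UM2 = 20.0
diameter_um = 0.05  # 50 nm (assumed)

area_um2 = math.pi * (diameter_um / 2) ** 2
resistance_ohm = RA_MAX_OHM_UM2 / area_um2
print(f"junction area: {area_um2:.6f} um^2")
print(f"nominal resistance at the RA bound: {resistance_ohm:.0f} ohm")
```

At these assumed dimensions the bound corresponds to a cell resistance on the order of 10 kΩ, comfortably within the range distinguishable by a sense amplifier.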
The magnetic interface region 180 may be formed of a magnetic material, for example, an iron-based material (e.g., pure iron (Fe) or an iron alloy) or, in some embodiments, a cobalt-iron (CoFe)-based material. The material of the magnetic interface region 180 may take the form of a single layer of iron or another iron-containing composition disposed between the non-magnetic region 130 and the oxide cap region 170. Alternatively or additionally, the magnetic interface region 180 may have a thickness (i.e., a height along an axis perpendicular to the upper surface of the substrate 102) of less than about 1.0 nm, for example, less than about 0.5 nm, such as about 0.3 nm. Thus, the magnetic interface region 180 may be thinner than its neighboring regions. For example, the overlying free region 120 of FIG. 1 may be formed to a thickness of about 1.5 nm to about 3.0 nm, and the underlying non-magnetic region 130 of FIG. 1 may be formed to a thickness of about 0.7 nm to about 1.0 nm.

The magnetic interface region 180 may be formed of a material formulated or otherwise configured to have the same crystal orientation as the material on which it is formed. For example, in the embodiment illustrated in FIG. 1, the magnetic interface region 180 may be formed of iron (Fe), for example by magnetron sputtering, such that it has the same crystal orientation as the MgO of the non-magnetic region 130.

The magnetic interface region 180 may thus be formed by, for example, magnetron sputtering. For example, the materials of the lower regions of the magnetic cell core 100 may be formed sequentially, for example as layers, and the magnetic material of the magnetic interface region 180 may then be formed over the previously formed materials.
Next, the materials of the upper regions of the magnetic cell core 100 may be formed sequentially, for example as layers, on the magnetic material of the magnetic interface region 180. The material of the magnetic interface region 180 is thereby disposed between two oxide-based materials (i.e., the oxide materials forming the non-magnetic region 130 and the oxide cap region 170).

After the materials of the magnetic cell core 100 are formed, they may be patterned to define the magnetic cell core 100 and its various regions. Techniques for forming and patterning the materials of the lower and upper regions of the magnetic cell core 100 are known in the art and are therefore not described in detail herein. For example, the magnetic cell core 100 may be formed by sequentially forming each of the materials of its regions, from base to top, and then patterning the materials to define the magnetic cell core 100. The magnetic cell core 100 structure may be annealed, before or after patterning, at a temperature of at least 150°C, for example, between about 150°C and about 400°C. Alternatively or additionally, the materials of the magnetic cell core 100 structure may be annealed during fabrication of the structure, for example, after one or more of its materials are formed and before others of its materials are formed.

In embodiments such as the embodiment illustrated in FIG. 1, in which the magnetic interface region 180 is disposed directly between the non-magnetic region 130 and the free region 120, and in which the magnetic interface region 180 is disposed between the non-magnetic region 130 and the oxide cap region 170, it is expected, without being limited to any particular theory, that the magnetic interface region 180 enables iron-oxygen bonding between the iron of the magnetic interface region 180 and the oxygen in the material of the neighboring oxide-based region (e.g., the non-magnetic region 130). Iron-oxygen bonding can contribute to the interfacial PMA strength, and it is expected that this contribution may be greater than the contribution from other oxygen bonds (e.g., cobalt-oxygen bonds). Thus, including the magnetic interface region 180 in the magnetic cell core 100 can achieve a stronger PMA than that of a magnetic cell core structure lacking a magnetic interface region 180 between the magnetic region (e.g., the free region 120) and the oxide region (e.g., the non-magnetic region 130).

Accordingly, disclosed is a memory cell comprising a magnetic cell core over a substrate. The magnetic cell core comprises a magnetic region between an oxide region and another oxide region. The magnetic region exhibits a perpendicular magnetic orientation. A magnetic interface region is disposed between the oxide region and the other oxide region.

Referring to FIG. 2, a magnetic cell core 200 is illustrated in which the magnetic interface region 180 is disposed between the non-magnetic region 130 and the oxide cap region 170, but over the free region 120. Thus, the non-magnetic region 130 is disposed on one side of the free region 120 (e.g., under the free region 120), and the magnetic interface region 180 is disposed on the other side of the free region 120 (e.g., over the free region 120).
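The placement difference between the cell cores of FIG. 1 and FIG. 2 can be summarized as ordered stacks. The labels below are informal shorthand for the numbered regions, not language from the disclosure:

```python
# Informal bottom-to-top stack orderings for the two cell cores described
# above (labels are shorthand for the numbered regions).

CORE_100 = [
    "substrate 102", "seed 160", "lower intermediate 140", "fixed 110",
    "non-magnetic (MTJ) 130", "magnetic interface 180", "free 120",
    "oxide cap 170", "upper intermediate 150",
]

CORE_200 = [
    "substrate 102", "seed 160", "lower intermediate 140", "fixed 110",
    "non-magnetic (MTJ) 130", "free 120", "magnetic interface 180",
    "oxide cap 170", "upper intermediate 150",
]

def neighbors(stack, region):
    """Return the regions directly below and above `region` in the stack."""
    i = stack.index(region)
    return stack[i - 1], stack[i + 1]

# In core 100 the interface directly contacts the MTJ oxide below it;
# in core 200 it directly contacts the oxide cap above it.
print(neighbors(CORE_100, "magnetic interface 180"))
print(neighbors(CORE_200, "magnetic interface 180"))
```

Either way, the interface region sits between the two oxide-based regions, which is the structural feature credited with the enhanced PMA.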
The materials of the magnetic cell core 200 may be the same as the materials of the magnetic cell core 100 (FIG. 1) described above. The magnetic cell core 200 may be formed by sequentially forming each of the materials of its regions, from base to top, and then patterning the materials to define the magnetic cell core 200 structure. The magnetic interface region 180 may thus be formed directly on the free region 120, and the oxide cap region 170 may be formed directly on the magnetic interface region 180. In other embodiments (not illustrated in FIG. 2), the positions of the free region 120 and the fixed region 110 may be interchanged, such that the magnetic interface region 180 would be disposed between the oxide cap region 170 and the fixed region 110, which would be disposed over the non-magnetic region 130.

Accordingly, disclosed is a method of forming a memory cell, the method comprising forming an oxide material over a substrate. A magnetic material is formed over the oxide material. Another oxide material is formed over the magnetic material. An iron-based material is formed between the magnetic material and one of the oxide material and the other oxide material. The oxide material, the magnetic material, the other oxide material, and the iron-based material are patterned to form a magnetic cell core comprising a tunnel junction region from the oxide material, one of a free region and a fixed region from the magnetic material, a magnetic interface region from the iron-based material, and an oxide cap region from the other oxide material. The magnetic material exhibits a perpendicular magnetic orientation.

Referring to FIG. 3, in some embodiments a magnetic cell core 300 according to the present invention may include a magnetic region having a multi-material structure (e.g., a free region, a fixed region, or both). For example, the fixed region 110 of the embodiment of FIG. 3, or of any of the foregoing or following embodiments, may be configured as a SAF in which an Ru sub-region is bordered on top and bottom by magnetic sub-regions. As another example, as illustrated, the magnetic cell core 300 may include a multi-material free region 320. The multi-material free region 320 may include an upper magnetic sub-region 324 separated from a lower magnetic sub-region 326 (i.e., not in direct physical contact) by a spacer 328. In other embodiments, the multi-material free region 320 may lack the spacer 328. In still other embodiments, the multi-material free region 320 may have more than two magnetic sub-regions, more than one spacer 328, or both.

The materials forming the upper magnetic sub-region 324 and the lower magnetic sub-region 326 may be the same materials as those forming the free region 120, described above. For example, and without limitation, each of the upper magnetic sub-region 324 and the lower magnetic sub-region 326 may be formed of CoxFeyBz (where x = 1, y = 50 to 60, and z = 1 to 30; for example, CoFe50B30). As another example, the upper magnetic sub-region 324 may be formed of CoFeB60, and the lower magnetic sub-region 326 may be formed of CoFe50B30.

Each of the upper magnetic sub-region 324 and the lower magnetic sub-region 326 may be formed to be thicker than the spacer 328. In some embodiments, the lower magnetic sub-region 326 may have a thickness of about 1.0 nm, and the upper magnetic sub-region 324 may have a thickness of about 0.6 nm. In other embodiments, the upper magnetic sub-region 324 and the lower magnetic sub-region 326 may be formed to approximately the same thickness, for example, from about 0.6 nm to about 1.0 nm.

The spacer 328 may be formed of a conductive material such as, for example and without limitation, tantalum (Ta).
Compared to the overlying and underlying sub-regions, the spacer 328 may be relatively thin. For example, the spacer 328 may have a thickness of less than about 0.3 nm (e.g., about 0.15 nm).

The multi-material free region 320 may be formed by sequentially forming each of its materials, from base to top, before the materials are patterned to form the magnetic cell core 300.

In the embodiment of FIG. 3, the magnetic interface region 180 may be formed over the multi-material free region 320 so as to be disposed between the non-magnetic region 130 and the oxide cap region 170. Thus, the magnetic interface region 180 may be directly between the upper magnetic sub-region 324 and the oxide cap region 170.

Accordingly, disclosed is a memory cell comprising a magnetic cell core comprising a free region configured to exhibit a switchable perpendicular magnetic orientation and a fixed region configured to exhibit a fixed perpendicular magnetic orientation. A non-magnetic region is disposed between the free region and the fixed region. A magnetic interface region is spaced from the non-magnetic region by one of the free region and the fixed region.

Referring to FIG. 4, a magnetic cell core 400 according to the present invention, having a multi-material free region 420 that includes an upper magnetic sub-region 324, a lower magnetic sub-region 326, and a spacer 328, may be structured to incorporate the magnetic interface region 180 within the free region itself. That is, the magnetic interface region 180 may be disposed directly above or below the spacer 328, adjacent to one of the upper magnetic sub-region 324 and the lower magnetic sub-region 326. In this structure, the magnetic interface region 180 is spaced from both of the oxide-based regions (i.e., the non-magnetic region 130 and the oxide cap region 170).
Nonetheless, the presence of the magnetic interface region 180 can enhance the PMA strength of at least the magnetic region incorporating it, which may be a free region (e.g., the multi-material free region 420), as illustrated in FIG. 4. For example, the PMA strength of the magnetic region (e.g., the multi-material free region 420) may be greater than about 4,000 oersteds (about 318.3 kA/m), for example, greater than about 5,000 oersteds (about 397.9 kA/m).

In a structure such as that of the magnetic cell core 400 of FIG. 4, the upper magnetic sub-region 324 and the lower magnetic sub-region 326 may have the same thickness. Alternatively, the combined thickness of the magnetic interface region 180 and whichever of the upper magnetic sub-region 324 and the lower magnetic sub-region 326 adjoins it may be approximately equal to the thickness of the other of the upper magnetic sub-region 324 and the lower magnetic sub-region 326. For example, the lower magnetic sub-region 326 may have a thickness of about 1.0 nm, while the upper magnetic sub-region 324 may have a thickness of about 0.6 nm and the magnetic interface region 180 may have a thickness of about 0.4 nm.

The materials of the multi-material free region 420 may be formed sequentially, from base to top, whereby the magnetic interface region 180 may be formed directly on the spacer 328 and the upper magnetic sub-region 324 may be formed directly on the magnetic interface region 180.

Referring to FIG. 5, a magnetic cell core 500 according to the present invention may alternatively or additionally include more than one magnetic interface region 180. For example, as illustrated in FIG. 5, a pair of magnetic interface regions 180 may be arranged such that one magnetic interface region 180 of the pair overlies one of the magnetic regions of the magnetic cell core 500 (e.g., overlies the free region 120) while the other magnetic interface region 180 of the pair underlies the same magnetic region (e.g., underlies the free region 120). Again, the materials of the magnetic cell core 500 may be formed sequentially, from base to top, and may be patterned to form the magnetic cell core 500.

Referring to FIG. 6, in some embodiments a magnetic cell core 600 may include more than two magnetic interface regions 180, for example, one magnetic interface region 180 directly on each of the top and the bottom of each of the magnetic regions (e.g., the free region 120 and the fixed region 110) of the magnetic cell core 600. Again, the materials of the magnetic cell core 600 may be formed sequentially, from base to top, and may thereafter be patterned to form the magnetic cell core 600.

Referring to FIG. 7, it is also contemplated that one of the magnetic regions of a magnetic cell core 700 (e.g., a free region such as the multi-material free region 720) may incorporate a magnetic interface region 180 while another magnetic region of the magnetic cell core 700 (e.g., the fixed region 110) neighbors another magnetic interface region 180. Again, the materials of such a magnetic cell core 700 may be formed sequentially, from base to top.

Thus, the number of magnetic interface regions 180 and their placement may be tailored to the desired STT-MRAM structure and operability. Likewise, the precise composition and thickness of each magnetic interface region 180 may be tailored to achieve a desired PMA strength, which may be the highest PMA strength achievable without adversely affecting the operation of the STT-MRAM cell (e.g., its Hk (Oe)).
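The thickness budget described for the cell core of FIG. 4, in which the interface region plus its adjoining sub-region approximately matches the other sub-region, can be expressed as a trivial check. The 0.05 nm tolerance is an assumed illustration, not a value from this disclosure:

```python
# Sanity check of the example thickness budget for the multi-material free
# region of cell core 400: interface (~0.4 nm) plus the adjoining upper
# sub-region (~0.6 nm) approximately equals the lower sub-region (~1.0 nm).
# The 0.05 nm tolerance is assumed for illustration.

def balanced(lower_nm, upper_nm, interface_nm, tol_nm=0.05):
    """True if interface + adjoining sub-region ~= the other sub-region."""
    return abs((upper_nm + interface_nm) - lower_nm) <= tol_nm

print(balanced(1.0, 0.6, 0.4))  # example values from the text: balanced
print(balanced(1.0, 0.6, 0.3))  # a thinner interface breaks the balance
```

In practice, as the text notes, such thicknesses would be optimized empirically rather than by a fixed rule like this one.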
It is expected that the thickness of the magnetic interface region 180 may be optimized through testing to a thickness large enough to enhance the PMA strength yet smaller than a thickness that would adversely affect the operating characteristics of the STT-MRAM cell.

In embodiments in which the magnetic cell core (e.g., the magnetic cell core 500, 600, or 700) includes multiple magnetic interface regions 180, the magnetic interface regions 180 within the magnetic cell core 500, 600, or 700 may have equal thicknesses, or, alternatively, the thicknesses of the magnetic interface regions 180 may differ from one another. Again, it is expected that the relative thicknesses of the multiple magnetic interface regions 180 may be optimized through testing.

After a magnetic cell core (e.g., one of the magnetic cell cores 100 through 700) is formed, the semiconductor device structure may, as known in the art, undergo additional fabrication steps to form an operational semiconductor device, such as an STT-MRAM cell, an array of STT-MRAM cells, an STT-MRAM system, a processor-based system, or any combination thereof.

Referring to FIG. 8, an STT-MRAM system 800 is illustrated that includes peripheral devices 812 in operable communication with an STT-MRAM cell 814. A plurality of STT-MRAM cells 814 may be fabricated to form an array of memory cells in a grid pattern including a number of rows and columns, or in various other arrangements, depending on the system requirements and fabrication technology. The STT-MRAM cell 814 includes a cell core 802, an access transistor 803, a conductive material that may function as a data/sense line 804 (e.g., a bit line), a conductive material that may function as an access line 805 (e.g., a word line), and a conductive material that may function as a source line 806.
The peripheral devices 812 of the STT-MRAM system 800 may include read/write circuitry 807, a bit line reference 808, and a sense amplifier 809. The cell core 802 may be any of the magnetic cell cores 100 through 700 described above. Owing to the structure of the cell core 802 (i.e., including a magnetic interface region 180 (FIGS. 1 through 7) adjacent to or spaced from the non-magnetic region 130 (e.g., the tunnel region or MTJ) or the oxide cap region 170) and the resulting enhancement of the PMA strength of the STT-MRAM cell 814, the STT-MRAM cell 814 may exhibit a higher data retention time and operate effectively at lower power than a conventional STT-MRAM cell.

In use and operation, when the STT-MRAM cell 814 is selected to be programmed, a programming current is applied to the STT-MRAM cell 814, and the current is spin-polarized by the fixed region of the cell core 802 and exerts a torque on the free region of the cell core 802, which switches the magnetization of the free region to "write to" or "program" the STT-MRAM cell 814. In a read operation of the STT-MRAM cell 814, a current is used to detect the resistance state of the cell core 802.

To initiate programming of the STT-MRAM cell 814, the read/write circuitry 807 may generate a write current to the data/sense line 804 and the source line 806. The polarity of the voltage between the data/sense line 804 and the source line 806 determines the switch in magnetic orientation of the free region in the cell core 802. Once the magnetic orientation of the free region is changed by the spin polarity, the free region is magnetized according to the spin polarity of the programming current, and the programmed state is written to the STT-MRAM cell 814.

To read the STT-MRAM cell 814, the read/write circuitry 807 generates a read voltage to the data/sense line 804 and the source line 806 through the cell core 802 and the access transistor 803.
The programming state of the STT-MRAM cell 814 refers to the resistance across the cell core 802, which can be determined by the voltage difference between the data/sensing line 804 and the source line 806. In some embodiments, the voltage difference can be compared with the bit line reference 808 and amplified by the sense amplifier 809.FIG. 8 illustrates an example of an operable STT-MRAM system 800. However, it is expected that any STT-MRAM system configured to incorporate magnetic cell cores with magnetic regions exhibiting perpendicular magnetic orientation can incorporate and utilize magnetic cell cores 100 to 700 (FIGS. 1 to 7). Obviously, because the thickness of the magnetic interface region 180 (FIGS. 1 to 7) can be relatively thin relative to other regions of the magnetic unit cores 100 to 700, the overall height of the magnetic unit cores 100 to 700 is compared with the conventional magnetic properties of STT-MRAM cells. The height of the unit cores can be the same or not much larger. In addition, because the magnetic interface region 180 can be formed using the same or similar technology to the technology used to form other regions of the magnetic unit cores 100 to 700, the overall manufacturing process may not be significantly changed to complete the magnetic unit core according to the embodiment of the present invention. 100 to 700 formation.Therefore, the present invention discloses a Spin Torque Transfer Magnetic Random Access Memory (STT-MRAM) system including a magnetic cell core including a magnetic interface region on or in the magnetic region. The magnetic regions are configured to exhibit perpendicular magnetic orientation. The oxide region is separated from the magnetic interface region. The STT-MRAM system also includes a variety of conductive materials in operable communication with the magnetic cell core.Referring to FIG. 
9, a simplified block diagram of a semiconductor device structure 900 implemented according to one or more embodiments described herein is illustrated. The semiconductor device structure 900 includes a memory array 902 and a control logic component 904. The memory array 902 may include a plurality of STT-MRAM cells 814 (FIG. 8) that include any of the magnetic cell cores 100 to 700 (FIGS. 1 to 7) described above, which magnetic cell cores 100 to 700 may have been formed according to the methods described above. The control logic component 904 may be configured to operably interact with the memory array 902 so as to read from or write to any or all memory cells (e.g., STT-MRAM cells 814) within the memory array 902. Accordingly, the present invention discloses a semiconductor device structure including a spin torque transfer magnetic random access memory (STT-MRAM) array comprising a plurality of STT-MRAM cells. Each of the plurality of STT-MRAM cells includes a cell core including a non-magnetic region between a magnetic region and another magnetic region. Each of the magnetic region and the other magnetic region is configured to exhibit a perpendicular magnetic orientation. An oxide region is spaced from the non-magnetic region by one of the magnetic region and the other magnetic region. A magnetic interface region is positioned between the oxide region and the non-magnetic region. Referring to FIG. 10, a processor-based system 1000 is depicted. The processor-based system 1000 may include various electronic devices manufactured according to embodiments of the present invention. The processor-based system 1000 may be, for example, any of a variety of types of computers, pagers, mobile phones, personal notebooks, control circuits, or other electronic devices.
The processor-based system 1000 may include one or more processors 1002, such as a microprocessor, to control system functions and the processing of requests in the processor-based system 1000. The processor 1002 and other subcomponents of the processor-based system 1000 may include magnetic memory devices manufactured according to embodiments of the present invention. The processor-based system 1000 may include a power supply 1004. For example, if the processor-based system 1000 is a portable system, the power supply 1004 may include one or more of a fuel cell, a power capture device, a permanent battery, a replaceable battery, and a rechargeable battery. The power supply 1004 may also include an AC adapter so that the processor-based system 1000 may be plugged into, for example, a wall outlet. The power supply 1004 may also include a DC adapter so that the processor-based system 1000 may be plugged into, for example, a vehicle cigarette lighter or a vehicle power port. Various other devices may be coupled to the processor 1002 depending on the functions that the processor-based system 1000 performs. For example, a user interface 1006 may be coupled to the processor 1002. The user interface 1006 may include input devices such as buttons, switches, a keyboard, a light pen, a mouse, a digitizer and stylus, a touch screen, a voice recognition system, a microphone, or a combination thereof. A display 1008 may also be coupled to the processor 1002. The display 1008 may include an LCD display, an SED display, a CRT display, a DLP display, a plasma display, an OLED display, an LED display, a three-dimensional projection, an audio display, or a combination thereof. Furthermore, an RF subsystem/baseband processor 1010 may also be coupled to the processor 1002. The RF subsystem/baseband processor 1010 may include an antenna that is coupled to an RF receiver and to an RF transmitter (not shown).
A communication port 1012, or more than one communication port 1012, may also be coupled to the processor 1002. The communication port 1012 may be adapted to be coupled to one or more peripheral devices 1014 (e.g., a modem, a printer, a computer, a scanner, or a camera) or to a network (e.g., a local area network, a remote area network, an intranet, or the Internet). The processor 1002 may control the processor-based system 1000 by implementing software programs stored in memory. The software programs may include, for example, an operating system, database software, drawing software, word processing software, media editing software, or media playing software. The memory is operably coupled to the processor 1002 to store and facilitate execution of various programs. For example, the processor 1002 may be coupled to system memory 1016, which may include one or more of spin torque transfer magnetic random access memory (STT-MRAM), magnetic random access memory (MRAM), dynamic random access memory (DRAM), static random access memory (SRAM), racetrack memory, and other known memory types. The system memory 1016 may include volatile memory, non-volatile memory, or a combination thereof. The system memory 1016 is typically large so that it can dynamically store loaded applications and data. In some embodiments, the system memory 1016 may include semiconductor device structures (e.g., the semiconductor device structure 900 of FIG. 9), memory cells including any of the magnetic cell cores 100 to 700 (FIGS. 1 to 7), or a combination thereof. The processor 1002 may also be coupled to non-volatile memory 1018, which is not to suggest that the system memory 1016 is necessarily volatile. The non-volatile memory 1018 may include one or more of STT-MRAM, MRAM, read-only memory (ROM) (such as EPROM), resistive read-only memory (RROM), and flash memory to be used in conjunction with the system memory 1016.
The size of the non-volatile memory 1018 is typically selected to be just large enough to store any necessary operating system, application programs, and fixed data. Additionally, the non-volatile memory 1018 may include a high-capacity memory, such as disk drive memory, for example, a hybrid drive including resistive memory or other types of non-volatile solid-state memory. The non-volatile memory 1018 may include a semiconductor device structure, such as the semiconductor device structure 900 of FIG. 9, memory cells including any of the magnetic cell cores 100 to 700 (FIGS. 1 to 7), or a combination thereof. The following example is presented to illustrate an example of the invention in further detail. This example should not be construed as being exhaustive or exclusive as to the scope of the invention.

EXAMPLE

A partial magnetic cell core structure, lacking the magnetic contribution from a fixed region, was fabricated to evaluate the PMA strength of a free region fabricated according to an embodiment of the present invention. The partial magnetic cell core structure included: a conductive seed region having a thickness of about 5.0 nm; an overlying dummy fixed region of CoFeB having a thickness of about 0.5 nm; an overlying non-magnetic region of MgO having a thickness of about 1.2 nm; an overlying multi-material free region including a lower magnetic sub-region of CoFeB having a thickness of about 1.0 nm, an overlying spacer of Ta having a thickness of about 0.15 nm, and an overlying upper magnetic sub-region of CoFeB, with a B concentration slightly different from that of the lower magnetic sub-region, having a thickness of about 0.6 nm; an overlying magnetic interface region of Fe having a thickness of about 0.4 nm; an overlying oxide cap region of MgO having a thickness of about 0.7 nm; and an overlying upper conductive cap region having a thickness of about 50 nm.
This partial magnetic cell core structure exhibited a PMA strength (measured as Hk (Oe)) of 5,007 Oe (398.4 kA/m), as indicated by data line 1200 of FIG. 11. This compares to a PMA strength of 2,992 Oe (238.1 kA/m), indicated by data line 1100 of FIG. 11, exhibited by the same structure lacking the Fe magnetic interface region. Thus, the magnetic cell core structure having the magnetic interface region adjacent to the oxide cap region and disposed over the free region exhibited an increase of more than 50% in PMA strength compared to the same structure without the magnetic interface region. While the present invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, the present invention is not intended to be limited to the particular forms disclosed. Rather, the present invention encompasses all modifications, combinations, equivalents, variations, and alternatives falling within the scope of the present invention as defined by the appended claims and their legal equivalents.
A recording unit is incorporated into an electronic device to record an extent to which the electronic device is operated. In one embodiment, the recording unit is a code structure installed in a memory of the electronic device. The extent to which the electronic device has been operated can be interpreted based on information recorded on the electronic device by the recording unit. In one embodiment, the extent to which the electronic device has been operated is used to determine whether or not the electronic device is eligible for re-sale as a new product.
What is claimed is: 1. A method comprising:interpreting an extent to which an electronic device has been operated as recorded by a recording unit in the electronic device; coupling the electronic device to a tester; downloading data recorded by the recording unit to the tester; identifying with the tester a level of operation among a plurality of possible levels of operation; and sorting by a sorting device the electronic device based on the level of operation. 2. The method of claim 1 wherein the electronic device is one of a plurality of electronic devices having incorporated recording units.3. The method of claim 2 wherein sorting the plurality of electronic devices comprises sorting the plurality of electronic devices into devices eligible for re-sale and devices not eligible for re-sale.4. The method of claim 3 wherein each recording unit comprises a plurality of registers corresponding to a plurality of levels of operation of each electronic device, the method further comprising:re-setting the plurality of registers in the devices eligible for re-sale. 5. The method of claim 2 wherein sorting the plurality of electronic devices comprises sorting the plurality of electronic devices into devices that have been used for their intended purpose and devices that have not been used for their intended purpose.6. 
An apparatus comprising:a testing device to interpret an extent to which an electronic device has been operated as recorded by a recording unit in the electronic device by accessing the recording unit to retrieve data indicating the extent to which the electronic device has been operated, wherein the testing device comprises: a memory reader; and a coupling device to couple the memory reader to the electronic device, the memory reader to download data recorded by the recording unit from a memory in the electronic device, the data to identify a level of operation among a plurality of possible levels of operation for the electronic device, wherein the testing device is external to the electronic device being tested; and a sorting device to sort based on the level of operation. 7. The apparatus of claim 6 wherein the electronic device is one of a plurality of electronic devices having incorporated recording units, wherein the sorting device sorts the plurality of electronic devices based on the extent to which the plurality of electronic devices have been operated as recorded by the recording units.8. The apparatus of claim 7 wherein the sorting device sorts the plurality of electronic devices into devices that have been used for their intended purpose and devices that have not been used for their intended purpose.
FIELD OF THE INVENTION

The present invention pertains to the field of electronic devices. More particularly, this invention relates to determining to what extent a returned electronic product has been operated.

BACKGROUND

When customers return products, it usually costs businesses money. The market for used products is often very limited. So, if a business has to sell a returned product as used, the business may be lucky to sell the product for a fraction of its original price. This is especially true for technology products like personal computers, audio/video equipment, networking hardware, and other electronic devices. Nonetheless, businesses often accept returned merchandise for various reasons, including things like promoting customer good will and complying with government regulations.

The costs associated with returned products can often be greatly reduced if the returned products can be legally re-sold as new. The incentive to re-sell products as new, however, may be outweighed by other factors. For instance, government entities may impose sanctions on businesses that sell used products as new. More importantly, being publicly accused of wrongly selling used products as new can significantly damage a business's reputation. In which case, if it is not easy to tell whether or not a returned product has been used, businesses tend to err on the side of caution when it comes to re-selling returned products.

For instance, in the United States, according to the Federal Trade Commission (FTC), if a product has been demonstrated to have been used by a customer for the product's intended purpose, then the product is said to be used. In which case, products such as electronic devices are often treated as used as soon as the products have been removed from their packaging. 
That is, even though an electronic device may never have even been turned on, much less used for an intended purpose, the product is treated as used to ensure compliance with FTC regulations and to avoid the possibility of bad publicity. Unnecessarily treating these products as used undoubtedly costs businesses millions of dollars every year.

BRIEF DESCRIPTION OF THE DRAWINGS

Examples of the present invention are illustrated in the accompanying drawings. The accompanying drawings, however, do not limit the scope of the present invention. Similar references in the drawings indicate similar elements.

FIG. 1 illustrates one embodiment of the present invention to incorporate a recording unit.

FIG. 2 illustrates one embodiment of the present invention to interpret a level of operation as recorded by the recording unit.

FIG. 3 demonstrates a flow for one embodiment of the present invention.

FIG. 4 demonstrates a flow for one embodiment of a code structure.

FIG. 5 illustrates one embodiment of a hardware system.

FIG. 6 illustrates one embodiment of a machine readable storage medium.

DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, those skilled in the art will understand that the present invention may be practiced without these specific details, that the present invention is not limited to the depicted embodiments, and that the present invention may be practiced in a variety of alternate embodiments. In other instances, well known methods, procedures, components, and circuits have not been described in detail. Parts of the description will be presented using terminology commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. Also, parts of the description will be presented in terms of operations performed through the execution of programming instructions. 
As well understood by those skilled in the art, these operations often take the form of electrical, magnetic, or optical signals capable of being stored, transferred, combined, and otherwise manipulated through, for instance, electrical components.Various operations will be described as multiple discrete steps performed in turn in a manner that is helpful in understanding the present invention. However, the order of description should not be construed as to imply that these operations are necessarily performed in the order they are presented, or even order dependent. Lastly, repeated usage of the phrase "in one embodiment" does not necessarily refer to the same embodiment, although it may.As discussed more fully below, the present invention provides a way to determine the extent to which an electronic device has been operated. For instance, various embodiments of the present invention allow a business to determine whether or not a returned electronic device has been used for its intended purpose, and therefore whether or not the device can be re-sold as new. The potential cost savings as a result of the present invention are huge.In general, embodiments of the present invention incorporate functionality into an electronic device to record the extent to which the device is operated. If the product is returned, the recorded information can be interpreted to determine whether or not the device has been used for its intended purpose.For instance, the intended purpose of a television may be to tune an input signal and display an image based on the signal. In which case, the television could be incorporated with functionality to record when and if the television tunes an input signal and displays an image based on the signal. If the television is returned, the recorded information could be used to determine whether or not the television could be re-sold as new. 
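The television scenario above can be illustrated with a small sketch. This is purely hypothetical: a real recording unit would be firmware in the device's non-volatile memory, and the register names and Python rendering below are invented for the example.

```python
# Hypothetical sketch of a recording unit for the television example above.
# Level names are assumptions; a real unit would live in device firmware.

LEVELS = ("powered_on", "tuned_signal", "displayed_image")

class RecordingUnit:
    def __init__(self):
        # One register per level of operation, all initialized to 0.
        self.registers = {level: 0 for level in LEVELS}

    def record(self, level: str):
        self.registers[level] = 1  # set the register for this level

    def highest_level(self):
        # Highest level of operation reached, or None if never operated.
        reached = [lvl for lvl in LEVELS if self.registers[lvl]]
        return reached[-1] if reached else None

unit = RecordingUnit()
unit.record("powered_on")
unit.record("tuned_signal")
print(unit.highest_level())  # -> tuned_signal
```

If the set is returned, a tester can read the registers back and decide, by whatever policy the business chooses, whether reaching "tuned_signal" without "displayed_image" counts as use for the intended purpose.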
The present invention can similarly be applied to a wide variety of electronic devices including, for instance, audio devices, computers, personal digital assistants, cellular devices, large and small scale networking devices, appliances, and the like. As used herein, the term electronic is intended to include, but is not limited to, a variety of digital, analog, and optical devices.FIG. 1 illustrates one embodiment of the present invention. An inventive memory programmer 110 couples to an electronic device 120 through data coupling device 130. Electronic device 120 includes a memory 140. Memory programmer 110 installs code structure 150 in memory 140. Code structure 150 includes a number of registers 160. The registers correspond to various levels of operation of electronic device 120. By using a number of registers, code structure 150 can, for instance, record how far along in a continuum of start-up procedures a user went before returning the device.As an example, if electronic device 120 is a networking device, a first register may be set when the networking device is powered-on, a second register may be set when the networking device downloads software (for instance to create firmware), (not shown), from memory 140, and a third register may be set when the networking device sends or receives data. Then, assuming there is no definitive definition for the intended purpose of a networking device, a business may choose a point in the continuum of operation levels to distinguish devices eligible for re-sale from those that are not eligible for re-sale.For instance, depending on supply and demand for the device, the business may be willing to risk re-selling devices that have not actually sent or received data based on the assertion that the intended use of a networking device is to send or receive data. 
Under different circumstances, the business may be less willing to accept risk, and may choose to only re-sell devices that have not been used beyond being powered-on based on the assumption the intended use for the device is something more than merely being powered-on.In alternate embodiments, any number of approaches can be used to record the extent to which a product has been operated. For instance, a single multi-bit vector could be used in which each bit corresponds to a different level of use. In other embodiments, a hardware solution could be used. For instance, an array of resettable switches or replaceable fuses could be set or blown as different levels of use are reached.In FIG. 1, except for the teachings of the present invention, memory programmer 110 and data coupling device 130 are intended to represent a variety of devices used to manufacture, assemble, program, initialize, install, etc., electronic devices and/or components used within electronic devices. For instance, in one embodiment, a manufacturing facility includes equipment to "burn" or program memory 140 (such as in firmware). In which case, the functionality of memory programmer 110 and data coupling device 130 could be performed by adding code structure 150 to the firmware.Memory 140 is intended to represent a variety of non-volatile memory devices such as electrically erasable programmable read only memory (EEPROM), flash EEPROM, or a combination of different kinds of memory.FIG. 2 illustrates an embodiment of the present invention for testing returned products. As with the embodiment of FIG. 1, except for the teachings of the present invention, the embodiment of FIG. 2 is intended to represent a variety of processing facilities for testing, sorting, manipulating, etc., a variety of electronic devices.In the embodiment of FIG. 2, returned devices 201 A, B, and C move past tester 210 on conveyor 220. 
Tester 210 includes a data coupling device 230 that couples the tester to the devices as they move by. Memory reader 240 downloads information recorded in the memories of the devices and identifies a level of operation for each device. For instance, referring to the embodiment of FIG. 1, memory reader 240 downloads values stored in registers 160. Depending on which registers were set prior to the devices being returned, memory reader 240 can identify the extent of operation for each device.In the illustrated embodiment, memory reader 240 also includes a register re-setter to reset the registers in each memory device after the registers are read. By resetting the registers, the device is ready to be re-sold if indeed the device is determined to be eligible for re-sale. In alternate embodiments, the register re-setter may be located elsewhere in the process and/or may only reset the registers in devices that have been determined to be eligible for re-sale.Sorter 250 sorts devices into bins 265 and 275 using sorting arms 251 and 252. Bin 265 is for used devices and bin 275 is for devices eligible for resale. Sorter 250 times the sorting arms to push devices either down slide A 260 into the bin 265 or down slide B 270 into the bin 275. In various embodiments, memory reader 240 may instruct sorter 250 as to which sorting arm to use or simply inform sorter 250 of the identified level of operation for a given device and let sorter 250 decide.The embodiment of FIG. 2 is largely automated. Any variety of automated or manual alternatives are possible. For example, in alternate embodiments, a human operator may play a larger role. A human operator may manually attach data coupling device 230 to each electronic device 201. Then, depending on the level of operation identified by memory reader 240, the human operator may decide whether or not the device is eligible for resale and sort the device accordingly. 
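The tester's level-identification and sorting decision can be sketched as follows. As the description notes, the cutoff between "eligible" and "not eligible" is a business choice; the cutoff policy, level names, and function names here are hypothetical, and this example treats only a device that has sent or received data as used.

```python
# Hedged sketch of the tester/sorter decision: download register values,
# identify the level of operation, and sort. Names are illustrative.

LEVELS = ("powered_on", "software_downloaded", "data_transferred")
CUTOFF = "data_transferred"  # assumed re-sale threshold (a business choice)

def identify_level(registers):
    """Return the highest level whose register was set, or None."""
    reached = [lvl for lvl in LEVELS if registers.get(lvl)]
    return reached[-1] if reached else None

def sort_device(registers):
    """Route a device to a bin based on its identified level of operation."""
    level = identify_level(registers)
    return "used_bin" if level == CUTOFF else "resale_bin"

assert sort_device({"powered_on": 1}) == "resale_bin"
assert sort_device({"powered_on": 1, "software_downloaded": 1,
                    "data_transferred": 1}) == "used_bin"
```

A more risk-averse business could simply move the `CUTOFF` down the continuum, for example treating any device that has downloaded software as used.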
Alternately, memory reader 240 may tell the human operator whether or not the device is eligible for resale and/or instruct the human operator how to sort the device.FIG. 3 demonstrates one embodiment of the present invention. In block 310, a memory programmer couples to new devices and installs a code structure in a memory of each device to record an extent of operation. In block 320, the registers in the code structures are initialized. In block 330, the devices are packaged in shrink wrap, and in block 340 the new devices are distributed for sale.In block 350, a certain percentage of the devices are returned by customers for various reasons. In block 360, if the shrink wrap is still intact, the devices can be distributed again and sold as new in block 340. If the shrink wrap is not intact, the devices are coupled to a memory tester to download the register values and identify a level of operation reached for each device in block 370.In block 380, if a returned device was not used for its intended purpose, as determined based on the level of operation reached, the registers in the device are initialized again in block 320, the device is re-packaged in shrink wrap in block 330, and the device is distributed for sale as new in block 340. If the returned device was used for its intended purpose, the device is either discarded or distributed to a secondary market in block 390.Alternate embodiments may not include all of the above illustrated blocks, and may perform one or more blocks in different order. For instance, as discussed above, registers may be re-set, or initialized, for all returned devices rather than just those eligible for resale. In another embodiment, the devices may not be packaged. For instance, large appliances are often delivered without packaging or shrink wrap. Furthermore, the flow of FIG. 3 can be re-entered as new devices are manufactured and/or returned.FIG. 
4 demonstrates one embodiment of a code structure in an exemplary networking device as the code structure is triggered by various user and/or environmental actions. In block 405, the code structure waits for the device to be powered on. In reality, the code structure may simply be doing nothing since the device is not powered on. In any event, when the device is powered on, in block 410 the code structure sets a first register.In block 420, the code structure increments a counter that keeps track of the number of times the device has been powered on. The value of the counter is stored in a register. In block 430, the code structure starts a timer that keeps track of how long the device has been powered on. The value of the timer is also stored in a register. The number of times and duration that a device has been powered on can be factors used in determining whether or not a device is eligible for resale. For instance, even if the device was only powered on and not otherwise used, if the device were powered on a certain number of times or for a certain one-time duration or accumulated duration, then the device may be considered used. Registers can be used throughout the illustrated flow to store a variety of additional operation-level factors such as maximum operating temperature, maximum moisture content, etc.In block 440, the code structure waits for software to be downloaded (for firmware). While the code structure waits for software to be downloaded, if the device is powered down in block 445, the code structure starts over at block 405. If the power comes back on, in the illustrated embodiment, the first register is set again in block 410. In alternate embodiments, block 410 may be skipped because the first register has previously been set. In either case, the first register value remains the same, indicating that the device has been powered on at least once. 
In block 420, the powered-on counter is incremented and the timer resumes counting in block 430.The first register and the powered-on counter may be redundant. For instance, the powered-on counter can be used to determine whether or not the device has ever been powered on. In which case, the first register can be eliminated. In various other embodiments however, having both registers or only the first register may be useful or provide added flexibility. For instance, the first register may be a single bit register and the powered-on counter may require several bits. In which case, depending on available memory, bus bandwidth, and a variety of other factors, using one, the other, or both registers may be appropriate or beneficial.Returning to FIG. 4, if in block 440, software is downloaded (for firmware), the code structure sets a second register in block 450. In block 460, the code structure waits again. This time the code structure waits for data to be sent or received. While the code structure waits, if the device is powered down 465, the code structure will return to block 405 and begin again. In alternate embodiments, once block 460 has been reached, the code structure may skip one or more blocks between block 405 and block 460 as soon as the device is powered back on rather than again setting the first and second registers and determining whether or not software has been downloaded. For instance, the code structure may skip block 410, perform blocks 420 and 430, and skip blocks 440 and 450 to return more directly to block 460.In any event, in block 460, when and if data is sent or received, a third register is set in block 470 to indicate that the highest level of operation has been reached. 
And, since the highest level of operation for the illustrated embodiment has been reached, the code structure disables itself in block 480 based on the assumption that, at this point in the flow, the device has been used for its intended purpose and cannot be eligible for re-sale as new. In alternate embodiments where there is no clear maximum level of operation, the code structure may continue to gather data, such as the number of times the device is powered on, the duration of use, etc., until the storage capacity of the registers is exceeded or until some other terminating event.FIG. 5 illustrates one embodiment of a hardware system intended to represent a broad category of computer systems such as personal computers, workstations, and/or embedded systems. In the illustrated embodiment, the hardware system includes processor 510 coupled to high speed bus 505, which is coupled to input/output (I/O) bus 515 through bus bridge 530. Temporary memory 520 is coupled to bus 505. Permanent memory 540 is coupled to bus 515. I/O device(s) 550 is also coupled to bus 515. I/O device(s) 550 may include a display device, a keyboard, one or more external network interfaces, etc.Certain embodiments may include additional components, may not require all of the above components, or may combine one or more components. For instance, temporary memory 520 may be on-chip with processor 510. Alternately, permanent memory 540 may be eliminated and temporary memory 520 may be replaced with an electrically erasable programmable read only memory (EEPROM), wherein software routines are executed in place from the EEPROM. Some implementations may employ a single bus, to which all of the components are coupled, or one or more additional buses and bus bridges to which various additional components can be coupled. 
Those skilled in the art will be familiar with a variety of alternate internal networks including, for instance, an internal network based on a high speed system bus with a memory controller hub and an I/O controller hub. Additional components may include additional processors, a CD ROM drive, additional memories, and other peripheral components known in the art.In one embodiment, the present invention, as described above, is implemented using one or more computers such as the hardware system of FIG. 5. Where more than one computer is used, the systems can be coupled to communicate over an external network, such as a local area network (LAN), an IP network, etc. In one embodiment, the present invention is implemented as software routines executed by one or more execution units within the computer(s). For a given computer, the software routines can be stored on a storage device, such as permanent memory 540.Alternately, as shown in FIG. 6, the software routines can be machine executable instructions 610 stored using any machine readable storage medium 620, such as a diskette, CD-ROM, magnetic tape, digital video or versatile disk (DVD), laser disk, ROM, Flash memory, etc. The series of instructions need not be stored locally, and could be received from a remote storage device, such as a server on a network, a CD ROM device, a floppy disk, etc., through, for instance, I/O device 550 of FIG. 5.From whatever source, the instructions may be copied from the storage device into temporary memory 520 and then accessed and executed by processor 510. In one implementation, these software routines are written in the C programming language. It is to be appreciated, however, that these routines may be implemented in any of a wide variety of programming languages.In alternate embodiments, the present invention is implemented in discrete hardware or firmware. 
For example, one or more application specific integrated circuits (ASICs) could be programmed with one or more of the above described functions of the present invention. In another example, one or more functions of the present invention could be implemented in one or more ASICs on additional circuit boards and the circuit boards could be inserted into the computer(s) described above. In another example, field programmable gate arrays (FPGAs) or static programmable gate arrays (SPGAs) could be used to implement one or more functions of the present invention. In yet another example, a combination of hardware and software could be used to implement one or more functions of the present invention.

Thus, a method and apparatus for determining an extent to which an electronic device has been operated is described. Whereas many alterations and modifications of the present invention will be comprehended by a person skilled in the art after having read the foregoing description, it is to be understood that the particular embodiments shown and described by way of illustration are in no way intended to be considered limiting. Therefore, references to details of particular embodiments are not intended to limit the scope of the claims.
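The usage-metering behavior described above (counting events such as power-ons until the storage capacity of the registers is exceeded, then disabling itself once the highest level of operation is reached) can be sketched in software. This is an illustrative analogue only; the class name, register width, and level threshold are hypothetical assumptions, not details from the description:

```python
class UsageMeter:
    """Hypothetical software analogue of the described code structure."""
    REGISTER_BITS = 8   # assumed counter register width
    MAX_LEVEL = 3       # assumed highest level of operation

    def __init__(self):
        self.power_on_count = 0
        self.level = 0
        self.enabled = True

    def record_power_on(self):
        # Gather data only while enabled and while the register has capacity.
        if self.enabled and self.power_on_count < (1 << self.REGISTER_BITS) - 1:
            self.power_on_count += 1

    def advance_level(self):
        # Reaching the highest level implies the device has been used for
        # its intended purpose, so the structure disables itself.
        if self.enabled:
            self.level += 1
            if self.level >= self.MAX_LEVEL:
                self.enabled = False

meter = UsageMeter()
meter.record_power_on()
for _ in range(3):
    meter.advance_level()
```

Once disabled, further calls leave the counters untouched, mirroring the self-disabling behavior of block 480.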
An optical module package may include a body that, in one embodiment, may be made of metal. Ceramic inserts may be inserted into the metal body. The ceramic inserts may have a pair of shelves to facilitate electrical connection. In one embodiment, each shelf may include an upwardly directed contact surface and a downwardly directed contact surface. As a result, a more compact design may be formed in some embodiments which has desirable strength characteristics.
1. A method comprising: providing an optical module package including a body and a cover over said body; and providing an insert in said body including an upwardly facing surface and a downwardly facing surface, each of said surfaces to enable electrical connections to be made to said package.
2. The method of claim 1 including providing an insert which is made of a different material than said body.
3. The method of claim 2 including providing a via between said upwardly and downwardly facing surfaces.
4. The method of claim 1 including forming said insert from a plurality of portions.
5. The method of claim 4 including forming said insert from a plurality of stacked portions.
6. The method of claim 1 including providing a plurality of upwardly facing surfaces at different levels.
7. The method of claim 1 including providing a plurality of upwardly facing surfaces at the same level.
8. The method of claim 1 including providing a plurality of surfaces on one side of said insert facing the same direction.
9. The method of claim 1 including providing said upwardly facing and downwardly facing surfaces in different planes.
10. The method of claim 1 including providing said upwardly facing and downwardly facing surfaces in the same plane.
11. The method of claim 9 including providing said upwardly and downwardly facing surfaces on opposite sides of said insert.
12. The method of claim 1 including forming said insert from three sections including a middle section defining said upwardly and downwardly facing surfaces and upper and lower sections having a width less than the width of a middle section.
13. An optical module package comprising: a body; a cover over said body; and an insert in said body including an upwardly facing surface and a downwardly facing surface, each of said surfaces to enable electrical connections to said package.
14. The package of claim 13 wherein said body is formed of metal.
15. The package of claim 13 wherein said insert is made of ceramic.
16. The package of claim 13 wherein said insert is made of a different material than said body.
17. The package of claim 13 including a via between said upwardly and downwardly facing surfaces.
18. The package of claim 13 wherein said insert is formed of a plurality of discrete portions.
19. The package of claim 18 wherein said insert is formed of a plurality of stacked discrete portions.
20. The package of claim 13 wherein said insert includes at least two upwardly facing surfaces at different levels.
21. The package of claim 13 wherein said insert includes at least two upwardly facing surfaces at the same level.
22. The package of claim 13 wherein said upwardly and downwardly facing surfaces are in different planes.
23. The package of claim 13 wherein said upwardly and downwardly facing surfaces are in the same plane.
24. The package of claim 22 wherein said upwardly and downwardly facing surfaces are on opposite sides of said insert.
25. The package of claim 13 wherein said insert includes at least three sections including a middle section defining said upwardly and downwardly facing surfaces, and upper and lower sections each having a width less than a width of said middle section.
26. A package comprising: a body; and an insert in said body including an upwardly facing surface and a downwardly facing surface, each of said surfaces to enable electrical connections to said package.
27. The package of claim 26 wherein said upwardly facing surface is on the interior of said body and said downwardly facing surface is on the exterior of said body.
28. The package of claim 26 wherein each of said surfaces includes contacts for making electrical connections.
29. The package of claim 26 wherein said body is formed of metal.
30. The package of claim 26 wherein said insert is made of ceramic.
31. The package of claim 26 including an electrical link between said upwardly and downwardly facing surfaces.
32. The package of claim 26 wherein one of said surfaces is on the interior of said body and the other of said surfaces is on the exterior of said body, said surfaces being in different planes.
33. The package of claim 26 wherein one of said surfaces is on the interior of said body and the other of said surfaces is on the exterior of said body, said surfaces being in the same plane.
34. The package of claim 32 including a plurality of surfaces on the interior of said package at different levels, each of said surfaces including electrical contacts to make an electrical connection.
35. The package of claim 26 including a plurality of leads coupled to said surfaces.
PACKAGE FOR OPTICAL MODULE WITH COMPACT CERAMIC INSERTS

Background

This invention relates generally to packages for optical modules and, particularly, to packages that receive an optical fiber and provide electrical connections thereto.

Standard techniques to carry an electrical signal across the wall of a package for optical modules include multi-layer ceramic inserts. Standard ceramic designs for optical modules, commonly called butterfly packages, may include a base, a fiber feed through, a can body, and a ring frame made of metal, as well as one or more multi-layer ceramic inserts that receive electrical connectors. A lid is typically used to hermetically close the package by welding or soldering to a ring frame.

The ceramic inserts may be composed of multi-layer ceramic. The inserts typically carry direct current or low frequency electrical signals. An insert may include a base ceramic layer, a plated bottom, and a patterned plating on the top surface. A narrower ceramic layer or top layer is attached to the base ceramic layer in a way that creates two shelves. An outside shelf allows electrical leads to be soldered to the package and an inside shelf allows an electrical signal to be accessible from inside the package. The inside shelf typically receives wire bonds that further carry the signal to or from the circuitry mounted inside the package.

The ceramic inserts may be fitted and soldered between the body and the ring frame forming a hermetic seal. Usually, the ceramic insert is wider than the surrounding body and the ring frame. This ensures good hermetic soldering between the plated ceramic surfaces and the metal body and frame. As the metal wall thickness is reduced, the wall thickness of the ceramic inserts may be reduced as well. However, reliability issues start to appear as the ceramic thickness decreases and stress concentration increases.
This minimum wall thickness dictates the minimum overall insert width and, therefore, the minimum size of the package. Thus, there is a need for a way to make inserts for optical module packages that have substantial ceramic wall thicknesses with reduced overall width.

Brief Description of the Drawings

Figure 1 is a perspective view of one embodiment of the present invention; Figure 2 is a partial cross-sectional view of a portion of the embodiment shown in Figure 1 with the lid removed; Figure 3 is a scaled view of an insert in accordance with one embodiment of the present invention; and Figure 4 is a perspective view of an insert in accordance with another embodiment of the present invention.

Detailed Description

Referring to Figure 1, a ceramic package 10 for an optical module includes a base 101, a fiber feed through 102, a can body 103, and a ring frame 104, which in one embodiment may be made of a metal such as Kovar. One or more multi-layer ceramic inserts, such as the insert 105, may be fitted into the body 103. A lid 106 may be used to hermetically close the package 10 by welding or soldering, as two examples, to the ring frame 104.

An electrical lead frame 108 may include leads 110 that electrically contact the inserts 105. In particular, the lead frame 108 may provide electrical signals to or receive electrical signals from the package 10. Likewise, the package 10 may receive an optical fiber (not shown) in the fiber feed through 102. In another embodiment, an optical window (not shown) may be provided to allow the passage of light between the inside and outside of the package 10.

The ceramic inserts 105 may be composed of multiple layers of ceramic in one embodiment. In some embodiments, the inserts 105 may be adapted to carry direct current or low frequency electrical signals. In other embodiments the inserts 105 may be adapted to handle high frequency signals.
For example, inserts 105 may be located on each of three sides of the body 103.

As shown in Figure 2, a pair of contact shelves 601 and 114 may be located in different planes. Three ceramic inserts, including a top insert 603, a spacer 604, and a base insert 605, may be stacked up in a way that they create a shelf 601 facing upwardly on one side and another shelf 114 facing downwardly on another side of the package 10. The inserts 603, 604, and 605 may be fitted and soldered between the body 103 and the ring frame 112.

The shelf 114 on the spacer 604 may be adapted to receive electrical connections. For example, it may be metalized or otherwise plated for conducting electrical current and enabling electrical connections, for example, by soldering. Similarly, the shelf 601 defined on the upper surface of the spacer 604 may include a metallized or otherwise plated surface for allowing internal electrical connections, for example, by wire bonding or soldering. Vias 601 may extend through the spacer 604 to allow electrical connection between leads 110 electrically coupled to the shelf 601 and leads electrically coupled to the shelf 114.

Thus, referring to Figure 3, the top insert 603, spacer 604, and base insert 605 create a structure having a width A corresponding to the width of the spacer 604. The width A includes the width B of the base insert 605. The width A also includes the width C of the top insert 603. Because electrical contacts can be made to the shelf 114 and the shelf 601, the overall width of the insert may be reduced. Namely, because electrical connections can be made from the top down, for example, to the shelf 601 and from the bottom up, for example, to the shelf 114, a structure that is more compact in width can be defined. For example, as shown in dashed lines in Figure 3, a structure D in accordance with the prior art is illustrated having a width F needed to define an upwardly facing shelf E.
The width F is the width necessary to make the appropriate electrical connections to the shelf E. Thus, in order to make an additional top down connection, an additional extension D of width F would be provided in conventional designs. As a result of the extension indicated at D, the overall width of the insert is increased by the width F compared to one embodiment of the present invention shown in solid lines in Figure 3. Increasing the width of the insert 105 results in larger packages with less real estate available inside the package. At the same time, in some embodiments the wall thickness of the insert 105 may not be materially changed, thereby increasing ceramic insert strength, decreasing internal stress, and improving the reliability of the overall package. For example, the bulk of a package 10 has the overall width A of the spacer 604 and no weak points are provided in the overall structure.

The interior shelf 601 is on a different level than the exterior shelf 114 in one embodiment of the present invention. Changing the shelf plane in the package 10 may provide advantages, such as offsetting the optical plane relative to the electrical leads so that an optical fiber does not necessitate a cut in a printed circuit board, or allowing the inside substrate to be quasi-planar and avoiding changes of planes inside the package 10.

Referring to Figure 4, in accordance with another embodiment of the present invention, an insert 700 includes a lower inner shelf 702, an upper inner shelf 704, and an inverted outer shelf 706. Thus, the insert 700 includes multiple shelves at different heights and includes multiple internal shelves. In other embodiments any number of internal and external shelves may be provided.

Thus, in accordance with some embodiments of the present invention, optical module packages may be made that have substantial ceramic wall thickness and reduced overall width.
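The width comparison in Figure 3 can be illustrated numerically. The dimensions below are hypothetical placeholders (the description gives no numeric values); only the relationship between the widths comes from the text:

```python
# Hypothetical dimensions in millimeters, chosen only to illustrate the
# Figure 3 comparison; the description itself gives no numeric values.
width_a = 2.0   # width A of the spacer 604 (sets the overall insert width)
width_f = 0.8   # width F of the prior-art extension D for shelf E

# Described embodiment: the downwardly facing shelf 114 takes the place of
# a second upwardly facing shelf, so the overall insert width stays at A.
invention_width = width_a

# Prior art: a second top-down connection needs the extension D of width F.
prior_art_width = width_a + width_f

saving = prior_art_width - invention_width  # width saved per insert
```

Whatever the actual dimensions, the insert width grows by F in the conventional design and stays at A in the described embodiment.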
By avoiding the need to have two opposed upwardly facing shelves on opposed sides of the package, a more efficient package design may be achieved. In some embodiments this result may be achieved by allowing both upwardly and downwardly facing contact surfaces on shelves. Vias may be provided through the shelves to interconnect the upwardly and downwardly facing surfaces on the same shelf. While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention. What is claimed is:
A semiconductor package includes an interposer, a number of first integrated circuit (IC) dies, one or more second IC dies, and one or more dummy dies. The first IC dies, the second IC dies and the dummy dies are implemented on the interposer. The dummy dies are configured to enable routing of pins of the first IC dies to selected circuits of the second IC dies while conforming to predefined routing rules.
1. A semiconductor package comprising:
an interposer;
a plurality of first integrated circuit (IC) dies;
one or more second IC dies; and
one or more dummy dies,
wherein the plurality of the first IC dies, the one or more second IC dies and the one or more dummy dies are implemented on the interposer, and wherein the one or more dummy dies are configured to enable routing of pins of the plurality of first IC dies to selected circuits of the one or more second IC dies while conforming to predefined routing rules.
2. The semiconductor package of claim 1, wherein the plurality of the first IC dies comprise a plurality of high-bandwidth memory (HBM) dies.
3. The semiconductor package of claim 1 or 2, wherein the one or more second IC dies comprise one or more application-specific integrated circuit (ASIC) dies.
4. The semiconductor package of claim 3, wherein the selected circuits of the one or more second IC dies comprise physical layer (PHY) circuits of the one or more ASIC dies.
5. The semiconductor package of any of claims 1 to 4, wherein the one or more dummy dies are placed between the plurality of the first IC dies and the one or more second IC dies.
6. The semiconductor package of any of claims 1 to 5, wherein the one or more dummy dies are placed in between the one or more second IC dies.
7. The semiconductor package of claim 6, wherein the one or more dummy dies comprise metallic dies implemented in a metal layer of the interposer.
8. The semiconductor package of claim 7, wherein the one or more dummy dies vary in dimensions.
9. The semiconductor package of claim 7 or 8, wherein the predefined routing rules comprise 45-degree and orthogonal routing rules.
10. The semiconductor package of any of claims 1 to 9, wherein the one or more dummy dies are configured to enable routing with various amounts of offsets between the plurality of first IC dies and the one or more second IC dies.
11. A method of packaging semiconductor dies, the method comprising:
placing a plurality of first integrated circuit (IC) dies on an interposer;
placing one or more second IC dies on the interposer; and
placing one or more dummy dies on the interposer,
wherein placing the one or more dummy dies comprises configuring the one or more dummy dies to enable routing of pins of the plurality of first IC dies to selected circuits of the one or more second IC dies while conforming to predefined routing rules.
12. The method of claim 11, comprising at least one of the following features:
wherein the plurality of the first IC dies comprise a plurality of high-bandwidth memory (HBM) dies, and wherein the one or more second IC dies comprise one or more application-specific integrated circuit (ASIC) dies;
wherein placing the one or more dummy dies comprises placing the one or more dummy dies in-between the one or more second IC dies;
wherein placing the one or more dummy dies comprises placing the one or more dummy dies between the plurality of the first IC dies and the one or more second IC dies;
wherein placing the one or more dummy dies comprises placing one or more metallic dies in a metal layer of the interposer;
wherein conforming to the predefined routing rules comprises conforming to 45-degree and orthogonal routing rules;
further comprising configuring the one or more dummy dies to enable routing with various amounts of offsets between the plurality of first IC dies and the one or more second IC dies.
13. An interposer with extended high-bandwidth memory (HBM) offsets, the interposer comprising:
a plurality of HBM dies;
two or more application-specific integrated circuits (ASICs); and
one or more dummy dies placed on the interposer,
wherein sizes and placement locations of the one or more dummy dies are configured to enable routing of pins of the plurality of HBM dies to selected circuits of the one or more ASIC dies while conforming to predefined routing rules; and wherein the one or more dummy dies are placed at the configured placement locations on the interposer.
14. The interposer of claim 13, wherein the configured placement locations on the interposer comprise in-between the two or more ASICs, and wherein placement of the one or more dummy dies comprises placement of one or more metallic dies in a metal layer of the interposer.
15. The interposer of claim 13 or 14, wherein the configured placement locations on the interposer comprise in-between the plurality of HBM dies and the two or more ASIC dies, and wherein conforming to the predefined routing rules comprises conforming to 45-degree and orthogonal routing rules.
TECHNICAL FIELD

The present description relates generally to Ethernet communications and, in particular, to extended high-bandwidth memory (HBM) offsets in 2.5 D interposers.

BACKGROUND

Semiconductor integration has evolved to placing integrated circuit (IC) devices side-by-side on a silicon or organic interposer. The interposer provides high-density connections between ICs, typically along the facing edges of one another. In a 2.5 D interposer, unlike the 3D interposers, there is no stacking of dies on dies, but dies are packaged on the surface of a silicon interposer. The dies are incorporated into a single package in a single plane and are placed on the silicon interposer using a flip-chip technique. Commonly, the ICs used in 2.5 D interposers include custom application-specific ICs (ASICs) and high-bandwidth memories (HBMs).

As shown in FIG. 1A, one or more HBM devices can be connected to an ASIC along a given edge of that ASIC. There are typically minimum and maximum spacing rules between dies, and there exist several thousands of connections between each HBM and its associated ASIC, which are routed in the interposer. It is common practice to center each HBM to its associated PHY circuit (e.g., transceiver) pins within the ASIC. This, however, is not always practical due to IC size mismatches; as a result, the HBM should be offset from corresponding pins on the ASIC (see FIGs. 1A and 1B).

BRIEF DESCRIPTION OF THE DRAWINGS

Certain features of the subject technology are set forth in the appended claims. However, for purposes of explanation, several embodiments of the subject technology are set forth in the following figures.

FIGs. 1A and 1B are diagrams illustrating examples of semiconductor integration schemes.
FIG. 2 is a diagram illustrating an example of a semiconductor integration scheme, according to various aspects of the subject technology.
FIG. 3 is a diagram illustrating an example of a semiconductor integration scheme, according to various aspects of the subject technology.
FIG. 4 is a flow diagram illustrating an example of a method of semiconductor integration, in accordance with some aspects of the subject technology.
FIG. 5 is an electronic system within which some aspects of the subject technology may be implemented.

DETAILED DESCRIPTION

The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute part of the detailed description, which includes specific details for providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and may be practiced without one or more of the specific details. In some instances, structures and components are shown in a block-diagram form in order to avoid obscuring the concepts of the subject technology.

The subject technology is directed to methods and systems for providing integrated circuits with extended high-bandwidth memory (HBM) offsets in 2.5 D interposers. The disclosed solution inserts dummy dies between an application-specific integrated circuit (ASIC) and an HBM to increase the available escape region. This allows 45-degree routing to protract from the physical-layer (PHY) circuit (hereinafter, PHY) of the ASIC to HBM pins at extended offsets. In an inexpensive process appropriate to fill empty regions of the interposer, a metallic dummy die can be used to meet design rules. In some implementations, multiple dummy dies can be used. The dummy dies can be the same size or vary in size depending on the application.
The die-to-die gap rules (e.g., requirements for the gaps between dies) can still be maintained while the subject technology is used.

The subject technology includes multiple advantageous features. For example, increased HBM offsets are possible, and the ASIC dimension does not need to grow in order to route to the HBMs. Further, the dummy die inserted between the ASIC and HBM increases the escape region, allowing routing to be completed. The routing may be 45-degree or orthogonal routing, and any arbitrary offset is applicable as long as signal integrity is not affected. Other routing rules include metal min/max width, metal min/max spacing, and metal density (defined as the area of the metal as a proportion of the total available area). The subject technology can be used in any semiconductor integration using the 2.5 D interposers and is not limited to integrating HBMs and ASICs.

FIGs. 1A and 1B are diagrams illustrating examples of semiconductor integration schemes 100A and 100B. In the example semiconductor integration scheme 100A, three HBM dies 110 (110-1, 110-2 and 110-3) are integrated with an ASIC die 120 on an interposer 102. Each HBM die 110 has a size of 11 x 11 mm, and the ASIC die 120 has dimensions of 20 x 20 mm. The HBM dies 110 have to be connected to a PHY of the ASIC die 120. The connections (routing) of the pins of the HBM 110-2 to the PHY 124 pose no issue, as the HBM 110-2 is centered with its corresponding PHY 124. The HBM dies 110-1 and 110-3 need to have an offset because of the large number of connection routings, which can be several thousand (e.g., more than 2,000) wires and need to spread over a finite width W. There is a geometric limit on the size of the offset that is set by a 45-degree routing rule projected from the HBM. For example, the HBM 110-3 can be routed to the PHY 126 using the 45-degree projection. However, the HBM die 110-1 cannot be routed, as is seen from FIG. 1A.
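The geometric limit set by the 45-degree routing rule can be sketched with a simplified model. Assume, purely for illustration (the function names and the uniform-escape-depth assumption are hypothetical, not the patent's method), that a wire routed at 45 degrees can shift laterally by at most the depth of the escape region it crosses:

```python
def max_lateral_offset(escape_depth_mm: float) -> float:
    """Simplified model: under a pure 45-degree routing rule, a wire
    crossing an escape region of the given depth can shift laterally by
    at most that same depth (pin pitch and spacing rules ignored)."""
    return escape_depth_mm

def is_routable(offset_mm: float, escape_depth_mm: float) -> bool:
    """True if the HBM-to-PHY lateral offset fits within the 45-degree
    projection for the available escape depth."""
    return abs(offset_mm) <= max_lateral_offset(escape_depth_mm)

# A centered HBM (offset 0) is always routable; a large offset, like that
# of HBM die 110-1 in FIG. 1A, exceeds the projection and fails.
centered_ok = is_routable(0.0, 2.0)
offset_fails = not is_routable(5.0, 2.0)
```

Under this toy model, enlarging the escape depth directly enlarges the permissible offset, which is the effect the dummy dies exploit.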
One solution is to increase the size of the ASIC die 120 so that the PHY 122 can be properly routed to the pins of the HBM die 110-1 using the 45-degree routing. This solution increases the chip area and manufacturing cost, and in some situations may not even be possible to implement, for example, when there are restrictions on the ASIC reticle (negative mask) field. The subject technology solves this problem as discussed herein.

FIG. 1B shows the semiconductor integration scheme 100B, an example of integration of multiple ASIC dies with multiple HBM dies. In this example, the ASIC dies 140 and 150 are supposed to be integrated with the HBM dies 130 (130-1, 130-2, 130-3 and 130-4) on one side and the HBM dies 160 (160-1, 160-2, 160-3 and 160-4) on the other side. The HBM dies 130-2, 130-3, 160-2 and 160-3 are centered with their corresponding PHYs of the ASIC dies 140 and 150 and can be routed properly. However, the routings 132 of the HBM dies 130-1, 130-4 and similarly the routings of the HBM dies 160-1 and 160-4 have similar issues as explained above with respect to FIG. 1A. The disclosed techniques of the subject technology provide solutions for different scenarios without having to increase the ASIC sizes, as discussed below.

FIG. 2 is a diagram illustrating an example of a semiconductor integration scheme 200, according to various aspects of the subject technology. In the semiconductor integration scheme 200, a number of first integrated circuit (IC) dies such as the HBM dies 210 (210-1, 210-2 and 210-3) are integrated with a second IC die such as an ASIC die 250. The HBM dies 210 and the ASIC die 250 are similar to the HBM dies 110 and the ASIC die 120 of FIG. 1A and are integrated on an interposer 202.
The additional feature of the subject technology is the dummy dies D, which are inserted between the HBM dies 210 and the ASIC die 250 on a metal layer of the interposer 202.

The dummy dies D increase the ASIC-to-HBM available routing region to allow 45-degree routing to protract from the pins of the HBM die 210-1 to a corresponding PHY 252, which was not possible without the dummy die D, as discussed above with respect to FIG. 1A. The routing from pins of the HBM dies 210-2 and 210-3 to the corresponding PHYs 254 and 256 of the ASIC die 250 is also realized on their corresponding dummy dies D. In some aspects, the dummy dies D can be implemented as a single dummy die. In some implementations, the size of the dummy dies can vary to maintain the die-to-die gap rules. In some implementations, the dummy dies D can be realized by using a metal such as aluminum, copper or other suitable materials.

FIG. 3 is a diagram illustrating an example of a semiconductor integration scheme 300, according to various aspects of the subject technology. In the semiconductor integration scheme 300, a number of first integrated circuit (IC) dies such as HBM dies 310 (310-1, 310-2, 310-3 and 310-4) and HBM dies 360 (360-1, 360-2, 360-3 and 360-4) are integrated with one or more second IC dies such as ASIC dies 340 and 350. The HBM dies 310 and 360 and the ASIC dies 340 and 350 are similar to the HBM dies 130 and 160 and the ASIC dies 140 and 150 of FIG. 1B and are integrated on an interposer 302. The additional feature of the subject technology is a dummy die D, which is inserted between the ASIC dies 340 and 350 on a metal layer of the interposer 302.

The dummy die D increases the ASIC-to-ASIC space, which results in providing sufficient available routing region to allow 45-degree routing to protract from the pins of the HBM dies 310-1 and 310-4 to their corresponding PHYs 342 and 352.
Similarly, the dummy die D provides sufficient available routing region to allow 45-degree routing to protract from the pins of the HBM dies 360-1 and 360-4 to their corresponding PHYs 344 and 354. In some aspects, the dummy die D can be implemented as a single metal dummy die. In some implementations, the size of the dummy die D can vary to maintain the die-to-die gap rules. In some implementations, the dummy die D can be realized by using a metal such as aluminum, copper or other suitable materials.

As described above with the example implementations of FIGs. 2 and 3, the subject technology uses dummy dies at suitable places on the interposer to increase the offset between the dies (e.g., HBM dies and ASIC dies). This allows 45-degree routing for corner dies without increasing the dimensions of the dies such as the ASIC dies, which was not possible without the dummy dies of the subject technology. In some implementations, some of the routings can be orthogonal routing instead of the 45-degree routing, depending on the geometrical configuration of the dies on the interposer. The disclosed technique allows routing with any arbitrary die-to-die offset distance as long as the signal integrity can be preserved. Accordingly, the disclosed technology enables larger die-to-die (e.g., HBM dies to ASIC die) offset and/or smaller die (e.g., ASIC die) dimensions.

FIG. 4 is a flow diagram illustrating an example of a method 400 of semiconductor integration, in accordance with some aspects of the subject technology. The method 400 includes placing a number of first IC dies (e.g., 210 of FIG. 2) on an interposer (e.g., 202 of FIG. 2) (410). The method also includes placing one or more second IC dies (e.g., 250 of FIG. 2) on the interposer (420). The method further includes placing one or more dummy dies (e.g., D of FIG.
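Extending the same simplified 45-degree model, the extra escape depth that a dummy die must contribute for a given offset can be sketched as follows (an illustration under a hypothetical uniform-depth assumption; the function name is not from the patent):

```python
def dummy_die_depth_needed(offset_mm: float, base_escape_depth_mm: float) -> float:
    """Simplified model: a 45-degree route can absorb a lateral offset up
    to the escape depth it crosses. If the desired HBM-to-PHY offset
    exceeds the base die-to-die gap, a dummy die must widen the escape
    region by the shortfall (pin pitch and spacing rules ignored)."""
    return max(0.0, abs(offset_mm) - base_escape_depth_mm)

# An offset of 3 mm over a 1 mm base gap needs 2 mm of added escape depth;
# an offset already within the base gap needs no dummy die at all.
extra = dummy_die_depth_needed(3.0, 1.0)
none_needed = dummy_die_depth_needed(0.5, 1.0)
```

This captures why the dummy die lets the offset grow without enlarging the ASIC: only the inexpensive filler between the dies grows.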
2) on the interposer by configuring the dummy dies to enable routing of pins of the first IC dies to selected circuits of the second IC dies while conforming to predefined routing rules (430).

FIG. 5 is an electronic system within which some aspects of the subject technology may be implemented. The electronic system 500 can be, and/or can be a part of, a portable communication device such as a smart phone, a smart watch or a tablet, a desktop computer, or a network switch, for example, of a data center or an enterprise network. The electronic system 500 may include various types of computer readable media and interfaces for various other types of computer readable media. The electronic system 500 includes a bus 508, one or more processing unit(s) 512, a system memory 504 (and/or buffer), a ROM 510, a permanent storage device 502, an input device interface 514, an output device interface 506, and one or more network interfaces 516, or subsets and variations thereof.

The bus 508 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 500. In one or more implementations, the bus 508 communicatively connects the one or more processing unit(s) 512 with the ROM 510, the system memory 504, and the permanent storage device 502. From these various memory units, the one or more processing unit(s) 512 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 512 can be a single processor or a multi-core processor in different implementations. In one or more aspects, the one or more processing unit(s) 512 may be used to execute instructions to cause performance of the method 400 of FIG. 4.

The ROM 510 stores static data and instructions that are needed by the one or more processing unit(s) 512 and other modules of the electronic system 500.
The permanent storage device 502, on the other hand, may be a read-and-write memory device. The permanent storage device 502 may be a nonvolatile memory unit that stores instructions and data even when the electronic system 500 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 502. In one or more implementations, a removable storage device (such as a floppy disk or flash drive and its corresponding disk drive) may be used as the permanent storage device 502.

Similar to the permanent storage device 502, the system memory 504 may be a read-and-write memory device. However, unlike the permanent storage device 502, the system memory 504 may be a volatile read-and-write memory, such as random-access memory (RAM). The system memory 504 may store any of the instructions and data that the one or more processing unit(s) 512 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 504, the permanent storage device 502, and/or the ROM 510. From these various memory units, the one or more processing unit(s) 512 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.

The bus 508 also connects to the input and output device interfaces 514 and 506. The input device interface 514 enables a user to communicate information and select commands to the electronic system 500. Input devices that may be used with the input device interface 514 may include, for example, alphanumeric keyboards and pointing devices (also called "cursor control devices"). The output device interface 506 may enable, for example, the display of images generated by the electronic system 500.
Output devices that may be used with the output device interface 506 may include, for example, printers and display devices, such as a liquid crystal display, a light emitting diode display, an organic light emitting diode display, a flexible display, a flat panel display, a solid state display, a projector, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.

Finally, as shown in FIG. 5, the bus 508 also couples the electronic system 500 to one or more networks and/or to one or more network nodes, through the one or more network interface(s) 516. In this manner, the electronic system 500 can be a part of a network of computers (such as a local area network or a wide area network), an intranet, or a network of networks (such as the internet). Any or all components of the electronic system 500 can be used in conjunction with the subject disclosure.

Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.

The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM and TTRAM.
The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG and Millipede memory. Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.

Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.

While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs.
In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.

Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way), all without departing from the scope of the subject technology. Further, various functional blocks need not be connected directly (even though, for convenience, they are illustrated that way in the figures).

It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that not all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous.
Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

As used in this specification and any claims of this application, the terms "base station," "receiver," "computer," "server," "processor," and "memory" all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms "display" or "displaying" mean displaying on an electronic device.

As used herein, the phrase "at least one of" preceding a series of items, with the term "and" or "or" to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase "at least one of" does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases "at least one of A, B, and C" or "at least one of A, B, or C" each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.

The predicate words "configured to," "operable to," and "programmed to" do not imply any particular tangible or intangible modification of a subject but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation.
Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.

Phrases such as "an aspect," "the aspect," "another aspect," "some aspects," "one or more aspects," "an implementation," "the implementation," "another implementation," "some implementations," "one or more implementations," "an embodiment," "the embodiment," "another embodiment," "some embodiments," "one or more embodiments," "a configuration," "the configuration," "another configuration," "some configurations," "one or more configurations," "the subject technology," "the disclosure," "the present disclosure," and other variations thereof and alike are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" or as an "example" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, to the extent that the term "include," "have," or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term "comprise" as "comprise" is interpreted when employed as a transitional word in a claim.

All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for."

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.
Generally, this disclosure provides systems, methods, and computer readable media for binary translation (BT) reuse. The system may include a binary translation (BT) module to translate a region of code from a first instruction set architecture (ISA) to a second ISA, for execution associated with a first process. The BT module may also be configured to store a first physical page number associated with the translated code and the first process. The system may also include a processor to execute the translated code and to update a virtual address instruction pointer associated with the execution. The system may further include a translation reuse module to validate the translated code for reuse by a second process. The validation may include generating a second physical page number based on a page table mapping of the updated virtual address instruction pointer and matching the second physical page number to the stored first physical page number.
CLAIMS

What is claimed is:

1. A system for binary translation reuse, said system comprising: a binary translation module to translate a region of code from a first instruction set architecture (ISA) to a second ISA, for execution associated with a first process; said binary translation module further to store a first physical page number associated with said translated code and said first process; a processor to execute said translated code and to update a virtual address instruction pointer associated with said execution; and a translation reuse module to validate said translated code for reuse by a second process, said validation comprising: generating a second physical page number based on a page table mapping of said updated virtual address instruction pointer; and matching said second physical page number to said stored first physical page number.

2. The system of claim 1, wherein said binary translation module is further to store an offset of said virtual address instruction pointer associated with said translated code and said first process; and said page table mapping is further based on said stored offset.

3. The system of claim 1, wherein said binary translation module is further to store a first page attribute associated with said translated code and said first process; and said processor is further to determine an updated page attribute associated with said translated code and said second process; wherein said validation further comprises matching said stored first page attribute with said updated page attribute.

4. The system of claim 1, wherein said binary translation module is further to perform a second binary translation of said region of code for execution associated with said second process, if said validation fails.

5.
The system of claim 1, further comprising a register (FL_RIP) to maintain said updated virtual address instruction pointer, wherein said processor provides an instruction (ADDRIP) to modify an offset of said virtual address instruction pointer in said FL_RIP register, said ADDRIP instruction associated with said second ISA.

6. The system of claim 1, wherein virtual page numbers associated with said translated code differ between said first process and said second process due to Address Space Layout Randomization (ASLR).

7. The system of claim 2, wherein said binary translation further comprises embedding a prologue in said translated region of code, said prologue comprising instructions to: store said first physical page number and said offset; and perform said validation in response to detecting that execution of said binary translation traverses a memory page boundary.

8. The system of claim 1, wherein said updating of said virtual address instruction pointer is performed in association with execution of a branch instruction.

9. The system of claim 1, wherein said page table is cached in a translation lookaside buffer (TLB).

10. The system of claim 1, wherein said system is selected from the group consisting of a smart phone, a laptop computing device, a smart TV and a smart tablet.

11. The system of claim 1, further comprising a user interface, wherein said user interface is a touch screen.

12.
A method for binary translation reuse, said method comprising: performing a binary translation of a region of code from a first instruction set architecture (ISA) to a second ISA of a processor, said binary translation for execution associated with a first process; storing a first physical page number associated with said binary translation and said first process; storing an offset of a virtual address instruction pointer associated with said binary translation and said first process; updating said virtual address instruction pointer during execution by said processor; and verifying that said binary translation is valid for reuse for execution associated with a second process, wherein said verification comprises: generating a second physical page number based on a page table mapping, said mapping based on said updated virtual address instruction pointer and said stored offset; and matching said second physical page number to said stored first physical page number.

13. The method of claim 12, further comprising: storing a first page attribute associated with said binary translation and said first process; and determining an updated page attribute associated with said binary translation and said second process; wherein said verifying further comprises matching said stored first page attribute with said updated page attribute.

14. The method of claim 12, further comprising, if said verifying fails, performing a second binary translation of said region of code for execution associated with said second process.

15. The method of claim 12, wherein said updating of said virtual address instruction pointer further comprises: maintaining said virtual address instruction pointer in a register (FL_RIP); and executing an instruction (ADDRIP) to modify an offset of said virtual address instruction pointer in said FL_RIP register, wherein said ADDRIP instruction is associated with said second ISA.

16.
The method of claim 12, wherein virtual page numbers associated with said binary translation differ between said first process and said second process due to Address Space Layout Randomization (ASLR).

17. The method of claim 12, wherein said binary translation further comprises embedding a prologue in said translated region of code, said prologue comprising instructions to: store said first physical page number and said offset; and perform said verification in response to detecting that execution of said binary translation traverses a memory page boundary.

18. The method of claim 12, wherein said updating of said virtual address instruction pointer is performed in association with execution of a branch instruction.

19. The method of claim 12, wherein said page table is cached in a translation lookaside buffer (TLB).

20. At least one computer-readable storage medium having instructions stored thereon which, when executed by a processor, result in the following operations for binary translation reuse, said operations comprising: performing a binary translation of a region of code from a first instruction set architecture (ISA) to a second ISA of a processor, said binary translation for execution associated with a first process; storing a first physical page number associated with said binary translation and said first process; storing an offset of a virtual address instruction pointer associated with said binary translation and said first process; updating said virtual address instruction pointer during execution by said processor; and verifying that said binary translation is valid for reuse for execution associated with a second process, wherein said verification comprises: generating a second physical page number based on a page table mapping, said mapping based on said updated virtual address instruction pointer and said stored offset; and matching said second physical page number to said stored first physical page number.

21.
The computer-readable storage medium of claim 20, further comprising the operations of: storing a first page attribute associated with said binary translation and said first process; and determining an updated page attribute associated with said binary translation and said second process; wherein said verifying further comprises the operation of matching said stored first page attribute with said updated page attribute.

22. The computer-readable storage medium of claim 20, further comprising, if said verifying fails, performing a second binary translation of said region of code for execution associated with said second process.

23. The computer-readable storage medium of claim 20, wherein said updating of said virtual address instruction pointer further comprises the operations of: maintaining said virtual address instruction pointer in a register (FL_RIP); and executing an instruction (ADDRIP) to modify an offset of said virtual address instruction pointer in said FL_RIP register, wherein said ADDRIP instruction is associated with said second ISA.

24. The computer-readable storage medium of claim 20, wherein virtual page numbers associated with said binary translation differ between said first process and said second process due to Address Space Layout Randomization (ASLR).

25. The computer-readable storage medium of claim 20, wherein said binary translation further comprises the operation of embedding a prologue in said translated region of code, said prologue comprising instructions to: store said first physical page number and said offset; and perform said verification in response to detecting that execution of said binary translation traverses a memory page boundary.

26. The computer-readable storage medium of claim 20, wherein said updating of said virtual address instruction pointer is performed in association with execution of a branch instruction.

27.
The computer-readable storage medium of claim 20, wherein said page table is cached in a translation lookaside buffer (TLB).
BINARY TRANSLATION REUSE IN A SYSTEM WITH ADDRESS SPACE LAYOUT RANDOMIZATION

FIELD

Example embodiments described herein generally relate to binary translation (BT) systems, and more particularly, to reuse of binary translations in a system employing Address Space Layout Randomization (ASLR).

BACKGROUND

Computing systems may employ binary translation (BT) to translate code dynamically from a public instruction set architecture (ISA), such as, for example, the Intel® x86 architecture, to a private or native ISA that is executed by the processors or cores. The capability of a computing system to support the public ISA enables the execution of legacy code that generally provides backward compatibility and access to a large collection of existing software. The native ISA, on the other hand, may be designed to provide increased processor performance or improved power consumption. Additionally, the processors may be regularly updated or redesigned to take advantage of new technology, which may change their native ISA while still maintaining the public ISA and the ability to run existing software.

The translation cost is typically high, however, so it is desirable to store translations in memory for reuse whenever possible, for example when the same sequence of instructions is executed at a later point in time and the previous translation remains valid. This allows the cost of the translation to be amortized over time.

Address Space Layout Randomization (ASLR) is increasingly used by operating systems (OSs) to provide security between processes running in different virtual address spaces. ASLR may randomly (or pseudo-randomly) modify the virtual addresses associated with pages of code of different processes, even though those code pages are mapped to the same physical address.
This may prevent malicious code from launching an attack that relies on a common layout of code between different processes or between executions of the same process on the same or different processors.

The use of ASLR, however, typically invalidates stored translations because the validity of a previously stored translation may require that the virtual address, physical address, and page attributes of the region of code to be translated match those of the stored translation. This may therefore prevent the binary translator from reusing previous translations and may significantly reduce the overall efficiency of the binary translation system.

BRIEF DESCRIPTION OF THE DRAWINGS

Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts, and in which:

Figure 1 illustrates a top level system diagram of an example embodiment consistent with the present disclosure;
Figure 2 illustrates a block diagram of one example embodiment consistent with the present disclosure;
Figure 3 illustrates a block diagram of another example embodiment consistent with the present disclosure;
Figure 4 illustrates a flowchart of operations of one example embodiment consistent with the present disclosure;
Figure 5 illustrates a flow diagram of operations of one example embodiment consistent with the present disclosure;
Figure 6 illustrates a flowchart of operations of another example embodiment consistent with the present disclosure; and
Figure 7 illustrates a top level system diagram of a platform of another example embodiment consistent with the present disclosure.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.

DETAILED DESCRIPTION

Generally, this disclosure provides systems,
devices, methods, and computer readable media for binary translation (BT) reuse, for example in a system that includes a processor and operating system (OS) configured for Address Space Layout Randomization (ASLR). A BT module of this system may be configured to translate regions of code from a first instruction set architecture (ISA) to a second ISA suitable for execution by the processor (e.g., an ISA native to the processor). The regions of code may include shared code that can be reused by different processes or applications, enabling the cost of the translation to be amortized, if the code regions remain valid for each process that attempts to use them. A translation reuse module may be configured to verify the validity of a code region, as will be explained in greater detail below, based on the physical page numbers and page attributes of memory pages associated with that code region. A translated code region may be determined as valid for reuse by different processes even though the virtual addresses that map each process to the code region may differ, for example due to ASLR, so long as the physical page numbers and page attributes for those regions remain unchanged from their values associated with the original translation.

Figure 1 illustrates a top level system diagram 100 of an example embodiment consistent with the present disclosure. An OS that is configured for ASLR 102 is shown to host one or more processes or applications 108. The ASLR may provide security between processes running in different virtual address spaces by randomly modifying the virtual addresses associated with the code pages of the different processes, even though those code pages are mapped to the same physical address.
This may prevent malicious code from launching an attack that relies on a common layout of code between different processes or between executions of the same process on the same or different processors.

A binary translation module 104 is configured to translate regions of code, associated with the processes 108, from a first ISA to a second ISA. The first ISA may be a public ISA such as, for example, the Intel® x86 architecture or a variant thereof. The second ISA may be the native ISA that is executed by the host processor 106. The native ISA may generally bear little or no resemblance to the public ISA. While the public ISA provides support for legacy code that enables access to a large collection of existing software, the native ISA may be designed for targeted goals such as, for example, increased processor performance or improved power consumption. The processors may be regularly updated to take advantage of new technology and may change their native ISA while maintaining the ability to run existing software.

The processes/applications 108 may include OS components (including Basic Input-Output System (BIOS), device drivers, etc.) and/or any other software such as, for example, higher level applications or other user provided code that is run on the system. The processes 108 may share common code, such as, for example, library routines and the like. Translation reuse module 110 may be configured to determine if a previously translated region of code associated with one process may be reused by another process, thus avoiding the expense of re-translation. Reuse may be permitted if memory pages in the translated code remain valid between the time they were translated on behalf of a first process and the time that they might be reused by a second process.
The validation may be based on the physical page numbers and page attributes of the translated regions of code, as will be explained in greater detail below.

Figure 2 illustrates a block diagram 200 of one example embodiment consistent with the present disclosure. The OS configured for ASLR 102 is shown, for illustrative purposes, to host two processes, process A 108a and process B 108b, although in practice any number of processes may be hosted. Shared code 202, which may for example include common library functions, is shared by processes 108a and 108b. BT translation module 104 may be configured to translate 204 a region of code, for example from the shared code/library 202, when that code is needed (called) by process A 108a. Translation reuse module 110 may be configured to determine if the previously translated region of code may be reused 206 when called by process B 108b. If reuse is not possible, then the region of code, or parts thereof, may be retranslated 208. The translated code may be stored in translator memory or other suitable storage that is accessible by processor 106 for execution.

Figure 3 illustrates a block diagram 300 of another example embodiment consistent with the present disclosure. In this more detailed illustration, three processes are shown: process A 108a, process B 108b, and process C 108c, all of which may utilize shared code 202 at various times. Each process may be associated with a different virtual address that identifies a memory space for that process. The virtual address may include a virtual page number (VPN) and an offset. The ASLR system may randomly assign or modify the VPNs (but generally not the offsets) associated with each process. A virtual address to physical address mapping module 302 may be configured to map the virtual addresses to physical addresses, for example through the use of page tables or other suitable mechanisms. The physical addresses may include a physical page number (PPN) and an offset.
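The VPN/PPN split described above can be sketched in code. This is a minimal illustration only, not part of the disclosure: the names `PAGE_SHIFT`, `split`, and `translate`, the dictionary-based page table, and the 4 KiB page size are all assumptions made for the example.

```python
# Minimal sketch of virtual-to-physical address mapping (hypothetical names).
# A virtual address splits into a virtual page number (VPN, high bits) and an
# offset (low bits); the page table maps VPN -> PPN, and the offset carries
# over unchanged into the physical address.

PAGE_SHIFT = 12                    # assume 4 KiB pages for illustration
PAGE_SIZE = 1 << PAGE_SHIFT

def split(addr):
    """Split an address into (page number, offset)."""
    return addr >> PAGE_SHIFT, addr & (PAGE_SIZE - 1)

def translate(page_table, vaddr):
    """Map a virtual address to a physical address via the page table."""
    vpn, offset = split(vaddr)
    ppn = page_table[vpn]          # a real MMU would fault on an unmapped page
    return (ppn << PAGE_SHIFT) | offset

# Two processes map different VPNs (e.g., due to ASLR) to the same PPN:
table_a = {0x1234: 0x77}           # process A: VPN1 -> PPN1
table_b = {0x5678: 0x77}           # process B: VPN2 -> PPN1 (same physical page)

pa = translate(table_a, (0x1234 << PAGE_SHIFT) | 0x2A0)
pb = translate(table_b, (0x5678 << PAGE_SHIFT) | 0x2A0)
assert pa == pb                    # same physical address, different virtual pages
```

The unchanged offset is what lets the reuse check work on page numbers alone: ASLR perturbs only the VPN, so two processes that share a physical code page still agree on every intra-page offset.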
A region of code may span one or more memory pages, and each page will have a virtual address (VPN and offset) and a corresponding physical address (PPN and offset).

Instruction pointer registers, for both virtual and physical addresses, provide address pointers to the instruction that is currently being executed by the processor 106. The virtual address instruction pointer register may be referred to as the RIP, and the physical address instruction pointer register may be referred to as the PIP. The address offset may generally be stored in the lower order bits of these registers, while the page numbers (VPN/PPN) may generally be stored in the higher order bits.

In this example, shared code 202 from process A 108a has a virtual address 1 that includes VPN1 (and an offset), while shared code 202 from process B 108b has a virtual address 2 that includes VPN2 (and an offset), and shared code 202 from process C 108c has a virtual address 3 that includes VPN3 (and an offset). Process A 108a may be the first process to call a library routine from the shared code 202, which may cause that region of code to be translated 204 into page N of translator memory 210. The translation is shown to have a physical address 1 that includes PPN1 (and an offset) that points to a location in page N where the translated code now resides.

When process B 108b calls that same shared code library routine, the associated virtual address (VPN2) is mapped to a physical address, which in this case is the same as the physical address of the previous translation (i.e., PPN1 plus offset). Translation reuse module 110 detects this fact and determines that the translation 204 remains valid for reuse 206 by process B.

Continuing with this example, however, when process C 108c calls that same shared code library routine, the associated virtual address (VPN3) is mapped to a different physical address (i.e., PPN2 plus offset).
Translation reuse module 110 also detects this fact and determines that the translation 204 is therefore not valid for reuse by process C and instead causes a retranslation 208, which may be mapped into a different page, for example page N+k as illustrated.

In some embodiments, the page tables may be cached in a translation lookaside buffer (TLB) 304 that is configured to provide faster access and more efficient virtual to physical address translations. The TLB may store the more frequently used translation page tables.

In addition to page numbers and offsets, the virtual and physical addresses may also include or be associated with a page attribute or context indication. The page attribute may indicate an access mode (for example, read/write/executable types of access permission), page size, mapping state, modification state and/or caching policy. These page attributes may also be employed by the translation reuse module 110 as part of the translation validity check. For example, if the page attribute associated with the physical address mapping from process A differs from the page attribute associated with the physical address mapping of process B, then the translation 204 may no longer be considered valid for reuse 206 by process B. Additionally, in some embodiments, the translation reuse module 110 may be configured to verify that the translated code has not changed, for example as a result of the execution of self-modifying or cross-modifying code, as part of the validity check.

In some embodiments, the translation reuse module 110 may also be configured to insert or embed instructions into one or more pages of the translated code, which, when executed, may assist in the validation of those pages for translation reuse. These embedded instructions may be referred to as an Inter-Page Prologue (IPP), or simply prologue, and may further be configured to validate pages during control transfers (e.g., branching) between different pages of the translated regions of code.
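The validity check described above (matching physical page, matching page attributes, and unmodified code bytes) might be sketched as follows. This is a hypothetical model, not the disclosed implementation; in particular, using a hash to detect self- or cross-modifying code is only one possible realization of the "translated code has not changed" check, and the type and function names are assumptions.

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class Mapping:
    ppn: int          # physical page number
    attrs: frozenset  # page attributes, e.g. {"read", "execute"}

def translation_valid(orig: Mapping, current: Mapping,
                      orig_hash: bytes, code: bytes) -> bool:
    """Reuse is allowed only if the physical page, the page attributes,
    and the code bytes (guarding against self-/cross-modifying code)
    are all unchanged since the original translation was made."""
    return (current.ppn == orig.ppn
            and current.attrs == orig.attrs
            and hashlib.sha256(code).digest() == orig_hash)
```

A change in any of the three inputs (remapped page, altered attributes, or modified code bytes) invalidates the translation and would force a retranslation.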
The IPP may include instructions and/or data that provide an indication of the VPN and PPN used by the BT module 104 during the original translation of those pages. The IPP may then access the page tables to determine if the VPN to PPN mapping is still valid and the page is executable. If these checks fail, a fault may be raised, resulting in the translation being discarded and a new translation being generated. Alternatively, in the case of a fault, the processor may execute code in a lower performance mode (e.g., without the benefit of some aspects of the translation) until a new translation can be generated at a future point in time.

Figure 4 illustrates a flowchart of operations 400 of one example embodiment consistent with the present disclosure. The operations provide a method for BT reuse. At operation 410, a region of code or instructions is received for execution. At operation 420, a check is performed to determine if the region has been previously translated. If not, then the region is translated, at operation 450, and executed at operation 460. If the region has already been translated, then at operation 430, the existing translation is validated. The validation is based on the physical address and the page attributes of the translation. If the validation is successful, the translation is reused and executed, at operation 460; otherwise the region is re-translated at operation 450.

Figure 5 illustrates a flow diagram of operations 500 of one example embodiment consistent with the present disclosure. Shown are a first instruction translation region (T1) 520, a second instruction translation region (T2) 530, a third instruction translation region (T3) 540, a fourth instruction translation region (T4) 560 and a fifth instruction translation region (T5) 570.
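The Figure 4 flow (operations 410 through 460) can be condensed into a few lines of illustrative Python. The function and parameter names are assumptions; `translate`, `validate` and `run` stand in for the BT module, the translation reuse module and the processor, respectively.

```python
def execute_region(region, cache, translate, validate, run):
    """Condensed sketch of the Figure 4 flow; operation numbers in comments."""
    t = cache.get(region)             # 420: previously translated?
    if t is None or not validate(t):  # 430: validate any prior translation
        t = translate(region)         # 450: (re)translate the region
        cache[region] = t
    return run(t)                     # 460: execute the translation
```

A second call with the same region and a passing `validate` reuses the cached translation; a failing `validate` forces the retranslation path.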
Chaining and/or branching of execution between the translation regions is also shown by directed line segments, along with a number of validation operations 510, 550 and 580. Translation regions T1, T2 and T3 are mapped to a first memory page associated with a page number PPN1, while translation regions T4 and T5 are mapped to a second memory page associated with a page number PPN2. It will be appreciated that the number of translation regions, the number of validation operations and their interconnection across page boundaries is presented as an illustrative example and may, of course, vary.

Validation operation 510 may be performed upon initial access to, or execution of, the translations on the first memory page to verify the validity of that page. Execution may proceed through translation regions T1, T2 and T3 until a page boundary is crossed, at which point a validation operation 550 is performed to verify the validity of the second page. If, after execution of translation regions T4 and T5, the page boundary is again crossed, then validation operation 580 may be performed to verify that the first page is still valid for execution.

The validation operations may be performed by the IPP that is included or embedded in the translated code associated with each memory page, for example by the translation reuse module 110. The IPP may include instructions and/or data to provide the virtual address and physical address that were associated with that page when the original translation was performed. The IPP may also include instructions to access the page tables that provide the mapping between the current virtual address instruction pointer and the current physical address instruction pointer. The instruction pointers are associated with the current instruction being executed by the processor 106. The current virtual address instruction pointer may be maintained as described below.
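The Figure 5 chain can be modeled as follows. In this sketch (names and page numbers are illustrative assumptions), a validation is performed on first entry and on every crossing onto a different physical page, mirroring operations 510, 550 and 580.

```python
def run_chain(regions, validate_page):
    """Walk a chain of translation regions. Validate on first entry and
    whenever execution crosses onto a different physical page."""
    validated = []
    current = None
    for name, ppn in regions:
        if ppn != current:
            if not validate_page(ppn):
                raise RuntimeError(f"page {ppn:#x} no longer valid for execution")
            validated.append(ppn)
            current = ppn
        # ... execute translation region `name` here ...
    return validated

# T1..T3 on PPN1, T4..T5 on PPN2, then back onto PPN1:
chain = [("T1", 0x500), ("T2", 0x500), ("T3", 0x500),
         ("T4", 0x501), ("T5", 0x501), ("T1", 0x500)]
```

With an always-passing `validate_page`, walking `chain` performs exactly three validations, for PPN1, PPN2 and PPN1 again, corresponding to operations 510, 550 and 580.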
The IPP may further determine if the current physical address and page attributes match the physical address and page attributes at the time of translation, and thus if the translation regions on that page are valid for execution. The VPN portion of the virtual address that was in use at the time of the original translation (and which may be randomly modified by the ASLR) is not used for the validation match.

The current virtual address instruction pointer may be maintained or updated, for example by the processor 106 or the BT module 104, as execution proceeds. In some embodiments, the current virtual address instruction pointer may be maintained for use by the IPP in a hardware register (referred to as FL_RIP) and may be updated using an instruction included in the native ISA (referred to as ADDRIP). The ADDRIP instruction may be configured to modify the offset component of the virtual address stored in the FL_RIP register in a relatively efficient manner. In some embodiments, for example, a memory page size may be 4k bytes and the virtual address offset may therefore be 12 bits in length. The ADDRIP instruction may thus be configured to clear the least significant 12 bits of the FL_RIP register and add a new value to that register, where the new value is stored as an immediate operand of the ADDRIP instruction.

The IPP may use the ADDRIP instruction in this manner to effect a relative branch, where the immediate operand represents the relative branch offset. In the case of an absolute branch, whether direct or indirect, the IPP may simply write a new value into the FL_RIP register corresponding to the absolute branch location. In the case of subroutine calls and returns, the return virtual address may be computed and pushed onto the stack at call time, and later popped from the stack at return time and written to the FL_RIP register.
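The described ADDRIP behavior (clear the low 12 offset bits of FL_RIP, then add the immediate operand) can be modeled arithmetically. This is a software sketch of a hardware instruction; the 4 KiB page size follows the text, and the function name is an assumption.

```python
PAGE_OFFSET_BITS = 12                      # 4 KiB pages, per the text
OFFSET_MASK = (1 << PAGE_OFFSET_BITS) - 1  # 0xFFF

def addrip(fl_rip, imm):
    """Model of ADDRIP: clear the least significant 12 bits of FL_RIP,
    then add the immediate operand (the new offset or relative target)."""
    return (fl_rip & ~OFFSET_MASK) + imm
```

For example, `addrip(0x7F0000123ABC, 0x010)` yields `0x7F0000123010`, an intra-page target, while an immediate larger than the page size, such as `0x1008`, carries into the VPN bits and lands on the next page.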
Branches that do not cross page boundaries do not need to update the FL_RIP register, since that register need only maintain the correct VPN and an intra-page branch would only affect the offset. Most operations (instructions) that involve the FL_RIP register supply a page offset which is implicitly combined with the VPN in the FL_RIP. This reduces the number of FL_RIP update instructions that are required in the translated code. In some embodiments, the FL_RIP register may also be used as an implicit base with an offset to implement RIP-relative addressing for loads and stores in BT systems.

In some embodiments, IPP checks may be added to the translated code dynamically. For example, it may be statically determined (e.g., at the time of the translation) that there are no branches entering the translation region from any other pages, and therefore the generation and insertion of an IPP for that translation region may be avoided to reduce overhead. At some later point (e.g., during execution) the system may detect a page transition into that translation region and dynamically insert an IPP, in-place, to handle the validation check. This may increase system efficiency by inserting IPPs only when necessary.

Figure 6 illustrates a flowchart of operations 600 of another example embodiment consistent with the present disclosure. The operations provide a method for BT reuse, for example in a system that employs ASLR. At operation 610, a binary translation of a region of code is performed. The BT is a translation from a first ISA to a second ISA, where the second ISA is native to the processor. The translation is performed for execution associated with a first process. At operation 620, a first physical page number is stored. The first physical page number is associated with the binary translation and the first process. At operation 630, an offset of a virtual address instruction pointer is stored. The offset is associated with the binary translation and the first process.
At operation 640, the virtual address instruction pointer is updated during execution by the processor. At operation 650, the binary translation is verified to be valid for reuse for execution associated with a second process. The verification includes generating a second physical page number based on a page table mapping, the mapping based on the updated virtual address instruction pointer and the stored offset. The verification also includes matching the second physical page number to the stored first physical page number.

Figure 7 illustrates a top level system diagram 700 of a platform 710 of another example embodiment consistent with the present disclosure. The platform 710 may be a hardware platform or computing device such as, for example, a smart phone, smart tablet, personal digital assistant (PDA), mobile Internet device (MID), convertible tablet, notebook or laptop computer, desktop computer, server, smart television or any other device, whether fixed or mobile. The device may generally present various interfaces to a user via a display 770 such as, for example, a touch screen, liquid crystal display (LCD) or any other suitable display type.

The system 700 is shown to include a processor 720. In some example embodiments, processor 720 may be implemented as any number of processor cores. The processor (or processor cores) may be any type of processor, such as, for example, a micro-processor, an embedded processor, a digital signal processor (DSP), a network processor, a field-programmable gate array or other device configured to execute code. Processor 720 may be a single-threaded core or a multithreaded core, in that it may include more than one hardware thread context (or "logical processor") per core. System 700 is also shown to include a memory 730 coupled to the processor 720.
The memory 730 may be any of a wide variety of memories (including various layers of memory hierarchy and/or memory caches) as are known or otherwise available to those of skill in the art. System 700 is also shown to include an input/output (IO) system or controller 740, which may be configured to enable or manage data communication between processor 720 and other elements of system 700 or other elements (not shown) external to system 700. System 700 may also include communication interface 750 configured to enable communication between system 700 and any external entities. The communications may conform to or otherwise be compatible with any existing or yet to be developed communication standards, including mobile phone communication standards. For example, the communication interface 750 may use a predetermined wired or wireless communications protocol, such as but not limited to an Internet Protocol, WI-FI protocol, BLUETOOTH protocol, a wide area network (WAN), combinations thereof, and the like. The communication interface 750 may therefore include hardware (i.e., circuitry), software, or a combination of hardware and software allowing the hardware platform 710 to send and receive data signals to/from any of the external entities.

The system 700 may further include binary translation module 104 configured to provide translation reuse in connection with OS 102 employing ASLR hosting applications/processes 108.

It will be appreciated that in some example embodiments, the various components of the system 700 may be combined in a system-on-a-chip (SoC) architecture.
In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software.

Example embodiments of the methods described herein may be implemented in a system that includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a system CPU (e.g., core processor) and/or programmable circuitry. Thus, it is intended that operations according to the methods described herein may be distributed across a plurality of physical devices, such as processing structures at several different physical locations. Also, it is intended that the method operations may be performed individually or in a subcombination, as would be understood by one skilled in the art. Thus, not all of the operations of each of the flow charts need to be performed, and the present disclosure expressly intends that all subcombinations of such operations are enabled as would be understood by one of ordinary skill in the art.

The processor 720 may be any device capable of processing data including, for example, a microprocessor configured to carry out specific operations based on input data, or capable of executing instructions included in computer readable code. For example, the processor 720 may be configured to be programmed to operate according to some example embodiments, and the memory 730 may be configured to store the program. The type and nature of the processor 720 may be selected based on numerous factors such as form factor, desired power consumption, desired processing capability, combinations thereof, and the like.
Non-limiting examples of suitable processors that may be used as the processor 720 include the mobile and desktop processors commercially available from INTEL®, Advanced Micro Devices (AMD®), APPLE®, SAMSUNG®, and NVIDIA®.

The storage medium may be any storage medium capable of storing, containing or carrying instruction(s) and/or data and may include any type of tangible medium, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), digital versatile disks (DVDs) and magneto-optical disks, non-volatile memory, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memory devices (which may include, for example, NAND or NOR type memory structures), magnetic or optical cards, combinations thereof and/or any type of media suitable for storing electronic instructions.

As used in any example embodiment herein, the term "module" may refer to software, firmware and/or circuitry that is/are configured to perform or cause the performance of one or more operations consistent with the present disclosure. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage mediums. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. "Circuitry," as used in any embodiment herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, software and/or firmware that stores instructions executed by programmable circuitry.
The circuitry may be embodied as an integrated circuit, such as an integrated circuit chip. Software and/or applications may be embodied as code or instructions which may be executed on programmable circuitry such as a host processor or other programmable circuitry.

Thus, the present disclosure provides systems, devices, methods and computer readable media for binary translation reuse. The following examples pertain to further embodiments.

According to example 1 there is provided a system for binary translation reuse. The system may include a binary translation module to translate a region of code from a first instruction set architecture (ISA) to a second ISA, for execution associated with a first process. The binary translation module of this example may further be configured to store a first physical page number associated with the translated code and the first process. The system of this example may also include a processor to execute the translated code and to update a virtual address instruction pointer associated with the execution. The system of this example may further include a translation reuse module to validate the translated code for reuse by a second process.
The validation may include generating a second physical page number based on a page table mapping of the updated virtual address instruction pointer; and matching the second physical page number to the stored first physical page number.

Example 2 may include the elements of the foregoing example, and the binary translation module is further to store an offset of the virtual address instruction pointer associated with the translated code and the first process; and the page table mapping is further based on the stored offset.

Example 3 may include the elements of the foregoing examples, and the binary translation module is further to store a first page attribute associated with the translated code and the first process; and the processor is further to determine an updated page attribute associated with the translated code and the second process; and the validation further includes matching the stored first page attribute with the updated page attribute.

Example 4 may include the elements of the foregoing examples, and the binary translation module is further to perform a second binary translation of the region of code for execution associated with the second process, if the validation fails.

Example 5 may include the elements of the foregoing examples, and the system further includes a register (FL_RIP) to maintain the updated virtual address instruction pointer, and the processor provides an instruction (ADDRIP) to modify an offset of the virtual address instruction pointer in the FL_RIP register, the ADDRIP instruction associated with the second ISA.

Example 6 may include the elements of the foregoing examples, and virtual page numbers associated with the translated code differ between the first process and the second process due to Address Space Layout Randomization (ASLR).

Example 7 may include the elements of the foregoing examples, and the binary translation further includes embedding a prologue in the translated region of code, the prologue including instructions
to: store the first physical page number and the offset; and perform the validation in response to detecting that execution of the binary translation traverses a memory page boundary.

Example 8 may include the elements of the foregoing examples, and the updating of the virtual address instruction pointer is performed in association with execution of a branch instruction.

Example 9 may include the elements of the foregoing examples, and the page table is cached in a translation lookaside buffer (TLB).

Example 10 may include the elements of the foregoing examples, and the system is a smart phone, a laptop computing device, a smart TV or a smart tablet.

Example 11 may include the elements of the foregoing examples, and the system further includes a user interface, and the user interface is a touch screen.

According to example 12 there is provided a method for binary translation reuse. The method may include performing a binary translation of a region of code from a first instruction set architecture (ISA) to a second ISA of a processor, the binary translation for execution associated with a first process. The method of this example may also include storing a first physical page number associated with the binary translation and the first process. The method of this example may further include storing an offset of a virtual address instruction pointer associated with the binary translation and the first process. The method of this example may further include updating the virtual address instruction pointer during execution by the processor. The method of this example may further include verifying that the binary translation is valid for reuse for execution associated with a second process.
The verification may include generating a second physical page number based on a page table mapping, the mapping based on the updated virtual address instruction pointer and the stored offset; and matching the second physical page number to the stored first physical page number.

Example 13 may include the elements of the foregoing examples, and further includes storing a first page attribute associated with the binary translation and the first process; and determining an updated page attribute associated with the binary translation and the second process; and the verifying further includes matching the stored first page attribute with the updated page attribute.

Example 14 may include the elements of the foregoing examples, and further includes performing, if the verifying fails, a second binary translation of the region of code for execution associated with the second process.

Example 15 may include the elements of the foregoing examples, and the updating of the virtual address instruction pointer further includes maintaining the virtual address instruction pointer in a register (FL_RIP); and executing an instruction (ADDRIP) to modify an offset of the virtual address instruction pointer in the FL_RIP register, and the ADDRIP instruction is associated with the second ISA.

Example 16 may include the elements of the foregoing examples, and virtual page numbers associated with the binary translation differ between the first process and the second process due to Address Space Layout Randomization (ASLR).

Example 17 may include the elements of the foregoing examples, and the binary translation further includes embedding a prologue in the translated region of code, the prologue including instructions to store the first physical page number and the offset; and perform the verification in response to detecting that execution of the binary translation traverses a memory page boundary.

Example 18 may include the elements of the foregoing examples, and the updating of the virtual
address instruction pointer is performed in association with execution of a branch instruction.

Example 19 may include the elements of the foregoing examples, and the page table is cached in a translation lookaside buffer (TLB).

According to example 20 there is provided a system for binary translation reuse. The system may include means for performing a binary translation of a region of code from a first instruction set architecture (ISA) to a second ISA of a processor, the binary translation for execution associated with a first process. The system of this example may also include means for storing a first physical page number associated with the binary translation and the first process. The system of this example may further include means for storing an offset of a virtual address instruction pointer associated with the binary translation and the first process. The system of this example may further include means for updating the virtual address instruction pointer during execution by the processor. The system of this example may further include means for verifying that the binary translation is valid for reuse for execution associated with a second process.
The verification may include means for generating a second physical page number based on a page table mapping, the mapping based on the updated virtual address instruction pointer and the stored offset; and means for matching the second physical page number to the stored first physical page number.

Example 21 may include the elements of the foregoing examples, and further includes means for storing a first page attribute associated with the binary translation and the first process; and means for determining an updated page attribute associated with the binary translation and the second process; and the verifying further includes means for matching the stored first page attribute with the updated page attribute.

Example 22 may include the elements of the foregoing examples, and further includes means for performing, if the verifying fails, a second binary translation of the region of code for execution associated with the second process.

Example 23 may include the elements of the foregoing examples, and the updating of the virtual address instruction pointer further includes means for maintaining the virtual address instruction pointer in a register (FL_RIP); and means for executing an instruction (ADDRIP) to modify an offset of the virtual address instruction pointer in the FL_RIP register, and the ADDRIP instruction is associated with the second ISA.

Example 24 may include the elements of the foregoing examples, and virtual page numbers associated with the binary translation differ between the first process and the second process due to Address Space Layout Randomization (ASLR).

Example 25 may include the elements of the foregoing examples, and the binary translation further includes means for embedding a prologue in the translated region of code, the prologue including instructions to store the first physical page number and the offset; and perform the verification in response to detecting that execution of the binary translation traverses a memory page boundary.
Example 26 may include the elements of the foregoing examples, and the updating of the virtual address instruction pointer is performed in association with execution of a branch instruction.

Example 27 may include the elements of the foregoing examples, and the page table is cached in a translation lookaside buffer (TLB).

According to another example there is provided at least one computer-readable storage medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the operations of the method as described in any of the examples above.

According to another example there is provided an apparatus including means to perform a method as described in any of the examples above.

The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.
Technologies are disclosed for managing the power usage of components of a computing device while the components and the computing device are in a low-power state, such as a connected standby state. An embedded controller includes a wake-up timer designed to wake up the embedded controller during a low-power state to allow the embedded controller to perform its tasks. A power control system is configured to dynamically alter the timing cycle of the wake-up timer of the embedded controller based on operational data received. The dynamically altered timing cycle is designed to conserve power while maintaining the functionality of the embedded controller.
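As a rough illustration of how a power control module might derive a wake-up period from operational data, consider the following sketch. The policy, thresholds and function name are hypothetical assumptions, not taken from the disclosure; they merely show thermal data shortening the period and low battery lengthening it.

```python
def wakeup_period(base_s, cpu_temp_c, battery_pct):
    """Hypothetical policy: wake the embedded controller more often when
    the processor is hot (thermal tasks are time-critical) and less often
    when the battery is low (to conserve power). Thresholds are
    illustrative assumptions only."""
    period = base_s
    if cpu_temp_c >= 80:
        period = min(period, 1.0)  # hot: wake at least once per second
    if battery_pct < 20:
        period *= 2                # low battery: stretch the cycle
    return period
```

Under this sketch, `wakeup_period(4.0, 85, 50)` shortens the cycle to one second, while `wakeup_period(4.0, 40, 10)` doubles it to eight seconds.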
1.A computing device for managing power during a connection standby state, the computing device comprising:One or more electrical components for entering a low power state;An embedded controller for executing one or more tasks of the computing device; andPower control module for:Determine whether the computing device is in a connection standby state,Initiate a wake-up loop for periodically waking the one or more electrical components of the computing device in response to a determination that the computing device is in the connection standby state,Waking the embedded controller to allow the embedded controller to perform the one or more tasks,In response to waking up the embedded controller, receiving operational data from the embedded controller related to the one or more tasks to be executed by the embedded controller,Generate timing cycle data for the embedded controller, wherein the timing cycle data defines a wake-up period for the embedded controller, andSend the timing cycle data to the embedded controller to set a wake-up timing cycle of the embedded controller based on the timing cycle data.2.The computing device of claim 1, wherein the power control module sends a wake-up command to the embedded controller in response to the determination that the wake-up cycle has been initiated.3.The computing device of claim 1, wherein the power control module receives thermal data indicative of an operating temperature of a processor of the computing device.4.The computing device of claim 3, wherein the power control module determines a wake-up period of the embedded controller based on the thermal data.5.The computing device of claim 1, wherein the power control module receives battery life data indicating an amount of power available in a battery of the computing device.6.The computing device of claim 5, wherein the power control module determines a wake-up period of the embedded controller based on the battery life data.7.The computing device of claim 1, wherein 
the wake-up period defined by the timing cycle data is less than a period of the wake-up cycle initiated by the power control module.8.An embedded controller for managing power during a low-power state, the embedded controller comprising:A wake up management module to (i) receive a wake up command from a power control module of a computing device, wherein the wake up command is generated based on a wake up cycle of the power control module, (ii) in response to the wake up command, The power control module sending operational data, wherein the operational data is related to one or more tasks to be performed by the embedded controller, (iii) receiving a timing cycle from the power control module in response to operational data Data, and (iv) setting a wake-up timing cycle for the embedded controller based on the timing cycle data received from the power control module.9.The embedded controller of claim 8, wherein the wake-up management module is configured to determine whether the embedded controller should wake-up and perform one or more tasks based on an embedded controller wake-up cycle.10.The embedded controller according to claim 8, wherein the wake-up management module is configured to:Measure operating data based on conditions present in the computing device; andSend operational data to the power control module, wherein the operational data relates to one or more tasks to be performed by the embedded controller.11.The embedded controller of claim 8, wherein the wake-up management module sends thermal data indicative of an operating temperature of a processor of the computing device.12.The embedded controller of claim 8, wherein the wake-up management module sends battery life data indicative of an amount of power available in the battery of the computing device.13.A method for managing power of a component during a connection standby state, the method comprising:By the power control module of the computing device, whether the computing device is in a 
connection standby state;
initiating, by the power control module, a wake-up cycle for periodically waking up components of the computing device in response to determining that the computing device is in the connection standby state;
waking up, by the power control module, an embedded controller of the computing device during the wake-up cycle to allow the embedded controller to perform one or more tasks;
receiving, by the power control module and in response to waking up the embedded controller, operational data from the embedded controller related to the one or more tasks to be performed by the embedded controller;
generating, by the power control module, timing cycle data for the embedded controller, wherein the timing cycle data defines a wake-up period of the embedded controller; and
sending the timing cycle data to the embedded controller to set a wake-up timing cycle for the embedded controller based on the timing cycle data.
14. The method of claim 13, wherein waking up the embedded controller comprises sending, by the power control module, a wake-up command to the embedded controller in response to determining that the wake-up cycle has been initiated.
15. The method of claim 13, wherein receiving operational data from the embedded controller comprises receiving, by the power control module, thermal data indicative of an operating temperature of a processor of the computing device.
16. The method of claim 15, wherein generating timing cycle data comprises determining, by the power control module, the wake-up period of the embedded controller based on the thermal data.
17. The method of claim 13, wherein receiving operational data from the embedded controller comprises receiving, by the power control module, battery life data indicative of an amount of power available in a battery of the computing device.
18. The method of claim 17, wherein generating timing cycle data comprises determining, by the power control module, the wake-up period of the embedded controller based on the battery life data.
19. The
method of claim 13, wherein the wake-up period defined by the timing cycle data is less than a period of the wake-up cycle initiated by the power control module.
20. A method for managing power of an embedded controller during a low power state, the method comprising:
receiving, by the embedded controller, a wake-up command from a power control module of a computing device, the wake-up command being generated based on a wake-up cycle of the power control module;
sending, by the embedded controller and in response to the wake-up command, operational data to the power control module, wherein the operational data is related to one or more tasks to be performed by the embedded controller;
receiving, by the embedded controller, timing cycle data from the power control module in response to the operational data; and
setting, by the embedded controller, a wake-up timing cycle of the embedded controller based on the timing cycle data received from the power control module.
21. The method of claim 20, wherein sending the operational data comprises:
measuring, by the embedded controller, operational data based on conditions present in the computing device; and
sending, by the embedded controller, the operational data to the power control module, wherein the operational data relates to the one or more tasks to be performed by the embedded controller.
22. The method of claim 20, wherein sending operational data comprises sending, by the embedded controller, thermal data indicative of an operating temperature of a processor of the computing device.
23. The method of claim 20, wherein sending operational data comprises sending, by the embedded controller, battery life data indicating an amount of power available in a battery of the computing device.
24. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any one of claims 13-23.
25. One or more units for causing a computing device to perform the method of any one of
claims 13-23.
Technologies for Managing Power of an Embedded Controller During a Low-Power State
Cross-Reference to Related Applications
This application claims priority to U.S. Utility Patent Application Serial No. 14/671,721, filed on March 27, 2015, entitled "TECHNOLOGIES FOR MANAGING POWER OF AN EMBEDDED CONTROLLER DURING A LOW-POWER STATE."
Background
Many computing systems include one or more low-power states, one of which may be a connected standby mode. Connected standby is a low-power state characterized by low power consumption while maintaining an Internet connection. Connected standby allows the computing device and its applications to save power while still updating automatically. Another advantage of connected standby is that the computing device can resume normal operation quickly from connected standby. A typical computing device may include hardware, firmware, and/or software that manages power consumption during connected standby.
BRIEF DESCRIPTION OF THE DRAWINGS
The concepts described herein are illustrated by way of example and not limitation in the figures of the accompanying drawings. For simplicity and clarity of illustration, the elements shown in the figures are not necessarily drawn to scale.
Where appropriate, reference numerals have been repeated in the figures to indicate corresponding or analogous elements.
Figure 1 is a simplified block diagram of at least one embodiment of a computing device that supports a connection standby state;
Figure 2 is a simplified block diagram of at least one embodiment of an environment that may be established by the computing device of Figure 1 during a connection standby state;
Figure 3 is a simplified flowchart of at least one embodiment of a method for dynamically adjusting a timing cycle of an embedded controller, which may be executed by the computing device of Figure 1;
Figure 4 is a simplified flowchart of at least one embodiment of a method for dynamically adjusting a timing cycle of an embedded controller, which may be executed by the embedded controller of Figure 1; and
Figure 5 is a simplified diagram of at least one embodiment of power output data of a computing device during a connection standby state.
Detailed Description
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed; on the contrary, the intent is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment.
Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or methodological features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or methodological feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, such feature may not be included or may be combined with other features.
Referring now to FIG.
1, the computing device 102 is configured to dynamically adjust a timing cycle of an embedded controller 128 included in the computing device 102 during a connection standby state or when preparing to enter a connection standby state. In use, as discussed in more detail below, the computing device 102 is configured to manage the power consumption of one or more power-controlled devices while the computing device 102 is in a connection standby state in order to save power. For example, quickly pressing and releasing the power button on many smartphones may cause the smartphone to enter connected standby, in which the screen and other components of the smartphone enter a low-power mode. However, the components and applications of the smartphone stay connected to the Internet while in the connected standby state. For example, an email application on a smartphone may still receive new emails and alert the user to new emails, even though the screen and other components of the smartphone are in the low-power mode.
The embedded controller 128 is also in a low-power state when the computing device 102 enters connected standby. Typically, the embedded controller 128 sets a wake-up timer to a constant timing cycle (e.g., waking up every 1 second) when in the low-power state. The wake-up timer is configured to wake up the embedded controller 128 according to the timing cycle so that the embedded controller 128 can perform its tasks. The wake-up timer and timing cycle allow the embedded controller 128 to conserve power while still performing some of its functions, such as fan control and thermal management of the computing device 102. When the computing device 102 is in connected standby, however, the regular and frequent default timing cycle of the embedded controller 128 may consume more power than needed.
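As a rough illustration of why the default timing cycle matters, the number of embedded-controller wake-ups occurring between two wake-up cycles of the power control module can be computed for a fixed versus a relaxed period. This is a hypothetical Python sketch, not part of the disclosure; the 1-second, 15-second, and 30-second values are taken from examples elsewhere in this description.

```python
# Hypothetical illustration: embedded-controller wake-ups between two
# wake-up cycles of the power control module, for a fixed default timing
# cycle versus an adjusted (relaxed) one.

def ec_wakeups_per_cycle(pcm_cycle_s: float, ec_period_s: float) -> int:
    """Count embedded-controller wake-ups within one power-control wake-up cycle."""
    return int(pcm_cycle_s // ec_period_s)

fixed = ec_wakeups_per_cycle(30.0, 1.0)      # default 1-second timing cycle
adjusted = ec_wakeups_per_cycle(30.0, 15.0)  # relaxed cycle (e.g., cool CPU)

print(fixed, adjusted)  # 30 2
```

With a 30-second power-control wake-up cycle, relaxing the embedded controller's period from 1 second to 15 seconds reduces its wake-ups in that window from 30 to 2, which motivates the dynamic adjustment described below.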
For example, in many computing devices, the embedded controller is configured to monitor thermal events of the CPU; however, when the computing device 102 is in connected standby, the CPU of the computing device 102 may not experience many thermal events that require action by the embedded controller 128. The computing device 102 may therefore be configured, by hardware, firmware, and/or software, to dynamically adjust the wake-up timer of the embedded controller to further save power when the computing device 102 is in a connection standby state. The wake-up timer of the embedded controller 128 may be implemented using hardware, firmware, software, or any combination thereof.
The computing device 102 may be embodied as any type of computing or computer device capable of performing the functions described herein, including, but not limited to, a computer, a multiprocessor system, a server, a rack server, a blade server, a laptop computer, a notebook computer, a network appliance, a web appliance, a distributed computing system, a processor-based system, and/or a consumer electronic device. As shown in FIG. 1, the illustrative computing device 102 includes a processor 120, an input/output (I/O) subsystem 122, a memory 124, a data storage device 126, an embedded controller 128, and peripheral devices 130. Of course, in other embodiments, the computing device 102 may include other or additional components, such as those commonly found in computing devices (e.g., various input/output devices). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the memory 124, or portions thereof, may be incorporated into the processor 120.
The processor 120 may be embodied as any type of processor capable of performing the functions described herein.
For example, the processor 120 may be embodied as a single-core or multi-core processor, a digital signal processor, a microcontroller, or another processor or processing/control circuit. Similarly, the memory 124 may be embodied as any type of volatile or non-volatile memory or data storage device capable of performing the functions described herein. In operation, the memory 124 may store various data and software used during the operation of the computing device 102, such as operating systems, applications, programs, libraries, and drivers. The memory 124 is communicatively coupled to the processor 120 via the I/O subsystem 122, which may be embodied as circuitry and/or components that facilitate input/output operations with the processor 120, the memory 124, and other components of the computing device 102. For example, the I/O subsystem 122 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems that facilitate the input/output operations. In some embodiments, the I/O subsystem 122 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 120, the memory 124, and other components of the computing device 102, on a single integrated circuit chip.
The data storage device 126 may be embodied as any type of device or devices configured for short-term or long-term storage of data, such as memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
The data storage device 126 may store compressed and/or decompressed data processed by the computing device 102.
The computing device 102 also includes the embedded controller 128, which may be embodied as a microcontroller or any other circuit, device, firmware, software, or collection thereof capable of executing various tasks of the computing device 102 that are not handled by the host processor 120. The embedded controller 128 may include various devices and sub-circuits to facilitate the functionality of the embedded controller. For example, in some embodiments, the embedded controller 128 may include its own RAM for use in performing its tasks. The particular tasks performed by the embedded controller 128 may depend on the type of the computing device 102, the current operational state of the computing device 102, and/or other criteria. For example, in some embodiments, the tasks of the embedded controller 128 may be to receive and process signals from various buttons and switches (e.g., of a keyboard), monitor thermal measurements of the processor 120 (including fan control, CPU throttling, and emergency shutdown in response to temperature increases), control lights, monitor and manage the power stored in a battery (including managing a battery charger), control a watchdog timer, or perform other functions.
The peripheral devices 130 of the computing device 102 may include any number of additional input/output devices or interface devices. In the illustrative embodiment, the peripheral devices 130 include power-controlled components 132. Each power-controlled component 132 may be embodied as any electrical component, device, or circuit capable of entering a low-power state, including a connection standby state. Examples of power-controlled components include, but are not limited to, data storage devices, communication circuits and devices, sensors, secure digital readers, and/or any other component capable of entering a low-power state.
Of course, the computing device 102 may include additional or other peripheral devices as may be required to perform the functions of the computing device 102, such as, for example, communication devices, displays, keyboards, and other input/output devices.
Referring now to FIG. 2, in an illustrative embodiment, the computing device 102 establishes an environment 200 during operation. The illustrative environment 200 includes an embedded controller module 202 and a power control module 212. The various modules of the environment 200 may be embodied as hardware, firmware, software, or a combination thereof. For example, the various modules, logic, and other components of the environment 200 may form a portion of, or otherwise be established or executed by, the processor 120, the embedded controller 128, or other hardware components of the computing device 102. As such, in some embodiments, any one or more of the modules of the environment 200 may be embodied as a circuit or collection of electrical devices (e.g., an embedded controller circuit, a power control circuit, etc.).
The embedded controller module 202 is configured to perform the functions of the embedded controller 128, including controlling the embedded controller 128 during the low-power state and during the active state. The embedded controller module 202 may be established by the embedded controller 128 and illustratively includes a power state determination module 204, a wake-up management module 206, and a command module 210.
The power state determination module 204 is configured to determine a current power state of the computing device 102 and a current power state of the embedded controller 128. The power state determination module 204 may use sensors to determine which power state the computing device 102 is currently in, or may otherwise monitor signals received from the computing device 102 regarding the power state of the computing device.
For example, if the computing device 102 is about to enter the connection standby state, the computing device 102 may send a signal to the embedded controller 128 to enter the low-power state as part of the overall connection standby state, which may be detected by the power state determination module 204. Once the power state determination module 204 detects that the embedded controller 128 is in the low-power state, the power state determination module notifies the wake-up management module 206.
The wake-up management module 206 is configured to control the timing cycle of the embedded controller 128 when the embedded controller 128 is in the low-power state. To conserve power while in the low-power state, the embedded controller 128 sets a wake-up timer to periodically power up the embedded controller 128 to perform its assigned tasks. In some embodiments, the wake-up timer is set to a default timing cycle to wake up the embedded controller 128 periodically (e.g., every second). When the embedded controller 128 wakes up, the embedded controller 128 performs its assigned tasks, such as measuring the temperature of the processor of the computing device 102 and determining whether the temperature of the processor is above a certain threshold. The wake-up management module 206 may include a timing cycle adjustment module 208 to determine the timing cycle, or wake-up period, of the embedded controller 128 (i.e., how frequently the embedded controller 128 should be woken up when in the low-power state). For example, at some times the timing cycle may require the embedded controller 128 to wake up every second, while at other times the timing cycle may require the embedded controller 128 to wake up every 15 seconds.
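The interaction between the wake-up timer and the timing cycle adjustment described above can be sketched as follows. This is a minimal Python model for illustration only; the class and method names are hypothetical and not part of the disclosure.

```python
class WakeupTimer:
    """Hypothetical model of the embedded controller's wake-up timer.

    The timer fires periodically according to the current timing cycle;
    a timing cycle adjustment (e.g., from timing cycle data received from
    the power control module) replaces the default period.
    """

    def __init__(self, default_period_s: float = 1.0):
        self.period_s = default_period_s
        self.next_wakeup_s = default_period_s

    def apply_timing_cycle_data(self, new_period_s: float, now_s: float) -> None:
        # New timing cycle data replaces the default period
        # (e.g., relax from 1 s to 15 s when the CPU is cool).
        self.period_s = new_period_s
        self.next_wakeup_s = now_s + new_period_s

    def should_wake(self, now_s: float) -> bool:
        # Fire when the current time reaches the scheduled wake-up,
        # then schedule the next wake-up one period later.
        if now_s >= self.next_wakeup_s:
            self.next_wakeup_s = now_s + self.period_s
            return True
        return False

timer = WakeupTimer()
timer.apply_timing_cycle_data(15.0, now_s=0.0)
print(timer.should_wake(5.0), timer.should_wake(15.0))  # False True
```

After the 15-second cycle is applied, the timer does not fire at 5 seconds but does fire at 15 seconds, mirroring the relaxed wake-up behavior described above.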
The timing cycle adjustment module 208 is configured to receive timing cycle data and to adjust the timing cycle of the embedded controller 128 based on the timing cycle data. In some embodiments, the timing cycle data may include information related to tasks performed by the embedded controller 128 in the computing device 102, such as thermal measurement data related to the temperature of the processor or battery life data indicating the power available in a battery of the computing device 102.
The embedded controller command module 210 is configured to perform the tasks required of the embedded controller 128. As described above, when the embedded controller is fully powered or active, the embedded controller 128 performs several tasks for the computing device 102, such as monitoring for thermal events of the processor 120 and monitoring the battery life of the computing device 102. After the wake-up management module 206 wakes up the embedded controller 128, the command module 210 executes the tasks required of the embedded controller 128.
The power control module 212 is configured to manage the power consumption of the power-controlled components 132 of the computing device 102 when the computing device 102 is in a connection standby state and/or is preparing to enter a connection standby state. In some embodiments, the power control module 212, or a portion thereof, may be embodied as firmware, software, or a combination thereof executed by the operating system of the computing device 102. For example, a portion of the power control module 212 may, in some embodiments, be embodied as a power engine plug-in for Microsoft Windows. Regardless, the illustrative power control module 212 includes a wake-up determination module 214, an embedded controller state detection module 216, and an embedded controller management module 218.
The wake-up determination module 214 is configured to periodically wake certain components of the computing device 102 while the computing device 102 is in the connection standby state.
Many of the power-controlled components 132 enter a low-power mode to save energy when the computing device 102 enters the connected standby state. As part of entering most low-power modes, the power-controlled components 132 are no longer able to perform certain functions, such as connecting to an external network and checking for updates. In order to maintain functionality while conserving power in the connected standby state, the power control module 212 periodically wakes up the power-controlled components 132 in accordance with a wake-up cycle of the power control module 212. For example, the wake-up cycle of the power control module 212 may require the power-controlled components 132 to wake up every thirty seconds, connect to the Internet, and check for updates. In some embodiments, a power-controlled component 132, after being woken up by the power control module 212, performs various functions in addition to connecting to the Internet. The wake-up determination module 214 determines a wake-up cycle that causes all of the power-controlled components 132 controlled by the power control module 212 to wake up at regular intervals. In some embodiments, the wake-up cycle wakes up all of the power-controlled components 132 at one time, while in other embodiments, the power-controlled components may be woken up in a staggered manner to prevent energy surges. Typically, the wake-up cycle of the power control module 212 is a predetermined period of time after which all of the power-controlled components 132 are woken up. For example, a wake-up cycle may require that all components be woken up by the power control module 212 every 30 seconds.
In some embodiments, the wake-up determination module 214 causes the power-controlled components 132 to wake up by sending a wake-up command to all of the power-controlled components 132 managed by the power control module 212.
The embedded controller state detection module 216 is configured to detect whether the embedded controller 128 is in a low-power state. If the computing device 102 is in a connection standby state and the embedded controller 128 is in a low-power state, the power control module 212 may include the embedded controller 128 in the list of power-controlled components 132 managed by the power control module 212. Because the embedded controller 128 performs tasks that may be considered critical to the normal operation of the computing device 102, the embedded controller 128 may be required to wake up more frequently than the other power-controlled components 132.
The embedded controller management module 218 is configured to dynamically manage the timing cycle of the embedded controller 128 when the embedded controller 128 is in the low-power state. If the embedded controller 128 were to wake up and perform tasks using a static, predetermined timing cycle during the low-power state, the embedded controller 128 might wake up more often than necessary. In general, a designer may select a default timing cycle such that the embedded controller 128 wakes more often than needed in order to prevent damage to the computing device 102. To mitigate such unnecessary wake-ups, the embedded controller management module 218 receives operational data from the embedded controller 128 and uses the operational data to determine a new timing cycle for the embedded controller 128. The operational data may include any information related to the embedded controller 128 or to the tasks that the embedded controller 128 needs to perform. For example, the operational data may include thermal data of the processor 120 as measured by the embedded controller 128.
Based on the thermal data and other received operational data, the embedded controller management module 218 determines new timing cycle data for the embedded controller 128. The new timing cycle data is used to set the wake-up timer of the embedded controller 128.
The embedded controller management module 218 includes an embedded controller timing cycle determination module 220, which is configured to determine the new timing cycle of the embedded controller 128. For example, the embedded controller timing cycle determination module 220 may generate timing cycle data that includes an instruction that the embedded controller 128 wake up and perform its tasks every five seconds.
In some embodiments, the embedded controller management module 218 determines a new timing cycle of the embedded controller 128 only during a wake-up cycle of the power control module 212. Between wake-up cycles of the power control module 212, the embedded controller 128 may wake up and execute tasks without sending operational data to the power control module 212. For example, during a wake-up cycle of the power control module 212, the embedded controller management module 218 will receive the operational data from the embedded controller 128, determine new timing cycle data based on the operational data, and send the new timing cycle data to the embedded controller 128. Between wake-up cycles of the power control module 212, the embedded controller 128 wakes up and performs tasks according to its timing cycle. At the next wake-up cycle of the power control module 212, the embedded controller management module 218 will again determine new timing data for the embedded controller 128. In some embodiments, the embedded controller 128 performs its tasks at every wake-up cycle of the power control module 212.
With reference to FIG.
3, in use, the computing device 102 may perform a method 300 for managing power of components of the computing device 102 during a connection standby state and/or when preparing to enter a connection standby state. In the illustrative embodiment, the method 300 is performed by the power control module 212. At block 302, the power control module 212 monitors the computing device 102 and determines whether the computing device 102 is in a connection standby state. If the computing device 102 is not in a connection standby state, the method 300 continues to monitor the state of the computing device 102. If the computing device 102 is in a connection standby state, the power control module 212 determines, at block 304, whether it is time to wake up the power-controlled components 132 based on the wake-up cycle of the power control module 212. Once the wake-up cycle has been initiated, the power control module 212 wakes up all of the power-controlled components 132 controlled by the power control module 212 at block 306.
At block 308, the power control module 212 wakes up and manages the embedded controller 128, including managing the wake-up timer of the embedded controller. At block 310, the embedded controller 128 is woken up by the power control module 212. Once powered, the embedded controller 128 determines embedded controller operational data, such as the battery life of the battery and the temperature of the processor, and sends the operational data to the power control module 212. At block 312, the power control module 212 receives the embedded controller operational data. At block 314, the power control module 212 generates timing cycle data based on the operational data received from the embedded controller 128. For example, if the operational data indicates that the processor is operating at a temperature within the normal operating parameters of the processor 120, the timing cycle data may indicate that the embedded controller 128 should wake up every fifteen seconds to perform its tasks.
However, if the operational data indicates that the processor 120 is operating at an elevated temperature, the timing cycle data may indicate that the embedded controller 128 should wake up more frequently, such as every five seconds, to ensure that the temperature of the processor 120 does not exceed the operating parameters of the processor. In some embodiments, the timing cycle data is determined by balancing the operational data for all of the tasks performed by the embedded controller 128. For example, the operational data may include information regarding the operating temperature of the processor 120, the power available in the battery of the computing device 102, signals received from input/output devices (e.g., keyboards and other buttons), signals received from the power button of the computing device 102, or other information related to the tasks of the embedded controller 128.
At block 316, the power control module 212 sends the timing cycle data to the embedded controller 128. The embedded controller 128 then uses the timing cycle data to set its own timing cycle by setting its wake-up timer. At block 318, the power control module 212 determines whether the computing device 102 has exited the connection standby state. If the computing device 102 has not exited the connection standby state, the power control module 212 loops back to block 304 and waits until it is time to start the next wake-up cycle.
Referring to FIG. 4, in use, the computing device 102 may perform a method 400 for managing the power of the embedded controller 128 during a low-power state. In the illustrative embodiment, the method 400 is performed by the embedded controller 128. At block 402, the embedded controller 128 continuously monitors the computing device 102 until it determines that the computing device 102 is in a connection standby state or is preparing to enter a connection standby state.
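The temperature-based generation of timing cycle data described for method 300 above can be sketched as a simple mapping from thermal data to a wake-up period. This is an illustrative Python sketch: the fifteen-second and five-second periods come from the examples above, while the temperature threshold is a hypothetical assumption, not a value stated in this description.

```python
def timing_cycle_from_thermal_data(cpu_temp_c: float,
                                   normal_max_c: float = 70.0) -> float:
    """Map operational (thermal) data to a wake-up period in seconds.

    Within normal operating parameters the embedded controller may sleep
    longer between wake-ups; at an elevated temperature it must wake more
    frequently so the processor does not exceed its operating parameters.
    The 70 C threshold is a hypothetical placeholder.
    """
    if cpu_temp_c <= normal_max_c:
        return 15.0  # normal temperature: wake every fifteen seconds
    return 5.0       # elevated temperature: wake every five seconds

print(timing_cycle_from_thermal_data(55.0))  # 15.0
print(timing_cycle_from_thermal_data(85.0))  # 5.0
```

In a fuller implementation the period would be balanced across all operational data (battery life, input signals, etc.), as the description notes, rather than derived from temperature alone.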
Once the computing device 102 enters the connection standby state, at block 404, the embedded controller 128 enters a low-power state. In general, the low-power state of the embedded controller 128 involves powering down most of the embedded controller 128 and setting a wake-up timer that periodically wakes up the embedded controller 128.
At block 406, the embedded controller 128 waits until the wake-up timer of the embedded controller 128 or the power control module 212 indicates that the embedded controller 128 should be powered up. At block 408, the embedded controller 128 determines whether the wake-up command came from the power control module 212 or from the wake-up timer of the embedded controller 128. If the wake-up was initiated by the wake-up timer, the embedded controller 128 proceeds to block 416 and performs the tasks assigned to the embedded controller 128. If the wake-up was initiated by the power control module 212 in response to the wake-up cycle of all of the power-controlled components 132, the embedded controller 128 begins the process of acquiring new timing cycle data.
At block 410, the embedded controller 128 obtains the embedded controller operational data and sends the embedded controller operational data to the power control module 212. For example, the operational data may include information regarding the operating temperature of the processor 120, the power available in the battery of the computing device 102, signals received from input/output devices (e.g., keyboards and other buttons), signals received from the power button of the computing device 102, or other information related to the tasks of the embedded controller 128. At block 412, the embedded controller 128 receives the timing cycle data from the power control module 212.
The timing cycle data is based on the operational data sent to the power control module 212 and includes a new timing cycle for use by the embedded controller 128 until the next wake-up cycle of the power control module 212. At block 414, the embedded controller 128 sets its timing cycle based on the received timing cycle data. For example, the old timing cycle may require waking up the embedded controller 128 every second, but the new timing cycle may require the embedded controller 128 to wake up only every five seconds because the processor is operating at a cooler temperature.

At block 416, the embedded controller 128 performs its tasks, such as fan control and thermal event monitoring, battery life monitoring, monitoring of input/output commands, or other tasks. At block 418, the embedded controller 128 determines whether the computing device 102 has exited the connection standby state. If the computing device 102 has not exited the connection standby state, the embedded controller 128 continues to monitor for the wake-up signal from the wake-up timer of the embedded controller 128 or from the power control module 212.

Referring to FIG. 5, an embodiment 500 of power output data of a computing device 102 in a connection standby state is shown. Elements 504, 506, 508 show operational data collected by the embedded controller 128 and used by the power control module 212 to determine timing cycle data. In the illustrative embodiment, graph 502 represents the power usage of the computing device 102 during the connection standby state. Gray bars 510, 512, 514 at times t1, t2, and t3 represent wake-up cycles of the power control module 212 in which the power control module 212 wakes up all of the power-controlled components 132. The white bars 516, 518, 520, 522, 524, 526, 528, 530, 532, 534 represent the timing cycle of the embedded controller 128, that is, the times at which the embedded controller is awakened by the wake-up timer or the power control module 212.
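The wake-up handling of blocks 408-416 can be sketched as a small dispatch routine. All names, the dictionary shape, and the callback signatures are assumptions for illustration only:

```python
# Minimal sketch of the embedded controller's low-power loop (method
# 400): on each wake-up it either performs its tasks (wake-up timer
# fired) or exchanges operational data for new timing cycle data (the
# power control module initiated the wake-up).

def handle_wakeup(source: str, controller: dict, send_op_data, recv_cycle) -> str:
    if source == "timer":
        # Block 416: perform assigned tasks (fan control, thermal
        # monitoring, battery monitoring, etc.).
        return "tasks"
    if source == "power_control_module":
        # Blocks 410-414: send operational data, then receive and
        # apply the new timing cycle data.
        send_op_data(controller["operational_data"])
        controller["wake_period_s"] = recv_cycle()
        return "retimed"
    raise ValueError(f"unknown wake source: {source}")
```

A caller would loop on this until the device exits the connection standby state (block 418).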
In the illustrative embodiment, the white bars 516, 518, 524 are shown separately from the gray bars 510, 512, 514 at times t1, t2, and t3 to better illustrate the power used by the embedded controller 128 during the wake-up cycles of the power control module 212. It should be appreciated that in practice, the power usage of the embedded controller 128 represented by white bars 516, 518, 524 is incorporated into gray bars 510, 512, 514; white bars 516, 518, 524 are shown for illustrative purposes only.

In the illustrative embodiment, elements 504, 506, 508 depict a range of possible thermal temperatures for the processor 120, where the bottom of elements 504, 506, 508 represents a colder temperature and the top of elements 504, 506, 508 represents a higher temperature. Each element 504, 506, 508 is broken up into three separate zones merely to illustrate that different measured temperatures of the processor 120 result in different timing cycles of the embedded controller 128. The arrows 540, 542, 544 depict the temperature of the processor 120 measured by the embedded controller 128 during the wake-up cycles of the power control module 212 at t1, t2, and t3. FIG. 5 depicts the temperature of the processor 120 as the only operational data used by the power control module 212 to determine the timing cycle of the embedded controller 128 by way of example only. In some embodiments, many types of operational data may be used to determine the timing cycle of the embedded controller 128.

In operation, when the power control module 212 causes all of the power-controlled components 132 to wake up at time t1 (the total power consumed in the wake-up cycle is represented by the gray bar 510), the embedded controller 128 measures the temperature of the processor 120. Element 504 represents the range of possible temperatures for the processor 120 at time t1, and arrow 540 represents the temperature measured by the embedded controller at t1.
Element 504 shows that the temperature indicated by arrow 540 is entirely within the operating parameters of the processor 120. The power control module 212 uses the operational data represented by arrow 540 to determine the timing cycle data that will be used by the embedded controller 128 to set its timing cycle. As shown in graph 502, based on the temperature represented by arrow 540, the timing cycle of the embedded controller 128 does not wake up the embedded controller 128 until the next overall wake-up cycle, represented by gray bar 512. In the representation shown in FIG. 5, the timing cycle of the embedded controller 128 between time t1 and time t2 is equal to the wake-up cycle length because the temperature measured by the embedded controller 128 is completely within the safe operating parameters of the processor 120.

At time t2, the power control module 212 again wakes up all of the power-controlled components 132 (the total power consumed in the wake-up cycle is represented by the gray bar 512). When awake, the embedded controller 128 measures the temperature of the processor 120. The measured temperature of the processor 120 at time t2 is indicated by arrow 542 and element 506. Arrow 542 shows that the temperature of the processor 120 is rising and may become a concern. To ensure that the processor does not overheat, the power control module 212 generates a timing cycle for the embedded controller 128 that causes the embedded controller 128 to wake up and execute its tasks more frequently than under the previous timing cycle. The new timing cycle of the embedded controller 128 is indicated by white bars 520, 522: it causes the wake-up timer to wake up the embedded controller 128 twice during the overall wake-up cycle. The more frequent timing cycle of the embedded controller 128 protects the computing device 102 from damage.
The total power consumed by each individual embedded controller wake-up is represented by the height of white bars 520, 522.

At time t3, the power control module 212 again wakes up all of the power-controlled components 132 (the total power consumed in the wake-up cycle is represented by the gray bar 514). Again, the embedded controller 128 collects operational data by measuring the temperature of the processor 120 and sends the operational data to the power control module 212. The temperature of the processor 120 measured at time t3 is indicated by arrow 544 and shows that the temperature of the processor 120 may be rising to dangerously high levels. Upon receiving the temperature indicated by arrow 544, the power control module 212 generates a new timing cycle for the embedded controller 128, represented by the white bars 526, 528, 530, 532, 534. The time period between wake-ups of the embedded controller 128 under the new timing cycle is much shorter than under the two previous timing cycles because of the difference in the operational data received by the power control module 212, i.e., the temperature of the processor 120 is high.

In some embodiments, if the embedded controller 128 determines that measures are required to protect the computing device 102 from damage (e.g., the temperature of the processor 120 has exceeded normal operating parameters), the embedded controller 128 may cause the computing device 102 to exit the connection standby state and take precautions. For example, the embedded controller 128 may communicate with the power control module 212 to wake up the computing device 102 so that action may be taken. Additionally or alternatively, the embedded controller 128 may communicate with the power control module 212 to adjust a wake-up timing cycle for the other components of the computing device 102 (or for the computing device 102 itself).
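The relationship between the overall wake-up cycle (gray bars) and the embedded controller's timing cycle (white bars) can be illustrated with a small helper. The 15-second cycle length and the shorter periods are assumed values, not figures taken from the disclosure:

```python
# Illustrative arithmetic for a FIG. 5-style scenario: given the
# overall wake-up cycle length and the embedded controller's timing
# cycle, count the extra timer-driven wake-ups (white bars) that fall
# strictly between two overall wake-ups (gray bars).

def timer_wakeups_between_cycles(cycle_len_s: float, period_s: float) -> int:
    """The overall wake-up itself is not counted."""
    if period_s >= cycle_len_s:
        # Timing cycle equals (or exceeds) the wake-up cycle: the
        # controller sleeps until the next overall wake-up, as between
        # t1 and t2.
        return 0
    return int(cycle_len_s / period_s) - 1
```

Under these assumed numbers, a 15 s period yields no intermediate wake-ups, a 5 s period yields two (the t2-t3 situation), and a 2.5 s period yields five (the post-t3 situation).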
As shown in embodiment 500, the computing device 102 may dynamically adjust the timing cycle used by the embedded controller 128 during a connection standby state to save power while maintaining the key functionality of the embedded controller.

EXAMPLES

Illustrative examples of the technology disclosed herein are provided below. An embodiment of the technology may include any one or more of the examples described below, as well as any combination thereof.

Example 1 includes a computing device for managing power during a connection standby state, the computing device including one or more electrical components that enter a low-power state; an embedded controller for executing one or more tasks of the computing device; and a power control module for determining whether the computing device is in a connection standby state; initiating, in response to the determination that the computing device is in a connection standby state, a wake-up cycle for periodically waking the one or more electrical components of the computing device; waking up the embedded controller to allow the embedded controller to perform one or more tasks; receiving, in response to the waking of the embedded controller, operational data related to the one or more tasks performed by the embedded controller from the embedded controller; generating timing cycle data for the embedded controller, wherein the timing cycle data defines a wake-up period of the embedded controller; and sending the timing cycle data to the embedded controller to set the wake-up timing cycle of the embedded controller based on the timing cycle data.

Example 2 includes the subject matter of Example 1, and wherein the power control module receives a command to enter the connection standby state.

Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the power control module is to determine whether one or more of the electrical components has entered the low-power state.

Example 4 includes the subject matter of any of Examples 1-3,
and wherein the power control module sends a wake-up command to the embedded controller in response to the determination that the wake-up cycle has been initiated.

Example 5 includes the subject matter of any of Examples 1-4, and wherein the power control module receives thermal data indicative of an operating temperature of a processor of the computing device.

Example 6 includes the subject matter of any of Examples 1-5, and wherein the power control module determines the wake-up period of the embedded controller based on the thermal data.

Example 7 includes the subject matter of any of Examples 1-6, and wherein the power control module receives battery life data indicating an amount of power available in the computing device's battery.

Example 8 includes the subject matter of any of Examples 1-7, and wherein the power control module determines the wake-up period of the embedded controller based on the battery life data.

Example 9 includes the subject matter of any of Examples 1-8, and wherein the wake-up period defined by the timing cycle data is less than the period of the wake-up cycle initiated by the power control module.

Example 10 includes the subject matter of any of Examples 1-9, and wherein the timing cycle data sent to the embedded controller to set the embedded controller wake-up timing cycle causes the embedded controller to wake up more often than the wake-up cycle initiated by the power control module.

Example 11 includes the subject matter of any of Examples 1-10, and wherein the timing cycle data sent to the embedded controller to set the embedded controller wake-up timing cycle causes the embedded controller to wake up less often than the wake-up cycle initiated by the power control module.

Example 12 includes an embedded controller for managing power during a low-power state, the embedded controller including a wake-up management module to (i) receive a wake-up command from a power control module of a computing device, wherein the
wake-up command is based on a wake-up cycle of the power control module, (ii) transmit operational data to the power control module in response to the wake-up command, wherein the operational data is related to one or more tasks performed by the embedded controller, (iii) receive timing cycle data from the power control module in response to the operational data, and (iv) set the wake-up timing cycle of the embedded controller based on the timing cycle data received from the power control module.

Example 13 includes the subject matter of Example 12, and further includes a power status determination module for determining whether the computing device is in a connection standby state and putting the embedded controller into a low-power state.

Example 14 includes the subject matter of any of Examples 12 and 13, and wherein the wake-up management module is to determine whether the embedded controller should wake up and perform one or more tasks based on an embedded controller wake-up cycle.

Example 15 includes the subject matter of any of Examples 12-14, and wherein the wake-up management module measures the operational data based on conditions present in the computing device and transmits the operational data to the power control module, wherein the operational data is related to one or more tasks performed by the embedded controller.

Example 16 includes the subject matter of any of Examples 12-15, and wherein the wake-up management module sends thermal data indicative of the operating temperature of the computing device's processor.

Example 17 includes the subject matter of any of Examples 12-16, and wherein the wake-up management module sends battery life data indicating an amount of power available in the computing device's battery.

Example 18 includes the subject matter of any of Examples 12-17, and wherein the wake-up management module sets the wake-up timing cycle of the embedded controller to wake up the embedded controller
more frequently than the wake-up cycle of the power control module wakes up the embedded controller.

Example 19 includes the subject matter of any of Examples 12-18, and wherein the wake-up management module sets the wake-up timing cycle of the embedded controller to wake up the embedded controller less frequently than the wake-up cycle of the power control module wakes up the embedded controller.

Example 20 includes a method for managing power of a component during a connection standby state, the method comprising: determining, by a power control module of the computing device, whether the computing device is in a connection standby state; initiating, by the power control module and in response to determining that the computing device is in a connection standby state, a wake-up cycle for periodically waking up components of the computing device; waking up, by the power control module and during a wake-up cycle, an embedded controller of the computing device to allow the embedded controller to perform one or more tasks; receiving, by the power control module and in response to waking up the embedded controller, operational data related to the one or more tasks from the embedded controller; generating, by the power control module, timing cycle data for the embedded controller, wherein the timing cycle data defines a wake-up period of the embedded controller; and sending the timing cycle data to the embedded controller to set the wake-up timing cycle of the embedded controller based on the timing cycle data.

Example 21 includes the subject matter of Example 20, and wherein determining whether the computing device is in a connection standby state includes receiving, by the power control module, a command to enter a connection standby state.

Example 22 includes the subject matter of any of Examples 20 and 21, and wherein determining whether the computing device is in a connection standby state comprises determining, by the power control module, whether one or more components of
the computing device have entered the low-power state.

Example 23 includes the subject matter of any of Examples 20-22, and wherein waking up the embedded controller includes sending, by the power control module, a wake-up command to the embedded controller in response to determining that the wake-up cycle has started.

Example 24 includes the subject matter of any of Examples 20-23, and wherein receiving the operational data from the embedded controller includes receiving, by the power control module, thermal data indicative of an operating temperature of a processor of the computing device.

Example 25 includes the subject matter of any of Examples 20-24, and wherein generating the timing cycle data includes determining, by the power control module, a wake-up period of the embedded controller based on the thermal data.

Example 26 includes the subject matter of any of Examples 20-25, and wherein receiving operational data from the embedded controller includes receiving, by the power control module, battery life data indicative of the power available in a battery of the computing device.

Example 27 includes the subject matter of any of Examples 20-26, and wherein generating the timing cycle data includes determining, by the power control module, a wake-up period of the embedded controller based on the battery life data.

Example 28 includes the subject matter of any of Examples 20-27, and wherein the wake-up period defined by the timing cycle data is less than the period of the wake-up cycle initiated by the power control module.

Example 29 includes the subject matter of any of Examples 20-28, and wherein sending the timing cycle data to the embedded controller to set the embedded controller wake-up timing cycle causes the embedded controller to wake up more often than the wake-up cycle initiated by the power control module.

Example 30 includes the subject matter of any of Examples 20-29, and wherein sending the timing cycle data to the embedded controller to set the
embedded controller wake-up timing cycle causes the embedded controller to wake up less often than the wake-up cycle initiated by the power control module.

Example 31 includes a method for managing power of an embedded controller during a low-power state, the method comprising: receiving, by the embedded controller, a wake-up command from a power control module of the computing device, the wake-up command based on a wake-up cycle of the power control module; sending, by the embedded controller and in response to the wake-up command, operational data to the power control module, wherein the operational data is related to one or more tasks performed by the embedded controller; receiving, by the embedded controller, timing cycle data from the power control module in response to the operational data; and setting, by the embedded controller, the wake-up timing cycle of the embedded controller based on the timing cycle data received from the power control module.

Example 32 includes the subject matter of Example 31, further including determining, by the embedded controller, whether the computing device is in a connection standby state and entering the low-power state by the embedded controller.

Example 33 includes the subject matter of any of Examples 31 and 32, and further includes determining, by the embedded controller based on the embedded controller wake-up cycle, whether the embedded controller should wake up and perform one or more tasks.

Example 34 includes the subject matter of any of Examples 31-33, and wherein transmitting the operational data includes measuring, by the embedded controller, operational data based on conditions present in the computing device; and sending, by the embedded controller, the operational data to the power control module, wherein the operational data relates to one or more tasks performed by the embedded controller.

Example 35 includes the subject matter of any of Examples 31-34, and wherein sending the operational data includes sending, by
the embedded controller, thermal data indicative of an operating temperature of a processor of the computing device.

Example 36 includes the subject matter of any of Examples 31-35, and wherein sending the operational data includes sending, by the embedded controller, battery life data indicating an amount of power available in the battery of the computing device.

Example 37 includes the subject matter of any of Examples 31-36, and wherein setting the wake-up timing cycle includes setting, by the embedded controller, the wake-up timing cycle of the embedded controller to wake up the embedded controller more frequently than the wake-up cycle of the power control module wakes up the embedded controller.

Example 38 includes the subject matter of any of Examples 31-37, and wherein setting the wake-up timing cycle of the embedded controller includes setting, by the embedded controller, the wake-up timing cycle of the embedded controller to wake up the embedded controller less frequently than the wake-up cycle of the power control module wakes up the embedded controller.

Example 39 includes one or more machine-readable storage media including a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any of Examples 20-38.

Example 40 includes a computing device for managing power of a component during a connection standby state, the computing device comprising: means for determining whether the computing device is in a connection standby state; means for initiating, in response to determining that the computing device is in a connection standby state, a wake-up cycle to periodically wake up a component of the computing device; means for waking up an embedded controller of the computing device during a wake-up cycle to allow the embedded controller to perform one or more tasks; means for receiving operational data from the embedded controller related to the one or more tasks performed by the embedded controller in response to
waking up the embedded controller; means for generating timing cycle data for the embedded controller, wherein the timing cycle data defines a wake-up period for the embedded controller; and means for sending the timing cycle data to the embedded controller to set the wake-up timing cycle of the embedded controller based on the timing cycle data.

Example 41 includes the subject matter of Example 40, and wherein the means for determining whether the computing device is in a connection standby state includes means for receiving a command to enter a connection standby state.

Example 42 includes the subject matter of Examples 40 and 41, and wherein the means for determining whether the computing device is in a connection standby state includes means for determining whether one or more components of the computing device have entered a low-power state.

Example 43 includes the subject matter of any of Examples 40-42, and wherein the means for waking up the embedded controller includes means for sending a wake-up command to the embedded controller in response to determining that the wake-up cycle has started.

Example 44 includes the subject matter of any of Examples 40-43, and wherein the means for receiving operational data from the embedded controller includes means for receiving thermal data indicative of an operating temperature of a processor of the computing device.

Example 45 includes the subject matter of any of Examples 40-44, and wherein the means for generating the timing cycle data comprises means for determining a wake-up period of the embedded controller based on the thermal data.

Example 46 includes the subject matter of any of Examples 40-45, and wherein the means for receiving operational data from the embedded controller includes means for receiving battery life data indicative of the power available in a battery of the computing device.

Example 47 includes the subject matter of any of Examples 40-46, and wherein the means for generating the timing cycle data comprises means for
determining a wake-up period of the embedded controller based on the battery life data.

Example 48 includes the subject matter of any of Examples 40-47, and wherein the wake-up period defined by the timing cycle data is less than the period of the wake-up cycle initiated by the power control module.

Example 49 includes the subject matter of any of Examples 40-48, and wherein the means for sending the timing cycle data to the embedded controller to set the wake-up timing cycle of the embedded controller causes the embedded controller to wake up more often than the wake-up cycle initiated by the power control module.

Example 50 includes the subject matter of any of Examples 40-49, and wherein the means for sending the timing cycle data to the embedded controller to set the wake-up timing cycle of the embedded controller causes the embedded controller to wake up less often than the wake-up cycle initiated by the power control module.

Example 51 includes a computing device for managing power of an embedded controller during a low-power state, the computing device including means for receiving a wake-up command from a power control module of the computing device, the wake-up command based on a wake-up cycle of the power control module; means for sending operational data to the power control module in response to the wake-up command, wherein the operational data is associated with one or more tasks performed by the embedded controller; means for receiving timing cycle data from the power control module in response to the operational data; and means for setting a wake-up timing cycle for the embedded controller based on the timing cycle data received from the power control module.

Example 52 includes the subject matter of Example 51, and further includes means for determining whether the computing device is in a connection standby state and means for entering a low-power state.

Example 53 includes the subject matter of any of Examples 51 and 52, and further includes means for determining whether the embedded
controller should wake up and perform one or more tasks based on the embedded controller wake-up cycle.

Example 54 includes the subject matter of any of Examples 51-53, and wherein the means for sending the operational data comprises means for measuring the operational data based on the conditions present in the computing device; and means for sending the operational data to the power control module, wherein the operational data is related to one or more tasks performed by the embedded controller.

Example 55 includes the subject matter of any of Examples 51-54, and wherein the means for sending the operational data comprises means for sending thermal data indicative of an operating temperature of a processor of the computing device.

Example 56 includes the subject matter of any of Examples 51-55, and wherein the means for sending the operational data comprises means for sending battery life data indicative of the amount of power available in the computing device's battery.

Example 57 includes the subject matter of any of Examples 51-56, and wherein the means for setting the wake-up timing cycle includes means for setting the wake-up timing cycle of the embedded controller to wake up the embedded controller more frequently than the wake-up cycle of the power control module wakes up the embedded controller.

Example 58 includes the subject matter of any of Examples 51-57, and wherein the means for setting the wake-up timing cycle of the embedded controller includes means for setting the wake-up timing cycle of the embedded controller to wake up the embedded controller less frequently than the wake-up cycle of the power control module wakes up the embedded controller.
Implementations of systems, methods and apparatus include aspects of resource conservation strategies that may be useful for a USB compliant device that experiences resource limitations over durations longer than contemplated by the USB standards. Implementations of systems, methods and apparatus disclosed herein enable a USB compliant device to selectively process interrupts and/or other overhead resulting from USB communications between a host and the device. By not processing some interrupts and/or other overhead, based in part on the current level of resource utilization, a device can free up resources needed to process relatively high data-rate incoming traffic from the host. In some implementations, when locally implemented techniques prove to be insufficient, the device may optionally request that the host reduce the data-rate on the downlink.
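The selective-processing idea, choosing an interrupt-processing probability from discrete levels of resource utilization and then comparing a random number against it, can be sketched as follows. All thresholds, probabilities, and names are illustrative assumptions, not values from the disclosure:

```python
import random

# Hedged sketch of resource-based interrupt throttling: map discrete
# utilization levels to a probability of processing a received
# interrupt, then decide per interrupt by comparing a random number
# against the selected probability.

LEVELS = [           # (upper utilization threshold, processing probability)
    (0.50, 1.00),    # low utilization: process every interrupt
    (0.80, 0.50),    # moderate: process about half of them
    (1.01, 0.10),    # high: process rarely, freeing cycles for data
]

def processing_probability(utilization: float) -> float:
    """Select the operating parameter value for the current level."""
    for upper, prob in LEVELS:
        if utilization < upper:
            return prob
    return LEVELS[-1][1]

def should_process(utilization: float, rng=random.random) -> bool:
    # Compare a randomly generated number to the selected probability
    # value; process the interrupt only if it falls below it.
    return rng() < processing_probability(utilization)
```

An alternative parameter, per the same approach, would be a minimum interrupt interval: drop any interrupt arriving sooner than the selected interval after the last one processed.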
WHAT IS CLAIMED IS: 1. A method comprising: sensing at least one signal indicative of a measurement of a corresponding resource located on a first device to support communication between the first device and a second device; determining a resource utilization value based on the at least one signal; and adjusting an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the first device. 2. The method of Claim 1 further comprising: at least one of providing and determining discrete levels of resource utilization based on one or more resources located on the first device to support communication; and at least one of providing and determining a respective operating parameter value for each of the discrete levels of resource utilization, wherein adjusting the operating parameter comprises selecting one of the operating parameter values for the operating parameter based at least in part on the resource utilization value. 3. The method of Claim 2, wherein each discrete level of resource utilization is defined at least in part by at least one of a lower threshold and an upper threshold. 4. The method of Claim 2, wherein the at least one operating parameter comprises a minimum interrupt interval time value, wherein a duration of time between sequential interrupts is at least the selected minimum interrupt interval time value. 5. The method of Claim 4 further comprising gating interrupts generated by a data link interface with the selected minimum interrupt interval time value. 6. The method of Claim 5, wherein the interrupts generated by the data link interface comprise at least one of software interrupts and hardware interrupts. 7. The method of Claim 5, wherein gating comprises at least one of software gating and hardware gating. 8. 
The method of Claim 5, wherein the data link interface comprises a bus interface as defined by at least one of the releases of the Universal Serial Bus standard. 9. The method of Claim 2 wherein the at least one operating parameter comprises a probability value of processing in response to a received event-signal. 10. The method of Claim 9, wherein event-signal is at least one of an interrupt generated by a data link interface and an indicator that the amount of data in a buffer has crossed a threshold. 11. The method of Claim 10 further comprising: receiving an interrupt from the data link interface; and determining whether to process in response to the received interrupt based at least in part on one of the selected probability value and prior determinations. 12. The method of Claim 11 wherein determining whether to process in response to the received interrupt further comprises: comparing a randomly generated number to the selected probability value; and processing in response to the received interrupt based on the comparison. 13. The method of Claim 12, wherein the selected probability value comprises one of a probability of not processing in response to a received interrupt and a probability of processing in response to a received interrupt. 14. The method of Claim 11 wherein determining whether to process in response to the received interrupt further comprises: electing not to process when processing occurred in response to a combination of previously received interrupts; and electing to process when processing did not occur in response to a combination of previously received interrupts. 15. The method of Claim 11, wherein processing in response to one of the received interrupt and threshold amount of data in the buffer comprises processing at least one of the interrupt and data in the buffer. 16. 
The method of Claim 1, further comprising: evaluating a change in operating performance in response to the adjustment made to the operating parameter; determining whether to adjust a transmission setting regarding a data link between the first device and the second device based on the evaluation of the change in operating performance; and transmitting a signal indicative of a request to change the transmission setting to the second device. 17. A device comprising: a monitoring entity configured to sense at least one signal indicative of a measurement of a corresponding resource located on the device to support communication between the device and a second device; and a controller configured to: determine a resource utilization value based on the at least one signal; and adjust an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the device. 18. The device of Claim 17 wherein the controller is further configured to: at least one of provide and determine discrete levels of resource utilization based on one or more resources located on the device to support communication; and at least one of provide and determine a respective operating parameter value for each of the discrete levels of resource utilization, wherein adjusting the operating parameter comprises selecting one of the operating parameter values for the operating parameter based at least in part on the resource utilization value. 19. The device of Claim 18, wherein each discrete level of resource utilization is defined at least in part by at least one of a lower threshold and an upper threshold. 20. The device of Claim 18, wherein the at least one operating parameter comprises a minimum interrupt interval time value, wherein a duration of time between sequential interrupts is at least the selected minimum interrupt interval time value. 21.
The device of Claim 20, wherein the controller is further configured to gate interrupts generated by a data link interface with the selected minimum interrupt interval time value. 22. The device of Claim 21, wherein the interrupts generated by the data link interface comprise at least one of software interrupts and hardware interrupts. 23. The device of Claim 21, wherein gating comprises at least one of software gating and hardware gating. 24. The device of Claim 21, wherein the data link interface comprises a bus interface as defined by at least one of the releases of the Universal Serial Bus standard. 25. The device of Claim 18 wherein the at least one operating parameter comprises a probability value of processing in response to a received event-signal. 26. The device of Claim 25, wherein the event-signal is an interrupt generated by a data link interface. 27. The device of Claim 26 wherein the controller is further configured to: receive an interrupt from the data link interface; and determine whether to process in response to the received interrupt based at least in part on one of the selected probability value and prior determinations. 28. The device of Claim 27 wherein in order to determine whether to process in response to the received interrupt the controller is further configured to: compare a randomly generated number to the selected probability value; and process in response to the received interrupt based on the comparison. 29. The device of Claim 28, wherein the selected probability value comprises one of a probability of not processing in response to a received interrupt and a probability of processing in response to a received interrupt. 30. 
The device of Claim 27 wherein in order to determine whether to process in response to the received interrupt the controller is further configured to: elect not to process when processing occurred in response to a combination of previously received interrupts; and elect to process when processing did not occur in response to a combination of previously received interrupts. 31. The device of Claim 27, wherein processing in response to the received interrupt comprises processing at least one of the interrupt and data in a buffer. 32. The device of Claim 17, wherein the controller is further configured to: evaluate a change in operating performance in response to the adjustment made to the operating parameter; determine whether to adjust a transmission setting regarding a data link between the device and the second device based on the evaluation of the change in operating performance; and transmit a signal indicative of a request to change the transmission setting to the second device. 33. A device comprising: means for sensing at least one signal indicative of a measurement of a corresponding resource located on the device to support communication between the device and a second device; means for determining a resource utilization value based on the at least one signal; and means for adjusting an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the device. 34. 
The device of Claim 33 further comprising: at least one of means for providing and means for determining discrete levels of resource utilization based on one or more resources located on the device to support communication; and at least one of means for providing and means for determining a respective operating parameter value for each of the discrete levels of resource utilization, wherein the adjusting means is further configured to adjust the operating parameter by selecting one of the operating parameter values for the operating parameter based at least in part on the resource utilization value. 35. The device of Claim 34, wherein each discrete level of resource utilization is defined at least in part by at least one of a lower threshold and an upper threshold. 36. The device of Claim 34, wherein the at least one operating parameter comprises a minimum interrupt interval time value, wherein a duration of time between sequential interrupts is at least the selected minimum interrupt interval time value. 37. The device of Claim 36, further comprising means for gating interrupts generated by a data link interface with the selected minimum interrupt interval time value. 38. The device of Claim 37, wherein the interrupts generated by the data link interface comprise at least one of software interrupts and hardware interrupts. 39. The device of Claim 37, wherein the gating means comprises at least one of software gating and hardware gating. 40. The device of Claim 37, wherein the data link interface comprises a bus interface as defined by at least one of the releases of the Universal Serial Bus standard. 41. The device of Claim 34 wherein the at least one operating parameter comprises a probability value of processing in response to a received event-signal. 42. The device of Claim 41, wherein the event-signal is an interrupt generated by a data link interface. 43. 
The device of Claim 42 further comprising: means for receiving an interrupt from the data link interface; and means for determining whether to process in response to the received interrupt based at least in part on one of the selected probability value and prior determinations. 44. The device of Claim 43 wherein in order to determine whether to process in response to the received interrupt the determining means is further configured to: compare a randomly generated number to the selected probability value; and process in response to the received interrupt based on the comparison. 45. The device of Claim 44, wherein the selected probability value comprises one of a probability of not processing in response to a received interrupt and a probability of processing in response to a received interrupt. 46. The device of Claim 43 wherein in order to determine whether to process in response to the received interrupt the determining means is further configured to: elect not to process when processing occurred in response to a combination of previously received interrupts; and elect to process when processing did not occur in response to a combination of previously received interrupts. 47. The device of Claim 43, wherein processing in response to the received interrupt comprises processing at least one of the interrupt and data in a buffer. 48. The device of Claim 33, further comprising: means for evaluating a change in operating performance in response to the adjustment made to the operating parameter; means for determining whether to adjust a transmission setting regarding a data link between the device and the second device based on the evaluation of the change in operating performance; and means for transmitting a signal indicative of a request to change the transmission setting to the second device. 49. 
A computer program product comprising a computer readable medium comprising instructions, stored in a non-transitory memory, that when executed cause an apparatus to: sense at least one signal indicative of a measurement of a corresponding resource located on a first device to support communication between the first device and a second device; determine a resource utilization value based on the at least one signal; and adjust an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the first device. 50. The computer program product of Claim 49 further comprising instructions, stored in the non-transitory memory, that when executed cause the apparatus to: at least one of provide and determine discrete levels of resource utilization based on one or more resources located on the first device to support communication; and at least one of provide and determine a respective operating parameter value for each of the discrete levels of resource utilization, wherein adjusting the operating parameter comprises selecting one of the operating parameter values for the operating parameter based at least in part on the resource utilization value. 51. The computer program product of Claim 50, wherein each discrete level of resource utilization is defined at least in part by at least one of a lower threshold and an upper threshold. 52. The computer program product of Claim 50, wherein the at least one operating parameter comprises a minimum interrupt interval time value, wherein a duration of time between sequential interrupts is at least the selected minimum interrupt interval time value. 53. 
The computer program product of Claim 52 further comprising instructions, stored in the non-transitory memory, that when executed cause the apparatus to gate interrupts generated by a data link interface with the selected minimum interrupt interval time value. 54. The computer program product of Claim 53, wherein the interrupts generated by the data link interface comprise at least one of software interrupts and hardware interrupts. 55. The computer program product of Claim 53, wherein gating comprises at least one of software gating and hardware gating. 56. The computer program product of Claim 53, wherein the data link interface comprises a bus interface as defined by at least one of the releases of the Universal Serial Bus standard. 57. The computer program product of Claim 50 wherein the at least one operating parameter comprises a probability value of processing in response to a received event-signal. 58. The computer program product of Claim 57, wherein the event-signal is an interrupt generated by a data link interface. 59. The computer program product of Claim 58 further comprising instructions, stored in the non-transitory memory, that when executed cause the apparatus to: receive an interrupt from the data link interface; and determine whether to process in response to the received interrupt based at least in part on one of the selected probability value and prior determinations. 60. The computer program product of Claim 59 wherein in order to determine whether to process in response to the received interrupt the instructions stored in the non-transitory memory further comprise instructions that when executed cause the apparatus to: compare a randomly generated number to the selected probability value; and process in response to the received interrupt based on the comparison. 61. 
The computer program product of Claim 60, wherein the selected probability value comprises one of a probability of not processing in response to a received interrupt and a probability of processing in response to a received interrupt. 62. The computer program product of Claim 59 wherein in order to determine whether to process in response to the received interrupt the instructions stored in the non-transitory memory further comprise instructions that when executed cause the apparatus to: elect not to process when processing occurred in response to a combination of previously received interrupts; and elect to process when processing did not occur in response to a combination of previously received interrupts. 63. The computer program product of Claim 59, wherein processing in response to the received interrupt comprises processing at least one of the interrupt and data in a buffer. 64. The computer program product of Claim 49, further comprising instructions, stored in the non-transitory memory, that when executed cause the apparatus to: evaluate a change in operating performance in response to the adjustment made to the operating parameter; determine whether to adjust a transmission setting regarding a data link between the first device and the second device based on the evaluation of the change in operating performance; and transmit a signal indicative of a request to change the transmission setting to the second device.
SYSTEMS, METHODS AND APPARATUS FOR DATA COMMUNICATION CLAIM OF PRIORITY [0001] The present Application for Patent claims priority to both U.S. Provisional Application No. 61/259,054, entitled "RESOURCE CONSERVATION STRATEGIES FOR USB COMPLIANT DEVICES," filed November 6, 2009, and U.S. Provisional Application No. 61/259,323, entitled "RESOURCE CONSERVATION STRATEGIES FOR USB COMPLIANT DEVICES," filed November 9, 2009; both of which are hereby expressly incorporated by reference herein. BACKGROUND Field [0002] The present application relates to regulating communication between devices, and in particular, to resource conservation for data communication protocol compliant devices. Background [0003] The Universal Serial Bus (USB) standard defines a data communication protocol for connecting electronic peripheral devices to a host device. Thus far, there have been three releases of the USB standard (USB 1.0, USB 2.0 and USB 3.0). The USB standard was originally conceived to replace non-standardized serial and parallel data ports on computers, which called for various device drivers to be developed and maintained. However, the ensuing popularity of the USB standard has made USB ports standard features on video game consoles, DVD players, smart phones, and a wide variety of other consumer electronics. [0004] Peripherals are sometimes referred to as functions, and may include other computers and devices such as keyboards, scanners, digital cameras, printers, external storage devices, etc. The USB standard enables plug-and-play capabilities, meaning that peripheral devices can be connected and disconnected from a host without powering down or rebooting the host. Rather, when a device is first connected, the host enumerates and recognizes it, and loads the device driver needed for that device. 
The host and connected peripheral are then able to communicate data to one another. SUMMARY [0005] Various embodiments of systems, methods and devices within the scope of the appended claims each have several aspects, no single one of which is solely responsible for the desirable attributes described herein. Without limiting the scope of the appended claims, some prominent features are described herein. After considering this discussion, and particularly after reading the section entitled "Detailed Description," one will understand how the features of various embodiments are used to manage monitoring of a page channel or the like. [0006] One aspect of the disclosure is an implementation for a method including sensing at least one signal indicative of a measurement of a corresponding resource located on a first device to support communication between the first device and a second device; determining a resource utilization value based on the at least one signal; and adjusting an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the first device. [0007] Another aspect of the disclosure is an implementation for a device including a monitoring entity configured to sense at least one signal indicative of a measurement of a corresponding resource located on the device to support communication between the device and a second device. The device also includes a controller configured to determine a resource utilization value based on the at least one signal; and adjust an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the device. 
[0008] Yet another aspect of the disclosure is an implementation for a device including means for sensing at least one signal indicative of a measurement of a corresponding resource located on the device to support communication between the device and a second device; means for determining a resource utilization value based on the at least one signal; and means for adjusting an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the device. [0009] Yet even another aspect of the disclosure is an implementation for a computer program product including a computer readable medium comprising instructions, stored in a non-transitory memory, that when executed cause an apparatus to sense at least one signal indicative of a measurement of a corresponding resource located on a first device to support communication between the first device and a second device; determine a resource utilization value based on the at least one signal; and adjust an operating parameter by selecting an operating parameter value based at least in part on the resource utilization value, wherein the operating parameter affects the processing of communication by the first device. BRIEF DESCRIPTION OF THE DRAWINGS [0010] So that the manner in which features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective aspects. [0011] FIG. 1 is a simplified block diagram of a USB system. [0012] FIG. 2 is a simplified block diagram of an implementation of a peripheral device. [0013] FIG. 
3 is a simplified block diagram of an implementation of a peripheral device. [0014] FIG. 4 is a simplified block diagram of an implementation of a peripheral device. [0015] FIG. 5 is a simplified block diagram of an implementation of a peripheral device. [0016] FIG. 6 is a flowchart of an implementation of a method. [0017] FIG. 7 is a flowchart of an implementation of a method. [0018] FIG. 8 is a signal diagram of an implementation of a method. [0019] In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures. DETAILED DESCRIPTION [0020] Various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein. [0021] Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. 
Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof. [0022] As noted above, USB ports are now included as standard features on a wide variety of electronic devices to enable the devices to interface with a host controller and/or to enable devices to interface with one another. The host is typically a computer, but in numerous applications the host can also be a video game console, a smart phone, a camera, a tablet computer or any other electronic device. There are also various types of devices that can interface with a host. For example, peripheral devices include hubs, other computers, mouse devices, keyboards, scanners, digital cameras, printers, external storage devices, etc. [0023] Those skilled in the art will appreciate from the present disclosure that many of the types of devices that can be considered host devices can also be considered peripheral devices and vice versa. That is, a host and a peripheral device can be the same type of device. The USB standard provides the titles "host" and "peripheral device" as a convenient way of identifying which of the two devices has greater relative control over the data link that is established between the two devices according to the USB standard. The host functions as the primary controller of the USB data link, while the peripheral device has secondary control, if at all. 
[0024] Furthermore, devices in a USB system often connect to a USB host in a tiered star topology. In such a configuration, a USB system includes a polled bus in which a host controller includes a single USB controller that manages all communication on the bus and monitors the bus topology for changes due to devices being connected and/or disconnected. [0025] While the USB system supports multiple peripherals connected to the bus, the USB protocol is a point-to-point protocol. In other words, a single host can send data to a single uniquely addressed device at a time. Thus, data for the various devices are time multiplexed so that each device can receive or transmit data during its time slot. [0026] A USB system generally defines frames that are one millisecond long. Within that frame, the USB system may allocate different time slots to many or all of the devices on the bus. Each device has a unique address, so the device knows that transmitted data is intended for it, or supplies its unique address with data it sends so the host knows from which device the data is received. [0027] FIG. 1 is a simplified block diagram of a simple USB system 100 showing a single host 110 and a single device 120. As noted above, those skilled in the art will appreciate that one or more devices may be connected to a host, and that a single device has been shown in FIG. 1 merely to illustrate more pertinent aspects of implementations disclosed herein. [0028] The host 110 and the device 120 share an uplink 124a and a downlink 124b. The uplink is used to communicate data from the device 120 to the host 110. The downlink is used to communicate data from the host 110 to the device. While the uplink 124a and downlink 124b have been illustrated as separate connections, those skilled in the art will appreciate that the uplink 124a and downlink 124b can exist on the same physical connection between the host 110 and the device 120. 
The uplink 124a and downlink 124b are typically included in the USB bus managed by the host 110. The device 120 also includes a buffer 121, which is used to temporarily store at least data received via the downlink 124b. [0029] Most USB bus transactions include three packets. The host 110 sends a token packet describing the type and direction of the transaction, a device address, and an endpoint number. The device 120 that is addressed recognizes its address from the token packet. Data is transferred either from the host 110 to the addressed device 120 or from the addressed device 120 to the host 110 based on the direction specified in the token packet. In most cases, the destination of the data responds with a handshake packet indicating a receipt status for the transferred data, which is described in greater detail below. [0030] In operation, when the device 120 is first connected to the host 110, the device 120 goes through an initialization, enumeration, and configuration process to set up the device 120 for use by the host 110, and in some implementations, the client software thereon. [0031] While the USB standards provide a convenient standardized interface that supports data-rates of the order of 480 Mbps (raw data-rates) on a USB link, relatively high data-rates pose unique challenges to devices with limited resources. The USB 2.0 standard provides peripheral devices with only a limited set of mechanisms for controlling the rate of the incoming data flow from the host. However, the flow-control mechanisms are of limited use, and can exacerbate limitations caused by resource constraints when the resources available to the peripheral device are limited over a relatively long duration and/or permanently. For example, some devices may be handheld and/or portable devices that have limited resources. These resources may include, without limitation, available operating power, relatively small allotments of memory and processing power. 
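The three-packet transaction of paragraph [0029] can be sketched in simplified form as follows; the class, field, and return values are hypothetical illustrations, not the on-the-wire USB packet formats:

```python
# Simplified sketch of the token/data/handshake transaction described
# above. Names and structures are illustrative, not the USB wire format.
from dataclasses import dataclass

@dataclass
class Token:
    direction: str   # "OUT": host -> device, "IN": device -> host
    address: int     # unique device address assigned at enumeration
    endpoint: int

def respond_to_token(token: Token, device_address: int) -> str:
    """Sketch of what a device at device_address does upon seeing a token."""
    if token.address != device_address:
        return "ignore"      # token addressed to some other device
    if token.direction == "OUT":
        return "receive"     # accept host data, then return a handshake
    return "transmit"        # "IN": supply data for the host to read

print(respond_to_token(Token("OUT", 5, 1), device_address=5))  # receive
print(respond_to_token(Token("IN", 5, 1), device_address=5))   # transmit
print(respond_to_token(Token("OUT", 5, 1), device_address=9))  # ignore
```

The sketch shows only the addressing and direction logic of the token phase; the data and handshake phases follow it as the text describes.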
[0032] More specifically, the currently available USB flow-control mechanisms were designed to alleviate short term elevated demand caused by temporary constraints on the resources available to the device. In other words, the currently available USB 2.0 standard provides basic flow-control mechanisms that were designed based on the assumption that the resource constraints that trigger the flow-control mechanisms will last for a brief time-span. The available flow-control mechanisms do not provide an adequate solution to situations where a peripheral device has limited resources, such as available battery power. [0033] With further reference to FIG. 1, according to the USB standards, one such flow-control mechanism specifies that every segment of data/control traffic sent over the link from the host 110 to the device 120 must be acknowledged in one of three possible ways. First, the device 120 sends a positive acknowledgement (ACK) when data is received error-free. Second, the device 120 sends a negative acknowledgement (ERR) to denote cyclic redundancy check (CRC) failures, erroneous data reception and/or corruption over the USB link to the host 110. Third, the device 120 may send a NAK signal to the host 110 to flow-control the host 110. [0034] A NAK signal from the device 120 to the host 110 indicates to the host 110 that the device 120 is unable to process the incoming data due to temporary resource constraints. The host may attempt a retransmission as early as at the next micro-frame, triggering more NAKs from the device 120 in the process. [0035] For portable devices that are self-powered, and constrained at least by available battery power, this system is inefficient because the transmission of each NAK translates to wastage of battery power and bandwidth over the USB link. 
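The three acknowledgement outcomes of paragraph [0033] can be summarized as a simple decision function; the status inputs and the function name are hypothetical, not USB-defined APIs:

```python
# Sketch of the per-transfer handshake choice described above: ERR on a
# CRC failure, NAK when resources cannot accept the data, else ACK.
# Function and parameter names are illustrative only.

def choose_handshake(crc_ok: bool, buffer_free: int, needed: int) -> str:
    if not crc_ok:
        return "ERR"   # corrupted or erroneous reception
    if buffer_free < needed:
        return "NAK"   # flow-control the host: cannot process right now
    return "ACK"       # data received error-free and accepted

print(choose_handshake(crc_ok=False, buffer_free=512, needed=64))  # ERR
print(choose_handshake(crc_ok=True, buffer_free=32, needed=64))    # NAK
print(choose_handshake(crc_ok=True, buffer_free=512, needed=64))   # ACK
```

As paragraph [0034] notes, each NAK in this scheme costs battery power and bus bandwidth, which motivates the level-based conservation strategies described later.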
As an alternative, a PING protocol also defined in the USB standards helps reduce the generation of excessive NAK traffic on the bus, but even this mechanism is based on the assumption that the resource constraints that triggered the flow-control mechanisms will occur for short intervals. Hence even the PING protocol is fairly inefficient if the device is resource constrained for extended periods of time. But resources such as available battery power, memory and processing power are sometimes limited for portable devices over long durations and/or even permanently given the initial configuration of the portable device. [0036] Implementations of systems, methods and apparatus include aspects of resource conservation strategies that may be useful for a USB compliant device that experiences resource limitations over durations longer than contemplated by the USB standards. Implementations of systems, methods and apparatus disclosed herein enable a USB compliant device to selectively process interrupts and/or other overhead resulting from USB communications between a host and the device. By not processing some interrupts and/or other overhead, based in part on the current level of resource utilization, a device can free up resources needed to process relatively high data-rate incoming traffic from the host. In some implementations, when locally implemented techniques prove to be insufficient, the device may optionally request that the host reduce the data-rate on the downlink. [0037] FIG. 2 is a simplified block diagram of an implementation of a peripheral device 120. The device 120 illustrated in FIG. 2 is similar to and adapted from the device 120 illustrated in FIG. 1. Accordingly, elements common to both devices share common reference indicia, and only differences between the devices are described herein for the sake of brevity. With reference to FIG. 
2, the device 120 includes a USB bus interface 123, a USB logical device 125 and a functional element 127, as well as the aforementioned buffer 121 and uplink/downlink connection 124a,b described above. Those skilled in the art will appreciate that a USB compliant device may include other components; however, the device 120 illustrated in FIG. 2 includes those components that are more pertinent to aspects of implementations within the ambit of the appended claims. [0038] The USB bus interface 123, the USB logical device 125 and the functional element 127 comprise the USB stack of the device 120. In operation, the bus interface 123 is responsible for physical transmission of data (i.e., transmission/reception of packets over the link with a host), and the USB logical device 125 is responsible for routing the packets between the bus interface 123 and the individual endpoints on the device 120. The functional element 127 represents the actual functionality provided by the device 120 (e.g., a digital camera). [0039] The functional element 127 includes resources 128. The resources 128 include, without limitation, a memory 131, a processor 133 (or controller) and a battery 135 serving as the power source for the device 120. Those skilled in the art will appreciate from the present disclosure that the resources 128 may be shared by other components included on the device such as the bus interface 123 and logical device 125, and that the resources 128 are merely illustrated within the functional element 127 as one possible implementation. [0040] In operation, the bus interface 123 sends an interrupt to the upper layers when a data transfer is completed successfully. The interrupt frequency is often implementation specific. For example, USB 2.0 operates on the micro-frame boundaries, where each micro-frame is 125 μsec long. Interrupts at the micro-frame boundaries usually lead to a high processing overhead. 
Hence most USB devices operate at lower interrupt frequencies, such as a milli-second (msec) boundary or the USB defined frame boundaries. The minimum interval between consecutive interrupts is often referred to as the minimum interrupt interval time. In response to an interrupt from the bus-controller, during a data phase, the logical device 125 processes any data available in the buffer 121 that might have been received from the host. This processing consumes resources on the device 120 including, but not limited to, processing power and memory. As noted above, the currently available USB standards do not provide a flow-control mechanism that provides an adequate solution to situations in which the resources 128 of the device 120 are constrained and/or limited for a relatively long duration. [0041] FIG. 3 is a simplified block diagram of an implementation of a peripheral device 120. The device 120 illustrated in FIG. 3 is similar to and adapted from the device 120 illustrated in FIG. 1. Accordingly, elements common to both devices share common reference indicia, and only differences between the devices are described herein for the sake of brevity. With reference to FIG. 3, the device 120 further includes a monitoring entity 150 and an interrupt processor 140. [0042] The monitoring entity 150 senses signals indicative of measurements corresponding to one or more of the resources 128. As described in further detail below, one or more of the signals are converted into a resource utilization value. The resource utilization value is in turn provided to the interrupt processor 140. The interrupt processor 140 adjusts how the processing in response to the interrupts generated by the bus interface 123 occurs. In some implementations, the interrupt processor 140 also adjusts how the processing of data in a buffer above a threshold level occurs. [0043] In one implementation, the total resource availability is quantized into N levels or bins. 
These levels can be addressed by the index n (where 1 ≤ n ≤ N). In other words, if r denotes the percent utilization of the resources under consideration, then level n corresponds to the cases where r falls in the range Th_MAX_(n-1) < r ≤ Th_MAX_n, where Th_MAX_n denotes the upper threshold of resource utilization for a particular bin denoted by index n. [0044] In one implementation, the levels denote progressively increasing levels of resource utilization. In other words, Th_MAX_(n-1) < Th_MAX_n. However, those skilled in the art will appreciate from the present disclosure that numerous other relationships can be defined amongst the levels, including that the levels denote progressively decreasing levels of resource utilization. [0045] In one implementation, each level is assigned a minimum interrupt interval time value that may be used to gate the interrupts produced by the bus interface 123. For example, in operation, when the resource utilization crosses a particular threshold, the device 120 may have to reduce resource consumption. Beyond this threshold, the device can autonomously opt to conserve resources by adaptively switching to a lower frequency of interrupts from the bus interface 123. This can be accomplished by dynamically changing the minimum interrupt interval time value to the value corresponding to the bin n that is chosen based on the current level of resource utilization. This process is further described with reference to FIG. 6. [0046] As the interrupt frequency changes, the device 120 achieves gains from aggregation at the cost of latencies as the resource utilization increases. On the other hand, as the resource utilization level decreases, the device 120 increases the interrupt frequency back up to nominal levels. This allows the device 120 to reduce latencies under normal operating conditions.
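The level-selection scheme of paragraphs [0043]-[0045] can be sketched in a few lines. The threshold and interval values below are illustrative assumptions (the disclosure leaves them implementation specific), as are the function names.

```python
# Hypothetical sketch of the bin/level selection described above: resource
# utilization r (in percent) is mapped to the bin n whose upper threshold
# Th_MAX_n is the first to meet or exceed r, and each bin carries a minimum
# interrupt interval time value. All numeric values are illustrative.

# Upper thresholds Th_MAX_n for N = 4 bins (percent utilization).
THRESHOLDS = [25.0, 50.0, 75.0, 100.0]
# Minimum interrupt interval (microseconds) per bin; higher-utilization bins
# use longer intervals, i.e. a lower interrupt frequency.
MIN_INTERVALS_US = [125, 250, 1000, 4000]

def select_bin(r):
    """Return the 1-based bin index n with Th_MAX_(n-1) < r <= Th_MAX_n."""
    for n, th_max in enumerate(THRESHOLDS, start=1):
        if r <= th_max:
            return n
    return len(THRESHOLDS)  # clamp to the highest bin

def min_interrupt_interval_us(r):
    """Return the minimum interrupt interval time for utilization r."""
    return MIN_INTERVALS_US[select_bin(r) - 1]
```

At low utilization the device keeps the nominal 125 μsec (micro-frame) interval; as utilization climbs into higher bins, the interval grows and interrupts are aggregated.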
[0047] Further, the minimum interrupt interval value for each level can be chosen such that the buffer 121 does not overflow at the determined level of resource utilization given the incoming data-rate. The minimum interrupt interval values for the respective levels are dependent on a particular implementation because a device 120 can have a wide variety of functions. [0048] In another implementation, each level is assigned a probability value of processing in response to a received event-signal, such as an interrupt and/or the amount of data in a buffer breaching a threshold. In turn, the interrupt service routine may process the interrupts that signal data-transfers with a probability of P. In other words, with a probability of (1-P), an interrupt denoting the end of a data-transfer may be dropped autonomously by the device 120, without any loss of data. A more detailed example of this process is discussed below with reference to FIG. 7. [0049] Again, because a device 120 can have a wide variety of functions, the value of P at each level is dependent on a particular implementation, including the size of the available hardware buffers, in order to avoid any possible data loss due to buffer overflows. Further, in some implementations the value of P also factors in the current level of resource utilization and the minimum interrupt interval time value used in the device. [0050] Further, in another implementation, an interrupt is automatically serviced in order to reduce and/or prevent data loss due to buffer overflow if M consecutive previously received interrupts have not been serviced. In other words, after M consecutive un-serviced interrupts there is a risk that the device buffers will overflow. In order to reduce this risk, the next interrupt is automatically serviced. The probability of hitting M consecutive un-serviced interrupts is dependent on the probability value P for a given level.
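A minimal sketch of the probabilistic servicing of paragraphs [0048]-[0050], assuming a uniform random draw against P and a forced-service guard after M consecutive skips; the class and method names are hypothetical, not from the disclosure.

```python
import random

class ProbabilisticInterruptFilter:
    """Illustrative sketch: each data-transfer interrupt is serviced with
    probability p, except that an interrupt is always serviced after m
    consecutive skips so the device buffers cannot go un-drained
    indefinitely (the M-consecutive guard described above)."""

    def __init__(self, p, m, rng=None):
        self.p = p          # probability of servicing an interrupt
        self.m = m          # force service after m consecutive skips
        self.skipped = 0    # consecutive un-serviced interrupts so far
        self.rng = rng or random.Random()

    def on_interrupt(self):
        """Return True if this interrupt should be serviced."""
        if self.skipped >= self.m or self.rng.random() < self.p:
            self.skipped = 0
            return True
        self.skipped += 1
        return False
```

Even in the degenerate case p = 0, the guard ensures every (m+1)-th interrupt is still serviced, bounding the worst-case drain interval.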
[0051] The various strategies disclosed herein can be implemented in software, hardware, firmware or a combination thereof. FIG. 4 is a simplified block diagram of a software implementation in the device 120 shown in FIG. 3. The device 120 illustrated in FIG. 4 is similar to and adapted from the device 120 illustrated in FIG. 3. Accordingly, elements common to both devices share common reference indicia, and only differences between the devices are described herein for the sake of brevity. [0052] With reference to FIG. 4, the device 120 includes a hardware component 160 that produces hardware interrupts. In one implementation, the hardware component 160 includes at least a portion of the bus interface 123 (shown in FIG. 3). The device 120 also includes two software modules. The first is an interrupt manager module 141 and the second is an interrupt processing software module 142. [0053] In operation, the interrupt manager 141 receives a resource utilization value from the monitoring entity. In response, the interrupt manager 141 selects one of a minimum interrupt interval time value and a probability value in accordance with one of the implementations discussed above, and provides the value to the interrupt processing software 142. In response, the interrupt processing software 142 either gates the hardware interrupt using a new minimum interrupt interval time value or determines whether to drop the hardware interrupt using the probability value. [0054] FIG. 5 is a simplified block diagram of a portion of another alternative implementation of the device 120 shown in FIG. 3. The device 120 illustrated in FIG. 5 is similar to and adapted from the device 120 illustrated in FIG. 3. Accordingly, elements common to both devices share common reference indicia, and only differences between the devices are described herein for the sake of brevity. [0055] With reference to FIG. 5, the device 120 includes a hardware component 160 that produces hardware interrupts.
In one implementation, the hardware component 160 includes at least a portion of the bus interface 123 (shown in FIG. 3). The device 120 also includes a hardware implemented interrupt manager 141. [0056] In operation, the interrupt manager 141 receives a resource utilization value from the monitoring entity. In response, the interrupt manager 141 selects one of a minimum interrupt interval time value and a probability value in accordance with one of the implementations discussed above, and provides the value to the hardware module 160. In response, the hardware module 160 either gates the hardware interrupt using a new minimum interrupt interval time value or determines whether to drop the hardware interrupt using the probability value. [0057] FIG. 6 is a flowchart of an implementation of a method. As represented by block 6-1, the method includes determining two or more discrete levels (or bins) of resource utilization based on the total resource availability within a device. As represented by block 6-2, the method includes one of determining and setting a respective minimum interrupt interval time value for each of the discrete levels of resource utilization. As represented by block 6-3, the method includes receiving resource measurements concerning one or more resources included on the device. As represented by block 6-4, the method includes determining a resource utilization value based on the resource measurements. As represented by block 6-5, the method includes selecting a minimum interrupt interval time value based on the resource utilization value by mapping the resource utilization value to one of the predetermined resource utilization levels. As represented by block 6-6, the method includes setting a timer to gate interrupts using the selected minimum interrupt interval time value. [0058] FIG. 7 is a flowchart of an implementation of a method.
As represented by block 7-1, the method includes determining two or more discrete levels (or bins) of resource utilization based on the total resource availability within a device. As represented by block 7-2, the method includes one of determining and setting a respective probability value for each of the discrete levels of resource utilization. As represented by block 7-3, the method includes receiving an interrupt from a data link interface, such as for example a bus interface. As represented by block 7-4, the method includes receiving resource measurements concerning one or more resources included on the device. As represented by block 7-5, the method includes determining a resource utilization value based on the resource measurements. As represented by block 7-6, the method includes selecting a probability value based on the resource utilization value by mapping the resource utilization value to one of the predetermined resource utilization levels. [0059] As represented by block 7-7, the method includes determining whether to skip the processing associated with the received interrupt based on the probability value. In one example implementation, a number is randomly generated and compared to the probability value. If the randomly generated number is greater than the probability value, the associated processing occurs in response to the received interrupt. On the other hand, if the randomly generated number is less than the probability value, the associated processing is skipped. With further reference to block 7-7, if it is determined that the associated processing should not be skipped based on the probability value (No path from 7-7), the method includes proceeding to the portion of the method represented by block 7-10. On the other hand, if it is determined that the associated processing should be skipped based on the probability value (Yes path from 7-7), the method includes proceeding to the portion of the method represented by block 7-8.
[0060] With reference to block 7-8, the method includes determining whether the previous M received interrupts have been skipped. If the previous M interrupts have been skipped (Yes path from 7-8), as represented by block 7-10, the method includes performing the processing associated with the received interrupt before returning to the portion of the method represented by block 7-3. On the other hand, if the previous M interrupts have not been skipped (No path from 7-8), as represented by block 7-9, the method includes skipping the processing associated with the received interrupt and returning to the portion of the method represented by block 7-3. [0061] With further reference to FIG. 3, there may be circumstances where the aforementioned methods insufficiently affect flow-control from the host 110. In such circumstances the device 120 may optionally send a request to the host 110 to lower the transmission data-rate. FIG. 8 is a signal diagram of an implementation of such a method. [0062] As represented by block 801, the device 120 determines whether to change an operating parameter based on a current level of resource utilization, in accordance with one of the examples discussed above. As represented by block 802, the device 120 attempts to compensate locally in accordance with one of the examples discussed above. As represented by block 803, the device determines whether the local compensation effort was successful. If the local compensation effort was successful (Yes path from 803), as represented by block 804 the device 120 does not need to take further action. On the other hand, if the local compensation effort was not sufficient (No path from 803), as represented by block 805 the device 120 changes one or more operating settings affecting the data link. As represented by signal 806, the device 120 transmits the new settings to the host 110. As represented by block 807, the host 110 reconfigures the data link.
As represented by signal 808, the host transmits data to the device 120 under the new settings. [0063] USB compliant devices may enumerate multiple alternate settings at the time of initialization, for each supported configuration. Each alternate setting may include parameters such as the supported data rates for each endpoint. [0064] As such, in one implementation, USB compliant devices support multiple alternate settings, each with a different set of maximum supported data rates for the individual endpoints. Further, if a device is resource constrained, the device may send a pre-determined, implementation specific signal to the host. For instance, this could be multiple stall tokens on the associated control pipe for a particular function, multiple back-to-back time-outs or even a custom implementation specific control token that can be interpreted by the host software. This signal could then be used to trigger a re-evaluation of the data-rates by the host software layer. In one implementation, if the device sends the pre-determined signal to the host to trigger flow-control, then in response the host software triggers a transition to an alternate setting that involves lower data-rates in order to help the device conserve resources. Additionally and/or alternatively, if a device eventually determines that the data rates corresponding to the newly renegotiated alternate settings are too high as well, then the device could repeat the flow-control signal to the host and the host could switch to a more conservative alternate setting. [0065] This approach can be implemented to reduce the generation of NAK tokens for an extended period of time. Also, since the device and the host can establish a mechanism for adjusting data rates based on available resources, the data rates can be adjusted based on the most recent device status, thereby ensuring that the device is not forced to expend additional power in order to flow-control the host.
In other words, the cost of flow-controlling the host is reduced, since flow-control is not performed per-microframe as described in the USB specifications, but is performed at a much lower frequency. [0066] The USB 2.0 standard partially addresses the problem of excessive link bandwidth/resource wastage due to flow-control for a particular function/endpoint, with the introduction of NAK limiting functionality using the aforementioned PING protocol. When the device flow-controls the host by responding with NAKs for OUT tokens, the host may poll the device status using PING packets. The device may then respond with ACK/NAK to these PING special tokens, based on the current device status. An ACK response to a PING indicates that the device can accept more data, while a NAK response to a PING indicates that the device cannot yet accept more data. [0067] While the PING tokens provide a useful mechanism to reduce the NAK bandwidth and resource wastage for OUT tokens, the inherent problem with this mechanism is that the USB standards do not impose any restrictions on the frequency of the PING tokens. For example, the USB 2.0 standard mandates that the device must be capable of handling PING tokens as frequently as at consecutive micro-frames, though the host may issue PING tokens at almost any frequency. [0068] To resolve this issue, in one implementation, the host follows an exponential back-off mechanism during repeated PING transmissions. When the host issues the first PING token, the host starts a timer at an initial value. If the device responds with a NAK, the host waits until the timer expires before issuing the next PING token. For every consecutive PING transaction, the value of the timer is increased by a multiplicative factor, until a certain maximum value is reached for PING backoffs. The maximum value can be determined based on the individual function and device characteristics and is thus implementation specific.
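The exponential back-off of paragraph [0068] can be sketched as follows; the initial value, multiplicative factor, and cap are illustrative, since the disclosure leaves them implementation specific.

```python
def ping_backoff_intervals(initial, factor, maximum, nak_count):
    """Illustrative sketch of the host-side exponential back-off described
    above: the wait before each successive PING grows by a multiplicative
    factor after every NAK response, capped at an implementation-specific
    maximum. Returns the sequence of waits (arbitrary time units)."""
    intervals = []
    wait = initial
    for _ in range(nak_count):
        intervals.append(wait)
        wait = min(wait * factor, maximum)  # grow, but never exceed the cap
    return intervals
```

For example, with an initial wait of 125, a factor of 2, and a cap of 1000, five consecutive NAKs yield waits of 125, 250, 500, 1000, 1000, so a persistently busy device is polled ever less frequently.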
[0069] It should be understood that any reference to an element herein using a designation such as "first," "second," and so forth does not generally limit the quantity or order of those elements. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise a set of elements may comprise one or more elements. [0070] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. [0071] Those of skill would further appreciate that any of the various illustrative logical blocks, modules, processors, means, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware (e.g., a digital implementation, an analog implementation, or a combination of the two, which may be designed using source coding or some other technique), various forms of program or design code incorporating instructions (which may be referred to herein, for convenience, as "software" or a "software module"), or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.
Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. [0072] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented within or performed by an integrated circuit (IC), an access terminal, or an access point. The IC may comprise a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, electrical components, optical components, mechanical components, or any combination thereof designed to perform the functions described herein, and may execute codes or instructions that reside within the IC, outside of the IC, or both. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [0073] It is understood that any specific order or hierarchy of steps in any disclosed process is an example of a sample approach. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the present disclosure.
The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented. [0074] The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. In summary, it should be appreciated that a computer-readable medium may be implemented in any suitable computer-program product.
[0075] The above description is provided to enable any person skilled in the art to make or use embodiments within the scope of the appended claims. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Embodiments include epitaxial semiconductor stacks for reduced defect densities in III-N device layers grown over non-III-N substrates, such as silicon substrates. In embodiments, a metamorphic buffer includes an AlxIn1-xN layer lattice matched to an overlying GaN device layer to reduce thermal mismatch induced defects. Such crystalline epitaxial semiconductor stacks may be device layers for HEMT or LED fabrication, for example. System on Chip (SoC) solutions integrating an RFIC with a PMIC using a transistor technology based on group III-nitrides (III-N) capable of achieving high Ft and also sufficiently high breakdown voltage (BV) to implement high voltage and/or high power circuits may be provided on the semiconductor stacks in a first area of the silicon substrate while silicon-based CMOS circuitry is provided in a second area of the substrate.
CLAIMS What is claimed is: 1. A semiconductor material stack, comprising: a silicon substrate; a group III-N device layer disposed over the silicon substrate; and a buffer disposed between the silicon substrate and the group III-N device layer, wherein the buffer includes an AlxIn1-xN layer, with x being less than unity. 2. The material stack of claim 1, wherein the AlxIn1-xN layer is lattice matched to the group III-N device layer and is in direct contact with the group III-N device layer. 3. The material stack of claim 2, wherein the group III-N device layer is GaN, wherein the top barrier comprises at least one of AlzGa1-zN, AlwIn1-wN, or AlN, and wherein x is between 0.80 and 0.84, and wherein the silicon substrate has a (100), (110), or (111) crystal orientation. 4. The material stack of claim 3, wherein the silicon substrate has (100) orientation and is offcut to between 4° and 8° toward the [110] direction. 5. The material stack of claim 2, wherein the AlxIn1-xN layer has a thickness that is between 1.5 and 10 times greater than the group III-N device layer. 6. The material stack of claim 2, wherein the buffer includes a superlattice comprising a plurality of AlxIn1-xN layers and group III-N layers. 7. The material stack of claim 1, wherein the buffer further comprises an AlN nucleation layer disposed between the AlxIn1-xN layer and the silicon substrate. 8. The material stack of claim 7, wherein the buffer further comprises an AlyIn1-yN transition layer disposed between the AlN nucleation layer and the AlxIn1-xN layer, wherein y>x. 9. The material stack of claim 8, wherein y is graded, decreasing from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer. 10. The material stack of claim 8, wherein the AlxIn1-xN layer comprises between 50% and 99% of the total thickness of the buffer. 11. 
A high electron mobility transistor (HEMT), comprising: a gate electrode disposed between a source contact and a drain contact; a gate dielectric disposed below the gate electrode; a group III-N channel layer disposed below the gate dielectric; a bottom barrier disposed below the channel layer, wherein the bottom barrier comprises an AlxIn1-xN layer lattice matched to the channel layer; and a silicon substrate disposed below the bottom barrier with the AlxIn1-xN layer disposed over a (100) or (111) crystal plane of the substrate. 12. The HEMT of claim 11, further comprising a top barrier layer having a first thickness between the gate electrode and the channel layer and a second, greater thickness, between the source contact and drain contact disposed on either side of the gate electrode, wherein the top barrier layer comprises at least one of AlzGa1-zN, AlwIn1-wN, or AlN. 13. The HEMT of claim 12, wherein the group III-N channel layer comprises a GaN layer having a thickness between 10nm and 200nm, wherein the AlxIn1-xN layer has a thickness that is between 400nm and 2μm, and wherein x is between 0.80 and 0.84; wherein an AlN nucleation layer is disposed between the AlxIn1-xN layer and the silicon substrate; and wherein the AlxIn1-xN layer is disposed on an AlyIn1-yN transition layer disposed over the AlN nucleation layer, wherein y is graded from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer. 14. The HEMT of claim 11, wherein the channel layer is undoped within a region disposed below a gate electrode and the first thickness of the top barrier layer induces charge to form a two dimensional electron gas (2DEG) within the channel layer only when the gate electrode is at a threshold voltage (Vt) greater than 0V. 
15. A mobile computing device, comprising: a touchscreen; a battery; an antenna; a DC-to-DC converter coupled to the battery; and a wireless transmitter further including a power amplifier (PA), wherein at least one of the DC-to-DC converter and the PA comprises the HEMT as in claim 11. 16. The mobile computing device of claim 15 where the DC-to-DC converter comprises a first HEMT as in claim 11, and the PA employs a second HEMT as in claim 11. 17. A method of forming a high electron mobility transistor, the method comprising: forming a sacrificial gate structure over a stack of semiconductor material layers disposed on a crystalline silicon substrate, the stack comprising a group III-N semiconductor channel layer disposed on a lattice matched AlxIn1-xN layer that has a thickness greater than the channel layer; forming a source and a drain region on opposite sides of the sacrificial gate structure; removing the sacrificial gate structure to expose a surface of the epitaxially grown stack; forming a gate dielectric layer on the exposed surface of the epitaxially grown stack with an atomic layer deposition process; and forming a gate electrode on the gate dielectric layer. 18. The method of claim 17, wherein the method further comprises forming the stack of semiconductor material layers by: epitaxially growing a graded AlyIn1-yN transition layer over an AlN nucleation layer disposed on the substrate; epitaxially growing the AlxIn1-xN layer over the AlyIn1-yN transition layer, wherein y is graded from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer; epitaxially growing the group III-N semiconductor channel consisting essentially of GaN over the AlxIn1-xN layer; and epitaxially growing a top barrier layer comprising a ternary group III-nitride over the channel layer. 19. 
The method of claim 17, wherein the graded AlyIn1-yN transition layer is grown directly on the AlN nucleation layer to a thickness between 50nm and 100nm, wherein the AlxIn1-xN layer is grown directly on the AlyIn1-yN transition layer to a thickness between 300nm and 2μm, and wherein the channel layer is grown directly on the AlxIn1-xN layer to a thickness between 10nm and 200nm. 20. The method of claim 19, wherein the stack of semiconductor material layers is disposed on a (100) surface of the substrate offcut to between 4° and 8° toward the [110] direction; and wherein the ternary group III-nitride is selected from the group consisting of: AlxGa1-xN, AlwIn1-wN, and InzGa1-zN.
EPITAXIAL BUFFER LAYERS FOR GROUP III-N TRANSISTORS ON SILICON SUBSTRATES TECHNICAL FIELD Embodiments of the present invention generally relate to microelectronic devices and manufacture, and more particularly to group III-N transistor architecture and design. BACKGROUND The mobile computing (e.g., smart phone and tablet) markets benefit from smaller component form factors and lower power consumption. Because current platform solutions for smart phones and tablets rely on multiple packaged integrated circuits (ICs) mounted onto a circuit board, further scaling to smaller and more power efficient form factors is limited. For example, a smart phone will include a separate power management IC (PMIC), radio frequency IC (RFIC), and WiFi/Bluetooth/GPS IC, in addition to a separate logic processor IC. System on Chip (SoC) architectures offer the advantage of scaling which cannot be matched by board-level component integration. While the logic processor IC may itself be considered a system on a chip (SoC) integrating both memory and logic functions, more extensive SoC solutions for mobile computing platforms have remained elusive because the PMIC and RFIC operate with two or more of high voltage, high power, and high frequency. As such, conventional mobile computing platforms typically utilize incompatible transistor technologies that are specifically tailored for the different functions performed by the PMIC and RFIC. For example, laterally diffused silicon MOS (LDMOS) technology is typically employed in the PMIC to manage voltage conversion and power distribution (battery voltage regulation including step-up and/or step-down voltage conversion, etc.). Group III-V compound semiconductors, such as GaAs heterojunction bipolar transistors (HBTs), are typically utilized in the RFIC to generate sufficient power amplification at GHz carrier frequencies. 
Conventional silicon field effect transistors implementing CMOS technology then entail a third transistor technology utilized for logic and control functions within the mobile computing platform. In addition to fundamental semiconductor material incompatibilities between the various ICs in the mobile computing platform, transistor design for DC-to-DC conversion switches in the PMIC has been generally incompatible with the transistor design for high frequency power amplifiers in the RFIC. For example, the relatively low breakdown voltage of silicon requires source-to-drain separation in a DC-to-DC converter switch to be vastly larger than is permissible for a power amplifier transistor needing an Ft exceeding 20 GHz, and possibly up to 500 GHz, depending on the carrier frequency (e.g., WPAN is 60 GHz and so transistors need an Ft many times 60 GHz). Such different transistor-level design requirements render the fabrication processes for the various transistor designs distinct and difficult to integrate into a single process. Therefore, while an SoC solution for the mobile computing space that would integrate PMIC and RFIC functions is attractive for improving scalability, lowering costs, and improving platform power efficiency, one barrier to an SoC solution is the lack of a scalable transistor technology having both sufficient speed (i.e., sufficiently high gain cutoff frequency, Ft), and sufficiently high breakdown voltage (BV). Group III-nitride (III-N) devices offer a promising avenue for integration of PMIC and RFIC functions with CMOS as both high BV and Ft can be obtained. However, heteroepitaxy of III-N material stacks on silicon substrates poses a technical challenge for at least the reasons of significant lattice mismatch and thermal mismatch, both of which can lead to high defect densities and poor device performance. 
Techniques and epitaxial semiconductor stack architectures which can provide reduced defect densities in device layers are therefore advantageous.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention are illustrated by way of example, and not by way of limitation, and can be more fully understood with reference to the following detailed description when considered in connection with the figures, in which:

Figure 1A illustrates a cross-section of a semiconductor stack in which a high electron mobility transistor may be formed, in accordance with embodiments;

Figure 1B illustrates a cross-section of a semiconductor stack in which a high electron mobility transistor may be formed, in accordance with embodiments;

Figure 2A illustrates a cross-section of a recessed gate group III-N transistor with epitaxially grown raised source/drain regions, in accordance with an embodiment;

Figure 2B illustrates band diagrams for regions of the transistor comparing bottom barriers of AlyGa1-yN to those of AlxIn1-xN, in accordance with embodiments of the present invention;

Figure 3 is a functional block diagram of a group III-N SoC implementation of a mobile computing platform, in accordance with an embodiment of the present invention; and

Figure 4 is a flow diagram illustrating a method of fabricating a non-planar high voltage transistor, in accordance with embodiments.

DETAILED DESCRIPTION

In the following description, numerous details are set forth; however, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known methods and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present invention. Reference throughout this specification to "an embodiment" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention.
Thus, the appearances of the phrase "in an embodiment" in various places throughout this specification are not necessarily referring to the same embodiment of the invention. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the two embodiments are not mutually exclusive. The terms "coupled" and "connected," along with their derivatives, may be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause and effect relationship). The terms "over," "under," "between," and "on" as used herein refer to a relative position of one material layer with respect to other layers. As such, for example, one layer disposed over or under another layer may be directly in contact with the other layer or may have one or more intervening layers. Moreover, one layer disposed between two layers may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first layer "on" a second layer is in direct contact with that second layer. Described herein are embodiments of epitaxial semiconductor stacks for reduced defect densities in III-N device layers grown over non-III-N substrates, such as silicon substrates.
In embodiments, a metamorphic buffer includes an AlxIn1-xN layer lattice matched to overlying device layers, such as GaN, for reduced thermal mismatch induced defects in the device layers. Such crystalline epitaxial semiconductor stacks may be used to provide device layers for HEMT or LED fabrication, for example. In embodiments, group III-nitride (III-N) semiconductor stacks and high electron mobility transistors formed thereon are employed in SoC solutions integrating an RFIC with a PMIC to implement high voltage and/or high power circuits. With epitaxial stack embodiments described herein, SoC solutions may deliver the product specific electrical current and power requirements needed for a mobile computing platform. The fast switching, high voltage transistors are capable of handling high input voltage swings and providing high power added efficiencies at RF frequencies. In embodiments, the III-N semiconductor stack and transistor architecture is amenable to monolithic integration with group IV transistor architectures, such as planar and non-planar silicon CMOS transistor technologies. In particular embodiments, group III-N transistors are employed in SoC architectures integrating high power wireless data transmission and/or high voltage power management functions with low power CMOS logic data processing. High frequency operation suitable for broadband wireless data transmission applications is possible while the use of large bandgap III-N materials also provides a high BV such that sufficient RF output power may be generated for the wireless data transmission applications. This combination of high Ft/Fmax and high voltage capability also makes possible the use of the transistors described herein for high speed switching applications in DC-to-DC converters utilizing inductive elements of reduced size.
As both the power amplification and DC-to-DC switching applications are key functional blocks in smart phones, tablets, and other mobile platforms, the structures described herein may be utilized in a SoC solution for such devices. Figure 1A illustrates a cross-section of a III-N semiconductor stack 101 in which a high electron mobility transistor (HEMT) may be formed, in accordance with embodiments. At the base of the stack 101 is a substrate 100. Generally, the substrate 100 is a non-III-N material such that the stack 101 includes metamorphic epitaxial layers. In the exemplary embodiment, the substrate 100 is crystalline silicon (e.g., substantially monocrystalline). In first silicon substrate embodiments, the substrate 100 is (100) silicon (i.e., having a (100) top surface upon which overlying epitaxial layers are disposed). (100) crystal orientations are advantageous for the formation of silicon transistors (e.g., in other regions not covered by III-N epitaxial layers) and are therefore ideal for embodiments where a group III-N transistor formed in the stack 101 is to be monolithically integrated with silicon CMOS transistor technology. In a particular (100) silicon substrate embodiment, the substrate 100 has a vicinal surface, for example prepared by off-cutting the substrate from an ingot grown to provide wafer slices having (100) surfaces. The (100) substrate surface is offcut at an angle between 4° and 8° (e.g., 6°) towards the [110] direction to produce a surface having terraces that include a surface having a (100) crystal plane. The surface area of a (100) plane associated with each terrace depends on the specific offcut angle, with a greater angle producing a greater number of terraces with each terrace having lesser (100) surface area.
In such embodiments, the offcut produces a vicinal surface having an array of (100) terraces, many of which are separated by a double atomic step with a height of two silicon atoms, which can be useful in avoiding the formation of anti-phase domains (APDs) within the stack 101. In second silicon substrate embodiments, the substrate 100 is (110) silicon. In certain (110) embodiments, the (110) substrate surface is offcut at an angle between 4° and 8° (e.g., 6°) to produce a surface having terraces that include a surface having a (110) crystal plane separated by a double atomic step with a height of two silicon atoms. In third silicon substrate embodiments, the substrate 100 is (111) silicon (i.e., having a (111) top surface upon which overlying epitaxial layers are disposed). (111) crystal orientations are advantageous for III-N epitaxial growths because the lattice mismatch is considerably less (approximately 16%, while (100) silicon orientations have approximately 42% mismatch). Generally, for (111) silicon embodiments, no offcut need be provided. Although the exemplary (100), (110), and (111) silicon embodiments entail substrates consisting essentially of silicon (i.e., some trace level impurities not detrimental to III-N and/or silicon CMOS device function are permissible), it is noted that other substrates with similarly mismatched lattice constants may also benefit from the epitaxial stack architectures described herein, such as, but not limited to, substrates including germanium (Ge), which may be alloyed with silicon, or in a pure form. In embodiments, an epitaxial semiconductor stack includes at least one III-N device layer. In the exemplary embodiment illustrated in Figure 1A, the stack 101 may be referred to as a metamorphic epitaxial stack and is suitable for the formation of a HEMT, where at least the channel layer 107 and the top barrier layer 109 represent device layers.
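The mismatch percentages cited above can be recovered from room-temperature lattice constants; the following is an illustrative estimate only, using commonly tabulated values that are not taken from this disclosure:

```latex
% Assumed tabulated values: a_{Si} \approx 5.431~\text{\AA},
% a_{GaN} \approx 3.189~\text{\AA}; on (111) Si the relevant
% surface atom spacing is a_{Si}/\sqrt{2} \approx 3.840~\text{\AA}.
\begin{align*}
f_{(100)} &= \frac{a_{Si} - a_{GaN}}{a_{Si}}
           = \frac{5.431 - 3.189}{5.431} \approx 41\% \\
f_{(111)} &= \frac{a_{Si}/\sqrt{2} - a_{GaN}}{a_{Si}/\sqrt{2}}
           = \frac{3.840 - 3.189}{3.840} \approx 17\%
\end{align*}
```

The exact percentage depends on which lattice parameter is placed in the denominator, so these estimates are consistent with the approximately 42% and 16% figures cited above.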
The channel layer 107 is substantially single crystalline and, although it is referred to herein as "monocrystalline," one of ordinary skill will appreciate that a low level of crystal defects may nevertheless be present as artifacts of an imperfect epitaxial growth process. Within the channel layer 107, there is a crystalline arrangement of a first semiconductor material including one or more group III elements and nitrogen. Generally, the group III-nitride semiconductor in the channel layer 107 should have relatively high carrier mobility and therefore, in embodiments, the channel layer 107 is substantially undoped group III-nitride material (i.e., impurity concentration minimized) for minimal impurity scattering. In the exemplary embodiment, the channel layer 107 is GaN. However, the channel layer 107 may also be one or more ternary alloys of GaN, such as AlGaN or AlInN, or a quaternary alloy of GaN including at least one group III element and nitrogen, such as InxAlyGa1-x-yN. In the exemplary GaN embodiment, the channel layer 107 is between 10 nm and 200 nm in thickness. With the buffer described further elsewhere herein, the GaN channel layer 107 may be in the upper end of this thickness range, and beyond, without generation of defects as the thickness increases because the channel layer 107 is to be lattice matched to at least the buffer layer 106. The advantage of lattice matching the channel layer 107 with the buffer layer 106 is also relevant in other epitaxial stack embodiments suitable for a light emitting diode (LED) or laser integrated onto a silicon substrate, in which case a device layer may comprise many quantum well layers, p-type and n-type contact layers, and one or more distributed Bragg structures, requiring significant total device layer thickness. Disposed over the channel layer 107 is a cap or barrier layer (top barrier layer 109).
Generally, any group III-N material may be utilized for the barrier layer 109, as dependent on the material selected for the channel layer 107, such that the barrier layer 109 has a larger bandgap than that of the channel layer 107. Preferably, the top barrier layer 109 is substantially monocrystalline (i.e., having a thickness below the critical thickness for the given composition or lattice matched to the group III-N material utilized in the channel layer 107). In the exemplary embodiment, the barrier layer 109 includes a second group III-N material layer having the same crystallinity as that of the channel layer 107 to form a heterointerface. In a first exemplary embodiment where the channel layer 107 is GaN, the top barrier layer 109 is AlzGa1-zN, AlwIn1-wN, or AlN. One exemplary top barrier layer 109 has 18% In. In embodiments, the barrier layer 109 has only an intrinsic impurity doping level (e.g., i-AlwIn1-wN). Quaternary alloys including at least one group III element and nitrogen, such as InxAlyGa1-x-yN, are also possible. The barrier layer 109 may further comprise any multilayer stack of group III-nitrides, for example, an AlwIn1-wN/AlN stack with the AlN layer of the stack adjacent to the channel layer 107 to serve as a mobility enhancing layer. Depending on the embodiment, the barrier layer 109 may range between 1 nm and 20 nm in thickness. In embodiments, a metamorphic epitaxial semiconductor stack includes an aluminum indium nitride ternary alloy (AlxIn1-xN) buffer layer disposed between a non-group III-N substrate and the group III-N device layer(s). Generally, for an AlxIn1-xN buffer layer(s), the mol. % is less than 100 (e.g., x < 1), although the exact concentration may vary through different layers of the buffer. Although AlxIn1-xN buffer layers present a number of advantages, of particular note is the relatively low epitaxial growth temperature of AlxIn1-xN.
Whether the growth is by MBE or MOCVD, MOVPE, etc., the growth temperature of AlxIn1-xN is on the order of 300°C lower than for many alternative III-N materials. For example, while AlxIn1-xN has a growth temperature generally between 750 and 800°C, AlGaN has a growth temperature of approximately 1050-1100°C. As such, the total thermal budget experienced during the growth of the stack 101 is advantageously reduced. Also, the thermal expansion coefficient of AlxIn1-xN buffer layers is more closely matched to that of silicon. Strain due to thermal mismatch is generally characterized as ε = ΔT(αsubstrate − αepi layer), where ΔT denotes the difference between growth temperature and ambient room temperature and α denotes the thermal expansion coefficients of the substrate and epitaxial layer grown. The thermal expansion coefficient of AlxIn1-xN is less than those of GaN (approximately 5.1×10^-6 K^-1) or AlGaN (>4×10^-6 K^-1), decreasing as the indium fraction increases, so that the net thermal mismatch between the buffer layer(s) and the substrate 100 may be reduced significantly relative to alternatives. The presence of one or more buffer layers of substantial thickness reduces thermal stress exerted by a silicon substrate 100 on overlying III-N device layers having greater thermal mismatch, such as the exemplary GaN channel layer 107. Reductions in thermal stress have been found to reduce defect density in the device layer(s) and surface crack formation in III-N epitaxial films deposited on silicon. In the exemplary embodiments where a buffer includes an AlxIn1-xN layer, the mol fractions within the buffer are such that there is an AlxIn1-xN layer lattice matched to an epitaxial device layer disposed over the buffer. The lattice matched layer is therefore distinguished from a buffer layer that induces strain in the device layers through pseudomorphic mechanisms (i.e., where a device layer strains to accommodate a non-native lattice constant).
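The thermal mismatch strain relation can be illustrated with rough magnitudes; the following sketch assumes a typical silicon expansion coefficient of about 2.6×10^-6 K^-1 and a growth-to-ambient temperature difference of about 1000 K, neither of which is specified in this disclosure:

```latex
% \varepsilon = \Delta T\,(\alpha_{substrate} - \alpha_{epi\,layer}), with assumed
% \alpha_{Si} \approx 2.6\times 10^{-6}~\text{K}^{-1},
% \alpha_{GaN} \approx 5.1\times 10^{-6}~\text{K}^{-1},
% \Delta T \approx 1000~\text{K} for a \sim 1050\,^{\circ}\text{C} growth:
\varepsilon \approx 1000 \times (2.6 - 5.1)\times 10^{-6} \approx -2.5\times 10^{-3}
```

The sign corresponds to tensile strain in the epitaxial film on cool-down. Lowering the growth temperature by roughly 300°C and reducing the coefficient difference, as an AlxIn1-xN buffer does, shrinks both factors in the product and hence the thermal strain.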
In the exemplary embodiment illustrated by Figure 1A where the epitaxial stack 101 includes a GaN channel layer 107, the buffer includes an AlxIn1-xN layer 106 with x between 0.80 and 0.84, an In percentage of approximately 18% being substantially lattice matched to the GaN channel layer 107. As shown in Figure 1A, the lattice matched AlxIn1-xN layer 106 is disposed immediately below the channel layer 107. In embodiments, the lattice matched layer 106 has only an intrinsic impurity doping level (e.g., i-AlxIn1-xN) and may be relatively thick to most effectively mitigate thermal stress exerted by a silicon substrate 100. Furthermore, with the lattice matched AlxIn1-xN layer 106 having an approximately 42% lattice mismatch with a (100) silicon substrate 100, the layer 106 is to be thick enough to fully relax and glide resulting dislocations laterally (e.g., toward a topographical feature, etc.). In embodiments therefore, the lattice matched AlxIn1-xN layer is between 50% and 99% of the total thickness of the buffer, with particular embodiments of the AlxIn1-xN layer 106 being between 300 nm and 2 μm, and preferably at least 1 μm for most HEMT applications; greater thickness generally offers lower defect densities but incurs the additional expense/time of longer growths. As such, the layer 106 can be expected to be between 1.5 and 10 times thicker than the channel layer for HEMT embodiments where a GaN channel layer 107 is between 10 nm and 200 nm. Figure 1B illustrates a cross-section of a semiconductor stack 102 in which an exemplary HEMT may also be formed, in accordance with embodiments. Generally, the stack 102 includes all the same epitaxial layers described for the stack 101, with like layers identified by the same reference numbers. Similarly, the stack 102 is disposed on the same (growth) substrate 100 as previously described in the context of Figure 1A.
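The x between 0.80 and 0.84 lattice matching condition can be checked with a linear (Vegard-type) interpolation between the binary lattice constants; the following is an illustrative sketch using commonly tabulated values that are not taken from this disclosure:

```latex
% Assumed tabulated values: a_{AlN} \approx 3.112~\text{\AA},
% a_{InN} \approx 3.545~\text{\AA}, a_{GaN} \approx 3.189~\text{\AA}.
% Setting the interpolated Al_xIn_{1-x}N lattice constant equal to GaN's:
x\,a_{AlN} + (1 - x)\,a_{InN} = a_{GaN}
\;\;\Rightarrow\;\;
x = \frac{a_{InN} - a_{GaN}}{a_{InN} - a_{AlN}}
  = \frac{3.545 - 3.189}{3.545 - 3.112} \approx 0.82
```

That is, approximately 18% indium, consistent with the x between 0.80 and 0.84 range given above.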
The stack 102, however, further includes a nucleation layer 104 and a transition layer 105 disposed between the nucleation layer 104 and the lattice matched layer 106. Functionally, the nucleation layer is to initiate the epitaxial growth of the III-N semiconductor materials comprising the stack 102, and while good results are possible for the stack 101 where the lattice matched layer 106 is formed directly on the substrate 100, addition of the nucleation layer may advantageously reduce APD occurrences, and/or further reduce defect density in the device layers (e.g., channel layer 107), and/or reduce total growth times, thermal budgets, etc. As the first III-N material layer of the stack 102, the nucleation layer 104 may be relatively thin, for example less than 100 nm in the z-dimension of Figure 1B. Thickness of the nucleation layer 104 may be dependent, at least in part, on whether the substrate surface is offcut, such that greater degrees of offcut are associated with greater thicknesses. Generally, the mobility of both the group III and group V species of the nucleation layer 104 are ideally sufficiently high that substantially random species motion can be effectively funneled in a direction dictated by the substrate terracing so as to avoid forming an APD in the polar epitaxial materials. In the exemplary embodiment, the nucleation layer 104 is aluminum nitride (AlN) grown to a thickness of between 50 nm and 100 nm. AlN embodiments have a lattice mismatch of approximately 43% to a (100) silicon plane. As further illustrated in Figure 1B, in addition to the lattice matched layer 106 the buffer further includes the transition layer 105 disposed over the nucleation layer 104.
While it is possible for one or more intermediate layers to intervene between the transition layer 105 and the nucleation layer 104, in the exemplary embodiment the transition layer 105 is disposed directly on, and in contact with, the nucleation layer and is in further direct contact with the AlxIn1-xN layer 106. The transition layer 105 may be considered a lower buffer layer and is to function as a transition from the composition of the nucleation layer to the composition of the AlxIn1-xN layer 106 disposed above the transition layer 105. Generally, the transition layer 105 is to be grown at a higher temperature than that used for the nucleation layer 104 (e.g., at the same temperature as the AlxIn1-xN layer 106). Also, during the formation of the transition layer 105, the flux rate can be relatively higher than for the nucleation layer 104 (or for initial growths of the lattice matched AlxIn1-xN layer 106 in embodiments where such a layer is grown directly on the substrate 100, as in Figure 1A) because of the presence of the polar nucleation layer 104. For embodiments where the nucleation layer 104 is AlN, the transition layer 105 comprises an AlyIn1-yN layer. Generally, the mol fraction y may be anything less than 1, and larger than x for the lattice matched layer 106. Therefore, in the exemplary embodiment where the channel layer 107 is GaN, and x is approximately 0.82 for the lattice matched AlxIn1-xN layer 106, y is greater than 0.82 within the transition layer 105. In further embodiments, the composition of the transition layer 105 is graded between the composition of the nucleation layer and the lattice matched layer 106. For example, in one such AlyIn1-yN embodiment, y decreases from approximately 1 nearest the nucleation layer toward approximately x nearest the lattice matched layer 106. The transition layer 105 is generally thinner than the layer 106, and may even be thinner than the nucleation layer 104.
As one example, 50 nm should be sufficient to transition from an AlN nucleation layer 104 to an 18% In AlxIn1-xN layer 106. In further embodiments, a buffer between a III-N device layer and a non-III-N substrate includes a super lattice comprising a plurality of AlxIn1-xN layers and group III-N layers. Notably, the AlxIn1-xN in the super lattice need not have the 18% In composition of the layer 106, but may have other compositions. In one embodiment, for example, the super lattice comprises AlInN and AlN layers. In another embodiment, the AlxIn1-xN composition in the super lattice is lattice matched with that of the group III-N device layer, with a super lattice of the two readily formed and the intervening layers still serving to mitigate thermal mismatch between the device layer and substrate. Figure 2A illustrates a cross-section of a recessed gate group III-N transistor 200, in accordance with an embodiment. Generally, the transistor 200 is a majority carrier (electron), gate voltage controlled device (i.e., a FET). The transistor 200 is planar and disposed on the epitaxial semiconductor stack 102. In the exemplary embodiment, the transistor 200 has no junctions formed by impurity dopant gradients. As such, disadvantages associated with dopant diffusion, scattering, and breakdown voltage degradation are avoided. Disposed over the epitaxial semiconductor stack 102 are heavily impurity doped (e.g., N+) contact layers 212. In the illustrative embodiment, a proper thickness of the top barrier layer 109, or a separate material disposed between the top barrier layer 109 and the channel layer 107, serves as a charge inducing layer to controllably supply carriers by inducing a sheet of charge, commonly referred to as a 2-D electron gas (e.g., 2DEG 211 in Figure 2A).
While embodiments may utilize the top barrier layer 109 as the only source of sheet charge, in other embodiments the presence of the compositionally distinct charge inducing layer enables a thinning of the top barrier layer 109 for threshold voltage tuning while ensuring a thin (e.g., >0.5 nm) wide bandgap material is at the surface of the channel layer 107 for reduced alloy scattering and high carrier mobility. As a result of the different polarizations of the materials utilized in the channel layer 107 and the top barrier layer 109 (or intervening charge inducing layer), a density of charge may be provided which can further be modulated through selection of a work function metal as the gate electrode 220 and/or control of the semiconductor thickness along the gate length (x-dimension). As such, performance characteristics of the transistor 200 depend on the materials chosen for the top barrier layer 109 and the gate electrode 220, and on the thickness of the semiconductor along the longitudinal transistor length disposed between the gate electrode 220 and the channel layer 107, demarked as the recessed gate region 225. In the exemplary embodiment, the channel layer 107 is GaN and the top barrier layer 109 is at least one of AlzGa1-zN, AlwIn1-wN, or AlN (e.g., with AlN being a charge inducing layer materially distinct from another material serving as part of the top barrier layer 109). In embodiments, the transistor 200 is operable in enhancement mode. Enhancement mode operation, where the transistor 200 has a threshold voltage (Vt) greater than 0 V, is important for power efficient switching in a PMIC, and efficient shut-down of a power amplifier in an RFIC during idle, for example. In an embodiment, the gate electrode 220 includes a large work function metal to increase the Vt.
A work function metal may be selected to obtain a desired threshold voltage (Vt) (e.g., greater than 0 V, etc.), with exemplary conductive gate materials including tungsten (W), aluminum (Al), titanium (Ti), tantalum (Ta), nickel (Ni), molybdenum (Mo), germanium (Ge), platinum (Pt), gold (Au), ruthenium (Ru), palladium (Pd), iridium (Ir), their alloys and silicides, and carbides, nitrides, phosphides, and carbonitrides thereof. The transistor 200 is a single recessed gate architecture with the top barrier layer 109 having only one recessed gate region 225. As such, the top barrier layer 109 has a first thickness between the gate electrode 220 and the channel layer 107 and a second thickness between the source or drain semiconductor 212 and the channel layer 107. Thinning of the top barrier layer 109 helps achieve enhancement mode because the spontaneous and piezoelectric polarization induced charges in the channel layer disposed below the gate electrode 220 can be depleted, increasing Vt. Depending on the embodiment, the first thickness may be 0%-50% of the second thickness (e.g., ranging from 0 to 2.5 nm). For embodiments without a work function gate metal, the top barrier layer 109 may need to be completely etched away to obtain a Vt > 0 V. Where a separate charge inducing layer is present, the recessed gate region 225 may have a top barrier thickness of 0%, to expose the charge inducing layer so it is the only source for carriers within the recess. In the exemplary embodiment where the channel layer 107 is undoped, a work function metal gate electrode and gate recess are employed to provide for enhancement mode operation.
In addition to being advantageous for low defect density device layers, the lattice matched AlxIn1-xN layer further functions as a more efficient back barrier to confine the 2DEG within the channel layer 107 because of the material's relatively greater polarization relative to alternatives, such as AlGaN, thereby improving short channel performance of the device considerably over alternative device stacks lacking the lattice matched buffer layer. More specifically, subthreshold slope and drain induced barrier lowering (DIBL) are reduced for the lattice matched AlxIn1-xN back barrier relative to AlGaN. Indeed, for an exemplary HEMT channel length (LG) of 20 nm having symmetrical source and drain (LGD = LGS = 40 nm), a 5 V VDS and -2 V VGS is expected to give a drain current of 1×10^-5 A/mm for an AlInN barrier, while for AlGaN it would be three orders of magnitude greater. Figure 2B illustrates band diagrams for regions of the transistor 200 comparing bottom barriers of AlyGa1-yN (where y is 0.08-0.10) to those of a lattice matched AlxIn1-xN, in accordance with embodiments of the present invention. As shown in the region highlighted by the dashed box, the large bandgap of AlxIn1-xN (approximately 4.9 eV) renders it a relatively more insulating buffer layer and reduces parallel conduction beneath the channel layer 107, which is particularly advantageous for high voltage devices. Of further note, if a metamorphic AlxIn1-xN buffer layer is absent (e.g., where an AlGaN buffer is utilized under a GaN channel layer), incorporation of an AlxIn1-xN bottom barrier, if similarly lattice matched to GaN, would further reduce the allowable thickness of the GaN channel layer, as the cumulative thickness of such a bottom barrier and channel layer would be limited to a given critical thickness.
Returning to Figure 2A, disposed on either side of the gate electrode 220 are a source 235 and drain 245 that include impurity doped (e.g., N+) semiconductor regions 212 electrically coupled to an ohmic contact metal 235A, 245A. The impurity doped semiconductor regions 212 may be any low bandgap group III-N material, such as InGaN and InN, for formation of low resistance contacts, or simply n-type GaN. Disposed between the top barrier layer 109 and the gate electrode 220 is a dielectric layer 230. The dielectric layer 230 electrically insulates the gate electrode 220 from the semiconductor stack 102 and may also isolate the gate electrode 220 from the source and drain 235, 245. In the embodiment illustrated in Figure 2A, the dielectric layer 230 serves as both a gate dielectric and a spacer dielectric, laterally separating the gate electrode 220 from the source and drain 235, 245. In the exemplary embodiment, the dielectric layer 230 is a self-aligned spacer structure enabling self-aligned, ultra-scaling of the source-to-drain spacing down to <100 nm to reduce the extrinsic resistance (Rext) of the transistor, leading to higher transconductance (Gm) or gain, and hence higher Ft. Dielectric spacers also enable scaling of the transistor channel length (Lg) to dimensions smaller than lithographically definable feature sizes. Dielectric materials such as silicon nitrides (SixN), silicon oxide (SiO2), alumina (Al2O3), and high-k dielectrics such as Gd2O3, HfO2, high-k silicates such as HfOSiO, TaSiO, AlSiO, and high-k oxynitrides such as HfON, SiON, AlON, ZrSiON, HfSiON, and group III-ON, are suitable for the dielectric layer 230. In embodiments, the dielectric layer 230 serves to passivate the interface between the gate electrode 220 and the top surface of the stack 102 to preserve high channel mobility and reduce gate leakage current. High quality passivation is achieved in one embodiment with an atomic layer deposited (ALD) dielectric layer 230.
Although not depicted, other HEMT embodiments include a double recessed gate group III-N transistor that includes the same semiconductor stack 102, gate electrode 220, and source and drain 235, 245 as described for the transistor 200. However, instead of the single recess 225 illustrated in Figure 2A, a double recessed HEMT embodiment includes the recess 225 and a second recessed region, so that the top barrier layer 109 has three thicknesses: a first between the channel layer 107 and the source and drain 235, 245; a second thickness between the channel layer 107 and the dielectric layer 230 (under the gate electrode 220); and a third thickness between the channel layer 107 and a spacer dielectric laterally separating the gate electrode 220 from the source and drain 235, 245. The third thickness is generally intermediate of the first and the second thicknesses. Relative to the transistor 200, a double-recessed embodiment has the advantage of preserving the 2DEG charge density under the spacer dielectric when the region disposed under the gate electrode 220 is depleted, thereby preserving low access resistance to the channel region under the gate electrode 220. While the transistor 200 is a planar device, in other embodiments a non-planar group III-N transistor is formed in the stack 101 or 102. Although not depicted, for non-planar transistor embodiments at least one of the semiconductor layers of an epitaxial semiconductor stack (e.g., 101 or 102) is formed into a non-planar semiconductor body having opposite sidewalls over which a gate dielectric layer, a gate electrode, and/or a non-planar source/drain is wrapped. A non-planar transistor may include all the functional features described for the exemplary planar transistor 200, with the materials and thicknesses of the semiconductor stack 101 or 102 being as previously described.
Depending on the crystal orientation of the group III-nitride stacks 101, 102, the 2DEG may be proximate to a top surface or a sidewall of a non-planar semiconductor body. As the GaN and other group III-nitrides described herein form the wurtzite structure, which is notable in that it is non-centrosymmetric, meaning that the crystal lacks inversion symmetry and more particularly that the {0001} planes are not equivalent, in one non-planar embodiment the wurtzite crystal orientation is such that the (0001) plane forms a top surface of the crystal and interfaces the lattice matched layer 106. For such an embodiment, the top barrier layer 109 and the layer 106 function as a charge inducing layer and a back barrier, respectively. In alternate non-planar HEMT embodiments, where the channel layer 107 is formed into a non-planar body, the overlying semiconductor layers of the epitaxial semiconductor stack 101 or 102 may then be grown on the top and sidewall surfaces. For such an embodiment, the crystal orientation may either be as above or such that the (100) plane forms a top surface of the crystal and interfaces with the lattice matched AlxIn1-xN layer 106. For such an embodiment, a barrier layer formed on sidewalls of the non-planar channel layer 107 causes the spontaneous polarization field, PSP, within the non-planar body to be directed away from a first sidewall toward a second sidewall. As such, the polarization of the non-planar group III-N transistor may be through a width or through a thickness of a non-planar semiconductor body of a non-planar HEMT embodiment. Figure 3 is a functional block diagram of a SoC implementation of a mobile computing platform, in accordance with an embodiment of the present invention. The mobile computing platform 700 may be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission.
For example, mobile computing platform 700 may be any of a tablet, a smart phone, a laptop computer, etc., and includes a display screen 705 that is, in the exemplary embodiment, a touchscreen (e.g., capacitive, inductive, resistive, etc.) permitting the receipt of user input, the SoC 710, and a battery 713. As illustrated, the greater the level of integration of the SoC 710, the more of the form factor within the mobile computing platform 700 that may be occupied by the battery 713 for the longest operative lifetime between chargings, or occupied by memory (not depicted), such as a solid state drive, for greatest functionality. Depending on its applications, mobile computing platform 700 may include other components including, but not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). The SoC 710 is further illustrated in the expanded view 721. Depending on the embodiment, the SoC 710 includes a portion of a substrate 100 (i.e., a chip) upon which two or more of a power management integrated circuit (PMIC) 715, an RF integrated circuit (RFIC) 725 including an RF transmitter and/or receiver, a controller 711 thereof, and one or more central processor cores 730, 731 are fabricated.
The RFIC 725 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The RFIC 725 may include a plurality of communication chips. For instance, a first communication chip may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. As will be appreciated by one of skill in the art, of these functionally distinct circuit modules, CMOS transistors are typically employed exclusively except in the PMIC 715 and RFIC 725. In embodiments of the present invention, the PMIC 715 and RFIC 725 employ one or more of the group III-nitride transistors as described herein (e.g., group III-nitride transistor 200) utilizing an embodiment of the epitaxial stacks described herein (e.g., stack 101 or 102). In further embodiments, the PMIC 715 and RFIC 725 employing the group III-nitride transistors described herein are integrated with one or more of the controller 711 and the processor cores 730, 731, which are provided in silicon CMOS technology and monolithically integrated with the PMIC 715 and/or RFIC 725 onto the (silicon) substrate 100. It will be appreciated that within the PMIC 715 and/or RFIC 725, the high voltage, high frequency capable group III-nitride transistors described herein need not be utilized to the exclusion of CMOS; rather, silicon CMOS may be further included in each of the PMIC 715 and RFIC 725. The group III-nitride transistors described herein may be specifically utilized where high voltage swings are present (e.g., 7-10 V battery power regulation, DC-to-DC conversion, etc. within the PMIC 715).
As illustrated, in the exemplary embodiment the PMIC 715 has an input coupled to the battery 713 and has an output providing a current supply to all the other functional modules in the SoC 710. In a further embodiment, where additional ICs are provided within the mobile computing platform 700 but off the SoC 710, the PMIC 715 output further provides a current supply to all these additional ICs off the SoC 710. With the reduced ON resistance available (e.g., through the symmetric Lgd/Lgs) and low access resistance (e.g., 2DEG 211 present in the spacer region within the channel layer 107), particular embodiments of the group III-nitride transistors described herein permit the PMIC to operate at higher frequencies (e.g., 50x those possible in LDMOS implementations). In certain such embodiments, inductive elements within the PMIC (e.g., buck-boost convertors, etc.) may be scaled to much smaller dimensions. Because such inductive elements in the PMIC account for 60-70% of chip area, embodiments of the PMIC implemented with the group III-nitride transistors described herein offer a significant shrink over other PMIC architectures. As further illustrated, in the exemplary embodiment the RFIC 725 has an output coupled to an antenna and may further have an input coupled to a communication module on the SoC 710, such as an RF analog and digital baseband module (not depicted). Alternatively, such communication modules may be provided on an IC off-chip from the SoC 710 and coupled into the SoC 710 for transmission. Depending on the group III-nitride materials utilized, the group III-nitride transistors described herein (e.g., transistor 200) may further provide the large power added efficiency (PAE) needed from a power amplifier transistor having an Ft of at least ten times the carrier frequency (e.g., 1.9 GHz in an RFIC 725 designed for 3G or GSM cellular communication).
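As a quick sanity check on the power amplifier requirement above, the minimum Ft implied by the ten-times rule for the quoted 1.9 GHz carrier can be computed. This is a sketch; the multiplier and carrier frequency are the only inputs, and both are taken from the text:

```python
# Minimum cutoff frequency (Ft) implied by the "at least ten times the
# carrier frequency" guideline quoted above, for the 1.9 GHz example.
CARRIER_HZ = 1.9e9   # 3G/GSM carrier frequency from the text
FT_MULTIPLE = 10     # rule-of-thumb multiplier from the text

min_ft_hz = FT_MULTIPLE * CARRIER_HZ
print(f"minimum Ft = {min_ft_hz / 1e9:.1f} GHz")  # -> minimum Ft = 19.0 GHz
```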
Figure 4 is a flow diagram illustrating a method 400 of fabricating the high voltage group III-nitride transistors described herein, in accordance with embodiments. While method 400 highlights certain operations, each of these operations may entail many more process sequences. Beginning at operation 401, a stack of monocrystalline semiconductor materials is grown using any standard metal organic chemical vapor deposition (MOCVD), molecular beam epitaxy (MBE), or metal organic vapor phase epitaxy (MOVPE) growth tools/techniques, or the like, with standard precursors, temperatures, etc. for a given film. In one embodiment, the entire semiconductor stack 101 or 102 (Figures 1A, 1B) is grown using such techniques. For example, to form the stack 102, an AlN nucleation layer 104 is grown on a (100) surface of a silicon substrate. Next, the growth temperature is changed to 750-800°C and In is introduced, for example at increasing amounts relative to Al, to form a graded AlyIn1-yN transition layer 105 until reaching an approximately 18% In composition, at which point the lattice matched AlxIn1-xN layer 106 is grown, for example to the thickness range described elsewhere herein. The growth temperature is then ramped up from the AlxIn1-xN growth temperature by approximately 300°C, for example to 1050°C, and precursors, etc., are changed appropriately for growth of the channel layer 107, for example GaN. Remaining at the higher temperature, a top barrier layer 109 of AlzGa1-zN is formed, and/or the growth temperature is reduced to form an AlN or AlwIn1-wN layer. In one embodiment, an in-situ n-type impurity doped source/drain layer may then be grown as a higher level device layer, or in an alternate embodiment (e.g., as illustrated by operation 410 in Figure 4, which is dashed as being optional), a regrowth process is performed subsequently in the fabrication process to form source/drain regions.
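The approximately 18% In composition cited above for lattice matching to GaN can be recovered from a linear (Vegard's law) interpolation of the AlN and InN a-axis lattice constants. This is an illustrative sketch; the lattice constant values are textbook figures, not values taken from this document:

```python
# Vegard's-law estimate of the AlxIn1-xN composition that lattice matches
# GaN. The a-axis lattice constants below are common literature values
# (assumptions, not from this document): AlN 3.112 A, InN 3.545 A, GaN 3.189 A.
A_ALN, A_INN, A_GAN = 3.112, 3.545, 3.189  # angstroms

def lattice_matched_x(a_aln=A_ALN, a_inn=A_INN, a_target=A_GAN):
    """Solve x*a_AlN + (1-x)*a_InN = a_target for the Al fraction x."""
    return (a_inn - a_target) / (a_inn - a_aln)

x = lattice_matched_x()
print(f"Al fraction x = {x:.3f}, In fraction = {1 - x:.3f}")
# -> Al fraction x = 0.822, In fraction = 0.178
```

The result falls inside the 0.80 to 0.84 range for x stated in the embodiments, consistent with the ~18% In composition targeted during growth.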
At operation 403, at least a portion of the epitaxial semiconductor stack 110 is etched with any plasma or wet chemical etch techniques known in the art for the particular materials epitaxially grown as part of the semiconductor stack 101 or 102. Referring further to Figure 2A, in certain embodiments operation 403 entails etching at least a portion of the top barrier layer 109 to form the recessed region 225. For embodiments where the semiconductor stack 101 includes a source/drain layer(s) disposed over the top barrier layer 109, the source/drain layer(s) are etched during operation 403. For embodiments where the source/drain is later formed by regrowth, the etch process at operation 403 merely entails etching a portion of the top barrier layer 109. For a non-planar transistor embodiment (not depicted), the epitaxial stack (e.g., 101 or 102) is etched into a semiconductor fin structure at operation 403. Proceeding with operation 405, a sacrificial gate is formed in the recessed region. A gate replacement process permits an epitaxial regrowth of source/drain regions (if desired), enables a gate electrode to be formed last with a work function metal (if desired), and enables double recessed gate architectures, etc. In an exemplary embodiment, the sacrificial gate includes CVD polysilicon, or silicon nitride/oxynitride, etc. The sacrificial gate may be laterally separated from the surrounding film (e.g., field dielectric, etched layers of the epitaxial stack) by a spacer structure. In certain embodiments, with the sacrificial gate and spacer structure serving as a mandrel protecting the channel region of the device stack, at operation 410 source and drain regions (e.g., 212 in Figure 2A) are regrown, for example on the top barrier layer 109. In one embodiment, a compositionally graded ternary alloy of GaN is epitaxially grown on the epitaxial stack not protected by the sacrificial gate.
In alternate embodiments of the method 400 in Figure 4 where the epitaxial stack includes source/drain regions, operation 410 is omitted. At operation 415, the sacrificial gate (stack) is removed to expose the epitaxial stack (e.g., 101 or 102). For a double recessed gate embodiment, the top barrier layer 109 is etched a second time to form a second recessed region that is narrower than the recess 225. In certain single recess embodiments, operation 415 entails etching at least a portion of the top barrier layer 109 a first time to form the recess 225 after removal of the sacrificial gate structure rather than before sacrificial gate formation. With the device layers of the epitaxial stack prepared, a gate dielectric layer is formed in the first or second recessed region. In embodiments, the gate dielectric layer is formed by depositing any of the dielectric materials described for the dielectric layer 230 (e.g., a high-K dielectric material) using an ALD technique known to be suitable for the particular dielectric material. A work function metal (e.g., any of those described in the context of the transistor 200) is then deposited on the gate dielectric layer and planarized to form the gate electrode 220. The device is then completed at operation 420, for example using conventional techniques to form ohmic contacts 235A, 245A and interconnect metallization (not depicted in Figure 2A). In further embodiments where CMOS transistors are also formed in the silicon substrate 100, one or more of the operations in method 400 may be concurrently or selectively performed (e.g., using conventional masking techniques) on silicon CMOS regions and HEMT regions of the substrate. Embodiments of a semiconductor material stack have therefore been described.
In embodiments, a semiconductor material stack includes a silicon substrate; a group III-N device layer disposed over the silicon substrate; and a buffer disposed between the silicon substrate and the group III-N device layer, wherein the buffer includes an AlxIn1-xN layer, with x being less than unity. In further embodiments, the AlxIn1-xN layer is lattice matched to the group III-N device layer and is in direct contact with the group III-N device layer. In further embodiments, the group III-N device layer is GaN, the top barrier comprises at least one of AlzGa1-zN, AlwIn1-wN, or AlN, x is between 0.80 and 0.84, and the silicon substrate has a (100), (110), or (111) crystal orientation. In further embodiments, the silicon substrate has (100) orientation and is offcut to between 4° and 8° toward the [110] direction. In further embodiments, the AlxIn1-xN layer has a thickness that is between 1.5 and 10 times greater than that of the group III-N device layer. In further embodiments, the buffer includes a superlattice comprising a plurality of AlxIn1-xN layers and group III-N layers. In further embodiments, the buffer further comprises an AlN nucleation layer disposed between the AlxIn1-xN layer and the silicon substrate. In further embodiments, the buffer further comprises an AlyIn1-yN transition layer disposed between the AlN nucleation layer and the AlxIn1-xN layer, wherein y>x. In further embodiments, y is graded, decreasing from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer. In further embodiments, the AlxIn1-xN layer comprises between 50% and 99% of the total thickness of the buffer.
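The 50-99% buffer-fraction constraint above can be illustrated with representative layer thicknesses. This is a sketch: the transition-layer and AlxIn1-xN values below fall within ranges stated elsewhere in this description, while the nucleation-layer thickness is purely an assumption for illustration:

```python
# Check that representative buffer-layer thicknesses satisfy the stated
# constraint that the AlxIn1-xN layer is 50-99% of the total buffer
# thickness. The 100 nm nucleation-layer value is an assumption; the
# other two values lie in ranges given in the text.
def alinn_fraction(alinn_nm, transition_nm, nucleation_nm):
    """Fraction of the buffer thickness contributed by the AlxIn1-xN layer."""
    total = alinn_nm + transition_nm + nucleation_nm
    return alinn_nm / total

frac = alinn_fraction(alinn_nm=1000, transition_nm=75, nucleation_nm=100)
print(f"AlxIn1-xN fraction of buffer: {frac:.2f}")  # -> 0.85
assert 0.50 <= frac <= 0.99  # constraint stated in the embodiments
```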
In embodiments, a high electron mobility transistor (HEMT) includes: a gate electrode disposed between a source contact and a drain contact; a gate dielectric disposed below the gate electrode; a group III-N channel layer disposed below the gate dielectric; a bottom barrier disposed below the channel layer, wherein the bottom barrier comprises an AlxIn1-xN layer lattice matched to the channel layer; and a silicon substrate disposed below the bottom barrier with the AlxIn1-xN layer disposed over a (100) or (111) crystal plane of the substrate. In further embodiments, the HEMT includes a top barrier layer having a first thickness between the gate electrode and the channel layer and a second, greater thickness between the source contact and drain contact disposed on either side of the gate electrode, wherein the top barrier layer comprises at least one of AlzGa1-zN, AlwIn1-wN, or AlN. In further embodiments, the group III-N channel layer comprises a GaN layer having a thickness between 10 nm and 200 nm, wherein the AlxIn1-xN layer has a thickness that is between 400 nm and 2 µm, and wherein x is between 0.80 and 0.84; an AlN nucleation layer is disposed between the AlxIn1-xN layer and the silicon substrate; and the AlxIn1-xN layer is disposed on an AlyIn1-yN transition layer disposed over the AlN nucleation layer, wherein y is graded from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer. In further embodiments, the channel layer is undoped within a region disposed below the gate electrode, and the first thickness of the top barrier layer induces charge to form a two dimensional electron gas (2DEG) within the channel layer only when the gate electrode is at a threshold voltage (Vt) greater than 0 V.
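The thickness ranges above can be checked against the separately stated relationship that the AlxIn1-xN buffer is 1.5 to 10 times thicker than the group III-N device (channel) layer. The particular 800 nm / 100 nm pairing below is an illustrative choice within the quoted ranges, not a value from the text:

```python
# Check one representative pairing of the quoted thickness ranges against
# the 1.5x-10x buffer-to-channel thickness relationship stated for some
# embodiments. The specific values are illustrative assumptions.
alinn_nm = 800    # within the 400 nm - 2 um AlxIn1-xN range quoted above
channel_nm = 100  # within the 10 nm - 200 nm GaN channel range quoted above

ratio = alinn_nm / channel_nm
print(f"AlxIn1-xN / channel thickness ratio: {ratio:.1f}x")  # -> 8.0x
assert 1.5 <= ratio <= 10.0  # relationship stated for some embodiments
```

Note that the ranges are not mutually binding: a 10 nm channel under a 2 µm buffer would exceed the 10x relationship, so the 1.5x-10x constraint applies only to the embodiments that recite it.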
In embodiments, a mobile computing device includes a touchscreen; a battery; an antenna; a DC-to-DC converter coupled to the battery; and a wireless transmitter further including a power amplifier (PA), wherein at least one of the DC-to-DC converter and the PA comprises the HEMT as described herein. In embodiments, the DC-to-DC converter comprises a first HEMT as described herein, and the PA employs a second HEMT as described herein. In embodiments, a method of forming a high electron mobility transistor includes: forming a sacrificial gate structure over a stack of semiconductor material layers disposed on a crystalline silicon substrate, the stack comprising a group III-N semiconductor channel layer disposed on a lattice matched AlxIn1-xN layer that has a thickness greater than that of the channel layer; forming a source and a drain region on opposite sides of the sacrificial gate structure; removing the sacrificial gate structure to expose a surface of the epitaxially grown stack; forming a gate dielectric layer on the exposed surface of the epitaxially grown stack with an atomic layer deposition process; and forming a gate electrode on the gate dielectric layer. In embodiments, the method further comprises forming the stack of semiconductor material layers by: epitaxially growing a graded AlyIn1-yN transition layer over an AlN nucleation layer disposed on the substrate; epitaxially growing the AlxIn1-xN layer over the AlyIn1-yN transition layer, wherein y is graded from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer; epitaxially growing the group III-N semiconductor channel consisting essentially of GaN over the AlxIn1-xN layer; and epitaxially growing a top barrier layer comprising a ternary group III-nitride over the channel layer.
In embodiments, the graded AlyIn1-yN transition layer is grown directly on the AlN nucleation layer to a thickness between 50 nm and 100 nm, wherein the AlxIn1-xN layer is grown directly on the AlyIn1-yN transition layer to a thickness between 300 nm and 2 µm, and wherein the channel layer is grown directly on the AlxIn1-xN layer to a thickness between 10 nm and 200 nm. In embodiments, the stack of semiconductor material layers is disposed on a (100) surface of the substrate offcut to between 4° and 8° toward the [110] direction; and the ternary group III-nitride is selected from the group consisting of AlxGa1-xN, AlwIn1-wN, and InzGa1-zN. It is to be understood that the above description is illustrative, and not restrictive. For example, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order may not be required (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). Furthermore, many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
A thin die package substrate is described that may be produced using existing chemistries. In one example, a package substrate is built over a support material. A dry film photoresist layer is formed over the package substrate. The support material is removed from the package substrate. The dry film photoresist layer is removed from the substrate, and the substrate is finished for use with a package.
1. A method comprising:
forming a package substrate on a support material;
forming a dry film photoresist layer on the package substrate;
removing the support material from the package substrate;
removing the dry film photoresist layer; and
finishing the substrate for packaging.
2. The method of claim 1, wherein finishing the substrate comprises:
applying a solder resist to the substrate; and
applying a metal layer using an SF process.
3. The method of claim 2, wherein applying a metal layer comprises applying a Ni layer, followed by a Pd layer, and then an Au layer.
4. The method of claim 3, wherein the Ni layer is thicker than the Pd layer and the Au layer.
5. The method of claim 4, wherein the Ni layer is at least 10 times thicker than the Pd layer.
6. The method of claim 2, wherein finishing the substrate further comprises applying solder balls to at least a portion of the metal layer.
7. The method of claim 1, wherein forming the package substrate comprises:
plating a metal pattern directly on the support material; and
applying an insulator on the metal pattern.
8. The method of claim 7, wherein plating the metal pattern comprises electrolytically applying a series of metal layers to the support material.
9. The method of claim 8, wherein the series of metal layers comprises Cu, then Ni, and then Cu.
10. The method of claim 7, wherein plating the metal pattern comprises:
patterning a photoresist directly on the support material;
using the photoresist pattern to define the metal pattern during electrolytic plating; and
stripping the photoresist.
11. The method of claim 7, wherein plating the metal pattern comprises electrolytically applying a Cu layer directly on the support material.
12. The method of claim 11, wherein the support material is a Cu plate.
13. A package substrate comprising:
a plurality of insulator layers formed by successive lamination; and
a plurality of contacts formed by plating the contacts on a support material, covering the contacts with a dry film photoresist layer, removing the support material, removing the dry film photoresist, and finishing the contacts.
14. The package substrate of claim 13, further comprising a via drilled through the insulator layers to connect with at least one contact.
15. The package substrate of claim 14, further comprising solder resist connectors, opposite the plurality of contacts, formed on the vias after the support material is removed.
16. The package substrate of claim 13, wherein the substrate is finished by applying a solder resist to the substrate and applying a metal layer using an SF process.
17. The package substrate of claim 13, wherein plating the contacts onto the support material comprises applying a Ni layer, then a Pd layer, and then an Au layer.
18. The package substrate of claim 17, wherein the Ni layer is thicker than the Pd layer and the Au layer.
19. The package substrate of claim 13, wherein plating the contacts onto the support material comprises:
patterning a photoresist directly on the support material;
using the photoresist pattern to define the metal pattern during electrolytic plating; and
stripping the photoresist.
20. The package substrate of claim 19, wherein plating the metal pattern comprises electrolytically applying a Cu layer directly on the support material.
Coreless Substrate Package with Symmetrical External Dielectric Layer

BACKGROUND

Technical Field

The present invention relates to the field of substrates for packaging and mounting semiconductor and micromechanical dies, and more particularly to constructing a coreless substrate on a support material and removing the core before the substrate is completed.

Background Art

Integrated circuits and micromechanical structures are typically formed in groups on a wafer. A wafer is usually a substrate such as silicon and is subsequently cut into dies such that each die contains an integrated circuit or micromechanical structure. Each die is then mounted on a substrate and is typically subsequently packaged. The substrate connects the die to a printed circuit board, socket, or other connection. The package supports or protects the die and also provides other functions such as isolation, insulation, thermal control, and more.

Substrates for these applications are typically made of glass cloth layers pre-impregnated with an epoxy material, such as the pre-impregnated laminate FR-4 commonly used in printed circuit boards. Connection pads and conductive copper traces are then formed on the substrate to provide interconnections between the die and the system to which it is mounted.

In order to reduce the z-height and improve the electrical connection, a coreless substrate is used. In a coreless substrate, connection pads and conductive traces are first formed on a core. After these structures are formed, the core on which the connections are formed is removed. Since a prepreg core can have a thickness of 800 microns or more, removing it can reduce the height of the substrate by more than half. For some coreless technologies, a copper core is used instead of a pre-impregnated core.

However, forming a coreless substrate presents challenges in providing sufficient structural rigidity and appropriate thermal characteristics.
In addition, there are restrictions on forming layers on the core, because only one side of the final substrate is accessible while the other side is blocked by the support material.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like reference characters are used to refer to like features, and wherein:
FIG. 1 is a cross-sectional side view of a coreless substrate attached to a system board and carrying a die according to an embodiment of the present invention;
FIG. 2A is an illustration of a start stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2B is an illustration of a patterning stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2C is an illustration of a plating stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2D is an illustration of a peeling stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2E is an illustration of a layering stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2F is an illustration of a via drilling stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2G is an illustration of an electroless plating stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2H is an illustration of a patterning stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2I is an illustration of a plating stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2J is an illustration of an etching stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2K is an illustration of a layering stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2L is an illustration of a layering stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2M is an illustration of a patterning stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2N is an illustration of a DFR lamination stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2O is an illustration of a core separation stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2P is an illustration of a DFR peeling stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2Q is an illustration of an SR coating stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2R is an illustration of a metal coating stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 2S is an illustration of a pre-soldering stage of a process for manufacturing a coreless substrate according to an embodiment of the present invention;
FIG. 3 is a cross-sectional side view of a support material with a coreless substrate formed on either side according to an embodiment of the present invention; and
FIG. 4 is a cross-sectional side view of a coreless substrate formed on one side of a support material according to an embodiment of the present invention.

DETAILED DESCRIPTION

According to an embodiment of the present invention, a protection step is used to separate the coreless substrate from the support material before the substrate is subjected to a solder resist (SR) step. Once separated, a thin package SR can be used to convert the back end (BE) of a coreless substrate to a standard structure FCBGA (flip chip ball grid array). This allows the use of many conventional chemistries and process steps. It also allows the formation of coreless substrate wiring on both sides of the substrate.

It may be difficult to make coreless packages using existing materials. Some processes have been proposed that require new surface chemistries. New surface chemistries require new investments from substrate suppliers to develop experience and compatibility and to form surface finishes between the top and bottom layers.

According to an embodiment of the invention, the assembly process may use an outer surface finishing layer very similar to that of a substrate with a core. This simplifies manufacturing and also allows coreless packages to be integrated alongside cored packages in a larger system. Such a single surface finishing chemistry enables improved impact resistance and minimizes assembly transparency issues. According to an embodiment of the present invention, Ni (nickel) can be used as a barrier layer for chemical etching of Cu (copper).

According to an embodiment of the present invention, the inner side of the package formed from the coreless substrate will have a thicker Ni layer.
In one example, the Ni layer is approximately one hundred times thicker, and at least ten times thicker, than adjacent layers such as Pd and Au. A thicker Ni layer can also have a different grain structure. In addition, as described below, SR may be formed on both sides of the substrate instead of just one side. In other words, a double-sided SR can be manufactured for a coreless thin package.

Referring to FIG. 1, a portion of an electronic system 72 is shown. The system may be a computer, a portable information manager, a wireless device, an entertainment system, a portable phone or communications manager, or any of a variety of other electronic systems. In the illustrated example, the package 68 is soldered to the motherboard 76, or any other system or logic board. The package is attached with solder balls 74, although any other type of attachment system may be used, including sockets or other mounting appliances. The motherboard provides power, control, and data connections between the package and other components of the electronic system 72.

The illustrated package is an ultra-thin package with a coreless substrate. In this example, the package 68 has a die 66 that is attached to the coreless substrate 24 and contains an electronic or micromechanical system. The coreless substrate has solder balls 74 opposite the die for connection to the motherboard 76. As shown, the die 66 is attached to the substrate 24 using a ball grid array 80 via a series of contact pads 78. The contacts 78 lead to vias 70 which conduct electrically to the solder balls 74. The coreless substrate 24 may include a network of copper traces (not shown) that extend laterally to connect the vias 70 to each other.
The specific number of pads and solder balls, and the connections between them, can be varied to suit any particular implementation.

The package may also include additional components (not shown) such as covers, heat sinks, and cooling equipment such as heat sink fins and liquid cooling contacts. The package can also include additional dies, external connection ports, and additional contacts on the top or sides of the package. Various additional structures can be added or adapted for the package, depending on the specific requirements. As mentioned above, the package may also be adapted for use with a socket (not shown) or other receptacle. The package may thus include a clamping surface, a fixing structure, and conductive connectors to structures on the socket.

Referring to FIG. 2A, the process for manufacturing the coreless substrate 68 starts with the support material 2. The support material can be made from a variety of different materials. These materials can be selected so that the layers of the substrate are easily constructed and the support material is easily removed. In this example, the support material is a copper sheet about 800 microns thick. Other possible materials include silicon and prepreg stacks, such as FR-4. FIG. 2A is a cross-sectional side view of the support material.

In FIG. 2B, a patterned photoresist layer 4 is applied to the top surface of the support material 2. The photoresist layer has lands with gaps therebetween. In the examples described, these layers are applied only to the top surface of the support material. However, similar or identical process steps can also be applied to the underside of the support material simultaneously. This doubles the output per production cycle. In addition, only a single substrate is shown in the figures, but in actual production multiple substrates can be produced side by side and simultaneously on a single support material.

In FIG. 2C, an electrolytic metal plating 6 is applied through the photoresist 4.
This creates contact surfaces in the gaps between the lands. The particular metal may be selected based on the specific implementation; materials other than metals can also be selected. In one example, the plating is formed by electrolytically plating Cu first, then Ni, and then Cu. This is a simpler, faster, and cheaper process than commonly used stacks such as Ni, Pd (palladium), Au (gold), or Cu, Au, Pd, Ni, and it also results in better electrical, thermal, and mechanical properties.

In FIG. 2D, the photoresist is stripped, leaving the metal contacts 6.

In FIG. 2E, an insulating build-up film layer 8, such as an epoxy/phenol novolac resin or other material, is applied over the metal contacts 6. The insulator, which also acts as a filler, provides the physical structure of the substrate after the core is removed, and can be made from a variety of insulating materials with appropriate thermal and mechanical properties. In particular, polymers, silicon-based materials, and plastic resins with silicon dioxide fillers can be used.

In FIG. 2F, laser drilling is used to drill vias 10 in the insulator layer 8. The vias can also be created in various other ways as desired. As shown in the figure, the vias reach the metal contacts 6 from the top of the insulator layer.

In FIG. 2G, an electroless Cu layer 12 is applied to the insulator layer and the vias.

FIG. 2H illustrates starting another layer similar to that formed in FIGS. 2B to 2G. The additional layer allows conductive patterning to connect vias to each other or isolate them from each other. This also enables the production of thicker and more robust coreless substrates. In FIG. 2H, another photoresist layer 14 is applied to the structure. In this example, the photoresist is shown as being applied between the vias.

In FIG. 2I, the top surface 16 of the substrate is plated using a Cu/Ni/Cu process to fill the vias and any other areas between the photoresist. In FIG.
2J, the electroless Cu is flash etched, leaving filled vias and contact pads on top of each via. These contact pads may take the form of copper traces between vias, as mentioned above.

In FIG. 2K, another insulator layer 20 is laminated on top of the substrate.

In FIG. 2L, the insulator is drilled as in FIG. 2F and plated as shown in FIGS. 2F and 2G to form a second-stage filled conductive via 22 through the second insulator layer 20.

In FIG. 2M, a suitable pattern 24 is formed on top of the second via layer as in FIGS. 2H, 2I, and 2J.

In FIG. 2N, a third layer 25 is constructed in a similar manner to the first and second layers. Additional layers can be added, depending on the implementation, to meet physical, electrical, and thermal needs. The top of the top layer is then laminated with DFR (dry film resist) 26. This photoresist layer protects the top of the substrate when the support material is removed. In addition, FIG. 2N shows that an additional metal contact region 27 has been added to the third insulator layer 25. These additional contacts are provided by way of example. In the cross-sectional side view of this example, the electrical path between the contacts is not visible. However, the additional contacts 27 allow various electrical connections between the vias and between different conductors on the die or motherboard.

In FIG. 2O, the support material is separated from the substrate. This creates pockets at the contact pads 6 in the bottom surface of the substrate that can serve as connection or attachment points on the substrate. The pockets are aligned with the vias 10 drilled in FIG. 2F.

The figures above describe an example of manufacturing the coreless substrate 68. The number of layers can be modified to suit any specific implementation. After the top Cu plating, the DFR stack 26 can be used as a protective layer.
This allows the support material 2 to be separated using electrolytic Ni as a Cu etch barrier. The DFR 26 can then be peeled off or etched away as shown in FIG. 2P, exposing the previously protected contact pads 24. SR (solder resist) coatings 28, 32 can then be applied to both sides of the substrate as shown in FIG. 2Q.

The exposed metal surfaces 27, 34 can then be finished with, for example, an electroless Ni/Pd/Au coating 36, 38, as shown in FIG. 2R. However, a variety of different materials can be used. In this example, a thick Ni layer is followed by Pd plating, and then Au plating. The Ni layer can be a hundred times thicker than the other layers.

Finally, in FIG. 2S, pre-solder 40 is applied to the top plated contact areas between the solder resists. In this example, the bottom contacts are not further processed. The pre-solder can be used for C4 (Controlled Collapse Chip Connection) pads and, as shown with reference to FIG. 2O, the interconnection or wiring can be formed by Cu or other electrolytic plating at the C4 pad layer.

Alternatively, SR printing can be performed on both sides, with or without the surface finish 36, 38. Another alternative is that, after separation of the core (FIG. 2O), a dry-film-type SR stack can be applied to the bottom side. Yet another alternative is to use a PET (polyethylene terephthalate) stack instead of a DFR stack. The PET stack can be applied after the top Cu layer is plated, and it acts as a protective layer during core separation; electrolytic Ni is still used as a Cu etch barrier. The PET stack can be removed later. The SR coating can be applied to one or both sides, and the electroless Ni/Pd/Au surface finish can be applied as shown in the figure. In this example as well, the surface finish metal layer may be made of various materials.
This Ni/Pd/Au layer may be a thick Ni layer followed by Pd plating, and then Au plating.

As shown in the figures, SR can be used to cover the insulators of a substrate and even to support different types of contacts. On the top side of the substrate of FIG. 2S, C4 (Controlled Collapse Chip Connection) pads are used. The insulator is stacked between the pads, and the SR covers the insulator layer. The bottom side of the structure, on the other hand, is suitable for use with a BGA (ball grid array). As shown, the SR also covers the insulator on the BGA side.

SR protection on the bottom side also allows connections on the bottom side to be routed in the substrate. As shown in FIG. 2D, the bottom side begins by plating the metal pads 6 directly onto the support material 2. An advantage of the double-sided SR is that it avoids metal-defined pads by allowing the outer SR layer to overlap the metal pads. This helps preserve the mechanical strength of the substrate by reducing fracture initiation near the bottom side. Since these layers are typically exposed to the environment during surface finishing of the substrate, wiring cannot easily be routed in the inner layer, and any such wiring may be unreliable. By applying the SR layer 32 on the bottom side as shown in FIG. 2Q, the wiring can be patterned onto the top and bottom sides without any risk from the environment.

FIG. 3 shows an example of simultaneously manufacturing two substrates, one on either side of the support material. In FIG. 3, at the center of the structure 107 is a support material 112 that has been patterned with connection pads 114. Three insulator layers 115, 139, and 143 are laminated over these connection pads, with vias 136, 140, 144 drilled through each layer to form connections from the outside of the substrate to the support material at the center. The top and bottom substrate structures in FIG. 3 are the same. FIG.
3 shows that applying the same process on both sides simultaneously results in substantially the same structure on either side of the copper core. The precise nature of further processing can be adapted to suit different implementations.

FIG. 4 shows a substrate manufacturing structure 108 in a similar situation. However, in the example of FIG. 4, the substrates are stacked on only one side of the support material. Such a method may be better suited to certain processes, manufacturing equipment, or designs. In FIG. 4, the same reference numerals as in FIG. 3 are used, and the corresponding elements are the same.

FIGS. 3 and 4 show an intermediate situation between FIGS. 2M and 2N, and suggest a possible variation on the sequence shown in those figures. In FIGS. 3 and 4, the SR process and surface finish (SF) layer are applied before the support material is removed, which differs from FIGS. 2O, 2P, 2Q, and 2R. This results in the structures of FIGS. 3 and 4. For subsequent processing, a DFR stack is applied to the structure of FIGS. 3 and 4, the core is separated, the DFR is peeled off, and the contact pads or connections are then surface finished.

FIG. 5 shows the operations described in the context of FIGS. 2A to 2S as a process flow diagram. Operation begins with a support material made of copper, prepreg, or any other suitable material. At block 202, the core is patterned using a photoresist to form the connection points that will be located at the bottom of the final substrate. At block 204, electrical connection points are formed; in the example above, these were formed using electroplated Cu, followed by Ni, followed by Cu. At block 206, the photoresist is stripped, leaving the contact pads.

At block 208, a first insulator layer is laminated over the contact pads. This starts the formation of the layers that will eventually form the substrate structure. At block 210, a conductive via is formed through the insulator down to the contact pads.
This is done by first laser drilling and then plating with copper or any other suitable conductor. At block 212, a contact pad is formed on the via by patterning, filling with copper, and then etching.

At block 214, the process returns to block 208 until sufficient layers have been formed. In short, the lamination and via formation are repeated to form a desired number of additional layers of the substrate. This thickens and strengthens the substrate so that it can later support the die.

At block 216, a DFR stack is applied to the structure to protect the vias and contact pads. Then, at block 218, the support material is separated from the substrate and the DFR is peeled off. At block 220, SR is applied and patterned to form openings for the contact pads. At block 222, the contact pads are finished by an SF process using Ni, then Pd, and then Au. Finally, at block 224, the contact pads are given a suitable surface, such as solder balls for C4 pads. Alternatively, an additional finishing step can be used for the reverse side, i.e., the side that was formerly attached to the support material.

The finished substrate may then be attached to one or more dies. Leads and other components can be attached as needed. The resulting structure can then be used to form a package as shown in FIG. 1.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, but does not mean that it is present in every embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment of the invention. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments.
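The block sequence of FIG. 5, described above, can be summarized as a simple ordered flow. This is an illustrative paraphrase in Python; the block numbers are from the text, while the step descriptions are shortened summaries rather than language from the flow diagram itself.

```python
# Paraphrase of the FIG. 5 process flow; block numbers are from the text,
# step descriptions are shortened summaries (not patent language).
flow = [
    (202, "pattern photoresist on the core to define bottom connection points"),
    (204, "plate electrical connection points (Cu, then Ni, then Cu)"),
    (206, "strip the photoresist, leaving contact pads"),
    (208, "laminate an insulator layer over the contact pads"),
    (210, "laser-drill and plate conductive vias down to the pads"),
    (212, "pattern, copper-fill, and etch contact pads over the vias"),
    (214, "repeat 208-212 until enough layers are formed"),
    (216, "apply a DFR stack to protect vias and pads"),
    (218, "separate the support material and peel the DFR"),
    (220, "apply and pattern solder resist openings"),
    (222, "surface-finish the pads with Ni, then Pd, then Au"),
    (224, "finish pads, e.g. solder balls for C4"),
]

# Block numbers should be strictly increasing, matching the flow diagram.
assert all(a[0] < b[0] for a, b in zip(flow, flow[1:]))
for block, step in flow:
    print(f"block {block}: {step}")
```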
In other embodiments, various additional layers and/or structures may be included, and/or the features described may be omitted. Various operations are described as multiple discrete operations to aid understanding; however, the order of description should not be interpreted to mean that these operations are order dependent. In particular, these operations need not be performed in the order given, and the operations described may be performed in a different order than in the described embodiments. Various additional operations may be performed, and the operations described may be omitted.

Many modifications and variations are possible in light of the above teachings. Various equivalent combinations and substitutions can be made for the various components and operations shown in the figures. The scope of the invention is not limited by this detailed description, but is instead defined by the appended claims.

The example of the cleaning process described above is provided only as an example. There may be other chemical processes that decompose, convert to gas, or otherwise eliminate optically induced defects on the mask. The example above shows how combining light, heating, and exposure to gases such as air, oxygen, and water vapor can partially or completely eliminate these compounds, and can reduce or completely eliminate the various types of optically induced defects on the surface of the photomask. Specific combinations of illumination, heating, vacuum, and other parameters can be selected in conjunction with the examples considered above. Alternatively, a specific combination may be selected based on the parameters described above and then optimized by trial and error.

More or less complex cleaning chambers, sets of cleaning operations, masks, and films can be used in place of those shown and described herein.
Accordingly, the configuration can be changed on a case-by-case basis based on a variety of factors, such as price constraints, performance requirements, technical improvements, or other circumstances. Embodiments of the invention may also be applied to other types of lithography systems using materials and devices other than those shown and described herein, such as EUV lithography. Although the above description mainly refers to 193 nm lithography equipment and technology, the invention is not limited to this and can be applied to a wide range of other wavelengths and other process parameters. In addition, the invention can be applied to the production of semiconductors, microelectronics, micromechanics, and other devices using photolithography.

In the above description, several specific details have been set forth. It should be understood, however, that embodiments of the invention may be practiced without these specific details. For example, well-known equivalent materials may be substituted for those described herein, and similarly, well-known equivalent techniques may be substituted for the particular process technology disclosed. In addition, steps and operations may be removed from, or added to, the operations described, to enhance the effect or add additional functions. In other instances, well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring the description.

Although embodiments of the present invention have been described in terms of several examples, those skilled in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is therefore to be regarded as illustrative instead of limiting.
An independently accessed double-gate transistor and a tri-gate transistor fabricated in the same process flow are described. An insulative plug is removed from above the semiconductor body of the tri-gate device, but not the I-gate device. This allows, for instance, metalization to form on three sides of the tri-gate device, while allowing independent gates for the I-gate device.
CLAIMS What is claimed is: 1. A method comprising: forming at least two silicon bodies having overlying insulative members; patterning a sacrificial layer defining gate regions, intersecting the silicon bodies; enclosing the patterned sacrificial layer in a dielectric material; covering one of the insulative members; removing the other of the insulative members; removing the patterned sacrificial layer; and forming an insulative layer and metal layer within the gate regions. 2. The method of claim 1, including planarizing the dielectric material to expose the insulative members. 3. The method defined by claim 2, wherein the silicon bodies comprise a monocrystalline silicon. 4. The method defined by claim 3, wherein the insulative members comprise silicon nitride. 5. The method defined by claim 4, wherein the sacrificial layer comprises polysilicon. 6. The method defined by claim 5, wherein the planarizing comprises chemical-mechanical polishing. 7. The method defined by claim 1, including removing the insulative members to the extent that they are exposed following the patterning of the sacrificial layer. 8. The method defined by claim 1, including forming source and drain regions in the silicon bodies after the patterning of the sacrificial layer. 9. The method defined by claim 8, wherein the doping of the source and drain regions is done in two doping processes, one before formation of sidewall spacers and one after the formation of sidewall spacers. 10. The method defined by claim 1, wherein the removal of the other of the insulative members is done with an etchant that has a higher etch rate for the insulative member than for the dielectric material. 11.
A method comprising: defining on an insulative layer, a first and a second silicon body with a first and a second insulative member, respectively; removing at least a portion of the first insulative member while leaving in place the second insulative member; forming a first gate structure on opposite sides of the second silicon body, the first gate structure having two independent gates; and forming a second gate structure on three sides of the first silicon body. 12. The method defined by claim 11, wherein the first and second gate structures are metal, insulated from their respective silicon bodies by high-k insulation. 13. The method defined by claim 12, wherein the gate structures are formed by removal of a sacrificial layer surrounded by an interlayer dielectric. 14. The method defined by claim 13, wherein the sacrificial layer comprises polysilicon and the insulative members comprise silicon nitride. 15. An integrated circuit comprising: a substrate; a first transistor on the substrate having a first body surrounded on three sides by a first metal gate; and a second transistor on the substrate having a second body having two independent metal gates on opposite sides of the second body. 16. The circuit defined by claim 15, wherein the first and second bodies comprise a monocrystalline silicon. 17. The circuit defined by claim 16, including an insulative member disposed on the second body between the independent gates. 18. The circuit defined by claim 17, wherein the insulative member comprises silicon nitride. 19. The circuit defined by claim 15, including a plurality of the first and second transistors, some of which are n channel transistors and others of which are p channel transistors. 20. The circuit defined by claim 19, wherein the bodies of the transistors comprise monocrystalline silicon.
INDEPENDENTLY ACCESSED DOUBLE-GATE AND TRI-GATE TRANSISTORS IN SAME PROCESS FLOW

FIELD OF THE INVENTION [0001] The invention relates to the field of semiconductor processing.

PRIOR ART AND RELATED ART [0002] One relatively recent development in semiconductor processing is the independently-controlled double-gate (I-gate) transistor. This transistor has two gates disposed on opposite sides of a channel, each gate being independently controlled. Independent gate control provides some unique transistor characteristics and enables, for example, a single-body, dynamic random-access memory (DRAM) cell. [0003] Another relatively recent development in semiconductor processing is the tri-gate transistor. Here, a gate is formed on three sides of a channel region. This transistor, particularly when used with a high-k insulator and metal gate, provides substantial performance improvements. [0004] Several I-gate structures have been proposed. This and other related technology is described in M. Chan, IEEE Electron Device Letters, Jan. 1994; C. Kuo, IEDM, Dec. 2002, "A Hypothetical Construction of the Double Gate Floating Body Cell;" T. Ohsawa, et al., IEEE Journal of Solid-State Circuits, Vol. 37, No. 11, November 2002; David M. Fried, et al., "High-Performance P-Type Independent-Gate FinFETs," IEEE Electron Device Letters, Vol. 25, No. 4, April 2004; and David M. Fried, et al., "Improved Independent Gate N-Type FinFET Fabrication and Characterization," IEEE Electron Device Letters, Vol. 24, No. 9, September 2003. [0005] Tri-gate structures are described in, for instance, publication number U.S. 2004-0036127-A1.
BRIEF DESCRIPTION OF THE DRAWINGS [0006] Figure 1A is a perspective view of a substrate which includes two silicon bodies with overlying insulative members. [0007] Figure 1B is a cross-sectional, elevation view of the structure of Figure 1A taken through section line 1B-1B of Figure 1A. [0008] Figure 2A illustrates the structure of Figure 1A following the patterning of a sacrificial layer. [0009] Figure 2B is a cross-sectional, elevation view of the structure of Figure 2A taken through section line 2B-2B of Figure 2A. [0010] Figure 3 is a perspective view of the structure of Figure 2A following the deposition of an interlayer dielectric (ILD). [0011] Figure 4A is a perspective view of the structure of Figure 3 following planarization. [0012] Figure 4B is a cross-sectional, elevation view taken through section line 4B-4B of Figure 4A. [0013] Figure 5 is a perspective view of the structure of Figure 4A following the covering of a section of the substrate on which an I-gate transistor is fabricated. [0014] Figure 6A is a perspective view of the structure of Figure 5 following an etching step. [0015] Figure 6B is a cross-sectional view of the structure of Figure 6A taken through section line 6B-6B of Figure 6A. [0016] Figure 7A is a perspective view of the structure of Figure 6A following removal of the patterned, sacrificial layer. [0017] Figure 7B is a cross-sectional, elevation view of the structure of Figure 7A taken through section line 7B-7B of Figure 7A. [0018] Figure 8 is a cross-sectional, elevation view of the structure of Figures 7A and 7B following the formation of an insulative layer and a metal layer. [0019] Figure 9A is a perspective view of the structure of Figure 8 following planarization of the metal layer. [0020]
Figure 9B is a perspective view of the structure of Figure 9A with the ILD removed.

DETAILED DESCRIPTION [0021] In the following description, the fabrication of an independently accessed, double-gate (I-gate) transistor and a tri-gate transistor on a common substrate is described. Numerous specific details are set forth, such as specific materials, in order to provide a thorough understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known processing steps have not been described in detail in order not to unnecessarily obscure the present invention. For example, well-known cleaning steps, and some protective layers often used in the fabrication of integrated circuits, are not described. [0022] The method which follows describes the formation of both the I-gate transistor and the tri-gate transistor in a single process flow. While the fabrication of only a single I-gate transistor and a single tri-gate transistor is illustrated, it will be apparent to one skilled in the art that in a typical integrated circuit, numerous such transistors are simultaneously fabricated. Moreover, the I-gate and tri-gate transistors may be fabricated wherever needed in the integrated circuit. Thus, a single circuit, such as a buffer, may have both I-gate and tri-gate transistors. In some cases, for example in a DRAM, an array of memory cells using only I-gate transistors may be fabricated and connected to peripheral circuits which use both I-gate and tri-gate transistors. A memory using I-gate memory cells is described in "Memory with Split-Gate Devices and Method of Fabrication," Serial No. 10/816,282, filed March 31, 2004, and assigned to the assignee of the present application. [0023] In one embodiment, the transistors are fabricated on an oxide layer 10 which is formed on a silicon substrate 12.
The transistor bodies are fabricated from a monocrystalline silicon layer 14 (shown in dotted lines in Figures 1A and 1B) disposed on layer 10. This silicon-on-insulator (SOI) substrate is well known in the semiconductor industry; as shown, the layer 14 is disposed on the layer 10. By way of example, the SOI substrate is fabricated by bonding the oxide layer 10 and a silicon layer 14 onto the substrate 12, and then planarizing the layer 14 so that it is relatively thin. This relatively thin, low body effect layer is used to form the bodies of the active devices, as mentioned. Other techniques are known for forming an SOI substrate, including, for instance, the implantation of oxygen into a silicon substrate to form a buried oxide layer. In the subsequent cross-sectional views, the transistors are shown fabricated on the oxide layer 10; the underlying silicon substrate 12 is not shown. [0024] The layer 14 may be selectively ion-implanted with an n-type dopant in the regions where n channel devices are to be fabricated, and with a p-type dopant in those regions where p channel devices are to be fabricated. This provides the relatively light doping typically found in the channel regions of MOS devices fabricated in a CMOS integrated circuit. Both the I-gate and tri-gate transistors may be fabricated with the described process as either p channel or n channel devices. (The doping of the channel regions of the transistors may be done at other points in the process flow, such as the points in the process shown in Figures 1A or 7A.) [0025] In the processing for one embodiment, a protective oxide (not shown) is disposed on the silicon layer 14, followed by the deposition of a silicon nitride layer. The nitride layer is masked to define a plurality of insulative members, such as members 17 and 18 of Figures 1A and 1B.
Then, the underlying silicon layer 14 is etched in alignment with these members, resulting in the silicon bodies 15 and 16. [0026] The width of the silicon bodies 15 and 16 may be the critical dimension in a particular process; for instance, in a 30 nanometer (nm) gate length process, these bodies may have a width of 30 nm. The thickness of the layer 14, and of the silicon nitride layer from which the members 17 and 18 are formed, may each be, by way of example, in the range of 10-50 nm. [0027] Next, a sacrificial layer is deposited over the structure of Figure 1A on the oxide layer 10. In one embodiment, this layer is a polysilicon layer 50-100 nm thick. Other materials may be used for the sacrificial layer. The material for the sacrificial layer should be able to protect the channel regions of the devices from ion implantation during the formation of the source and drain regions, as will be described. Moreover, the sacrificial layer should be able to be etched without destroying the integrity of an ILD formed around the sacrificial layer after patterning, as will be described. Additionally, the insulative members must be able to be selectively etched in the presence of the sacrificial layer. [0028] Next, the sacrificial layer is patterned into gate-defining members, shown as members 20 and 22 in Figure 2A. The member 20 occupies the region in which the two gates for the I-gate transistor are fabricated, as well as "fins" for these gates to allow contact with the gates, as shown later. The member 22 occupies the region in which the tri-gate is formed for the tri-gate transistor, as well as a fin, again for contact. [0029] At this point in the processing, the silicon nitride members 17 and 18 may be etched in alignment with the members 20 and 22, thereby exposing portions of the underlying silicon bodies 15 and 16.
As shown by the arrows 25, the silicon bodies, to the extent they are not covered by the members 20 and 22, are ion implanted to form source and drain regions for both the I-gate and tri-gate transistors. As is commonly done, but not shown, separate ion implantation steps are used for the p channel and n channel devices, with protective layers being used to permit separate implantation of the sources and drains for the p channel and n channel devices. [0030] Alternatively, the silicon nitride members 17 and 18 may remain in place, and the source and drain regions implanted at an angle, so that the dopant enters the sides of the silicon bodies 15 and 16. [0031] Additionally, spacers may be formed to allow a more lightly doped source and drain region to be implanted adjacent the channel region, and more heavily doped source and drain regions spaced apart from the channel region. This is described in the above-referenced application serial number 10/816,282. [0032] An ILD 30 is now formed on the insulative layer 10, as shown in Figure 3. The ILD 30 surrounds the members 20 and 22 and, as will be seen, allows the inlay of metal once the polysilicon is removed. The ILD 30 may be, for instance, a chemical vapor deposited (CVD) silicon dioxide layer. [0033] The structure of Figure 3 is now planarized, for instance, with a chemical mechanical polishing (CMP) process, so as to expose the silicon nitride insulative members 17 and 18. This is illustrated in both Figures 4A and 4B. Note that the members 17 and 18 are flush with the upper surface of the ILD 30, as are the members 20 and 22. [0034] Now, a photoresist layer is deposited over the structure of Figures 4A and 4B, and patterned so as to remain in place over the I-gate transistor region. The photoresist layer 50 covers the insulative member 17. As shown in Figure 5, the photoresist layer 50 leaves exposed the insulative member 18 of the tri-gate device.
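The masking step above is what differentiates the two devices: the covered nitride member survives to separate two gates, while the exposed member is removed so that metal can later wrap the body on three sides. This can be sketched as a small illustrative model (not patent language):

```python
# Illustrative sketch of the selective nitride etch: the resist-covered
# member survives and later separates two gates (I-gate); the exposed
# member is removed so metal wraps the body on three sides (tri-gate).
def gate_topology(member_covered_by_resist):
    """Return the gate topology that results for one silicon body."""
    if member_covered_by_resist:
        # Member 17 remains: two independent gates on opposite sidewalls.
        return "I-gate: two independent gates"
    # Member 18 removed: a single gate on both sidewalls and the top.
    return "tri-gate: one gate on three sides"

assert gate_topology(True).startswith("I-gate")
assert gate_topology(False).startswith("tri-gate")
print(gate_topology(True))
print(gate_topology(False))
```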
[0035] Then, as shown in Figures 6A and 6B, an etching process is used to remove the plug-shaped silicon nitride member 18. An etchant that discriminates between the silicon nitride and both the ILD 30 and the sacrificial layer is used, so that the ILD 30 and member 22 remain substantially intact. A dry or wet etchant may be used. Once the member 18 is removed, the underlying silicon body 16, as shown in Figure 6B, is exposed. [0036] The polysilicon sacrificial layer is next removed with, for example, a wet etch process. The resultant structure is shown in Figures 7A and 7B. The remaining ILD 30 now defines a form in which the gates for the transistors may be fabricated. [0037] A gate dielectric layer 60 is formed on and around each of the semiconductor bodies 15 and 16, as seen in Figure 8. Specifically, a gate dielectric may be deposited such that it covers the top surface of the semiconductor body 16 and the insulative member 17, as well as the opposite sidewalls of each of the semiconductor bodies. This gate dielectric ideally has a high dielectric constant, such as a metal oxide dielectric, for instance, HfO2 or ZrO2, or other high-k dielectrics, such as PZT or BST. A high-k dielectric film can be formed by any well-known technique, such as by chemical vapor deposition (CVD). Alternatively, the gate dielectric can be a grown dielectric. In one embodiment, the gate dielectric layer 60 is a silicon dioxide film grown with a dry/wet oxidation process. For example, the silicon dioxide film is grown to a thickness of between 5-50 Å. (A conformally deposited dielectric layer is shown in Figure 8.) [0038] Next, as shown in Figure 8, a gate electrode (metal) layer 61 is formed over the gate dielectric layer 60. The gate electrode layer 61 may be formed by blanket deposition of a suitable gate electrode material. In one embodiment, the gate electrode material comprises a metal film such as tungsten, tantalum, titanium, and/or nitrides and alloys thereof.
For the n channel I-gate and tri-gate transistors, a work function in the range of 4.0 to 4.6 eV may be used. For the p channel I-gate and tri-gate transistors, a work function of 4.6 to 5.2 eV may be used. Consequently, for substrates with both n channel and p channel transistors, two separate metal deposition processes may need to be used.

[0039] The metal layer 61 is planarized using, for example, CMP, and such planarization continues until at least the upper surface of the insulative member 17 is exposed, as shown in Figure 9A. This is done in order to assure that no metal spans the member 17, since otherwise the gates of the I-gate transistor would be shorted together. As can be seen in Figure 9A, there are two independent gates 62 and 64 for the I-gate transistor, and a single gate 65 for the tri-gate device.

[0040] The gate 65 for the tri-gate transistor has a top surface opposite the bottom surface and has a pair of laterally opposite sidewalls formed adjacent the tri-gate structure, best seen in Figure 9B. These sidewalls are connected over the upper surface of the silicon body. Thus, the gate surrounds the channel region of the tri-gate transistor on three sides. For the I-gate transistor, the two independent gates 62 and 64 are separated by the insulative member 17, again best seen in Figure 9B, where the ILD is shown removed.

[0041] Also best seen in Figure 9B, the silicon bodies 15 and 16 are shown on the insulative layer 10. Source regions 68 and 70 are shown for each of the transistors, along with drain regions 71 and 72. The independent gates 62 and 64, along with their orthogonally disposed fins, are readily seen. The same is true for the gate 65. These fins allow easier contact to be made to the gates from an overlying metallization layer, as shown by contact regions 80, 81 and 82. While not shown in Figure 9B, contact is also made to the source and drain regions, as well as to the gates, from overlying metallization layers.
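The gate stack described in paragraphs [0037]-[0038] mixes grown SiO2 (5-50 Å) with high-k alternatives such as HfO2. These can be compared with the standard equivalent-oxide-thickness figure of merit, which is not stated in this document; the k value for HfO2 below is an assumed, illustrative number:

```python
# Equivalent oxide thickness (EOT) of a high-k gate dielectric -- a standard
# figure of merit, not a formula from this document:
#   EOT = t_physical * (k_SiO2 / k_highk), with k_SiO2 = 3.9.
K_SIO2 = 3.9

def eot_angstroms(t_physical_a, k_highk):
    """Electrical thickness of a high-k film expressed as equivalent SiO2."""
    return t_physical_a * K_SIO2 / k_highk

# A 40 A HfO2 film with an assumed k of about 20 is electrically equivalent
# to 40 * 3.9 / 20 = 7.8 A of SiO2, illustrating why a high-k dielectric
# permits a physically thicker (lower-leakage) film than grown SiO2 in the
# 5-50 A range mentioned above.
```

This is only a first-order comparison; interfacial layers and work-function effects, which the text addresses separately, are ignored here.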
[0042] I-gate transistors may be used in logic circuits along with the tri-gate transistors. I-gate transistors have characteristics which make them desirable in certain circuits. For instance, a single I-gate transistor may provide both a high current and a medium current device, depending on the potential applied to one or both gates. Such devices may provide a "strong off" device to reduce leakage in a sleep mode or power-down mode. I-gate transistors also provide a device for pre-charging lines by allowing a trickle current. In the above-mentioned patent application, the I-gate devices are used as DRAM cells, and the process described above may be used in connection with such DRAM fabrication. In this case, the silicon body 15 is an elongated body formed in a plurality of parallel, spaced-apart lines and used in an array of DRAM cells.

[0043] While two separate silicon bodies are shown in the figures, it will be appreciated that a single body may be used. Then, a tri-gate and an I-gate transistor may be fabricated in series with one another. In this case, the series transistors share a common source/drain region.

[0044] Thus, a process has been described, and a resultant structure, for an integrated circuit having both an I-gate and a tri-gate structure on a common substrate.
A method of manufacturing a semiconductor device may include forming a fin on an insulator and forming a gate oxide on sides of the fin. The method may also include forming a gate structure over the fin and the gate oxide and forming a dielectric layer adjacent the gate structure. Material in the gate structure may be removed to define a gate recess. A width of a portion of the fin below the gate recess may be reduced, and a metal gate may be formed in the gate recess.
1. A method of manufacturing a semiconductor device, comprising:
forming a fin structure on an insulator;
forming a gate structure over a portion of the fin structure, the gate structure comprising a semiconducting material or a metal;
forming a dielectric layer adjacent the gate structure;
removing the semiconducting material or the metal in the gate structure;
reducing a width of a portion of the fin structure; and
depositing a metal to replace the removed semiconducting material or metal in the gate structure.

2. The method of claim 1, wherein the forming a fin structure includes:
depositing a dielectric layer on a silicon layer, and
etching the dielectric layer and the silicon layer to define the fin structure including a silicon fin and a dielectric cap.

3. The method of claim 2, further comprising:
growing oxide layers on sides of the silicon fin.

4. The method of claim 3, further comprising, after said removing and before said reducing:
removing the oxide layers on sides of the silicon fin.

5. The method of claim 1, wherein the forming a gate structure includes:
depositing a gate material over the fin structure, and
selectively etching the gate material to define the gate structure.

6. The method of claim 1, wherein the forming a dielectric layer includes:
depositing an oxide material over the gate structure, and
polishing the oxide material until a top surface of the oxide material is coplanar with a top surface of the gate structure and the top surface of the gate structure is exposed.

7. The method of claim 1, wherein the removing the semiconducting material or the metal in the gate structure includes:
etching the gate structure to form a gate recess.

8. The method of claim 7, wherein the reducing includes:
reducing the width of the portion of the fin structure below the gate recess.

9. The method of claim 1, wherein the reducing includes:
reducing the width of the portion of the fin structure by about 30 nm to about 80 nm in a channel region of the semiconductor device.

10.
The method of claim 1, wherein the reducing includes:
wet etching the portion of the fin structure to reduce the width.

11. The method of claim 1, further comprising:
removing the dielectric layer.

12. A method of manufacturing a semiconductor device, comprising:
forming a fin on an insulator;
forming a gate oxide on sides of the fin;
forming a gate structure over the fin and the gate oxide, the gate structure comprising a semiconducting material;
forming a dielectric layer adjacent the gate structure;
removing the semiconducting material in the gate structure to define a gate recess;
reducing a width of a portion of the fin below the gate recess; and
forming a metal gate in the gate recess.

13. The method of claim 12, further comprising, after said removing and before said reducing:
removing the gate oxide on the sides of the fin.

14. The method of claim 12, wherein the reducing includes:
reducing the width of the portion of the fin by about 30 nm to about 80 nm.

15. The method of claim 12, wherein the forming includes:
forming the fin with a width between about 40 nm and about 100 nm.

16. The method of claim 15, wherein the reducing includes:
reducing the width of the portion of the fin to a width between about 10 nm and about 50 nm.

17. The method of claim 12, further comprising, before said forming a metal gate:
forming a gate dielectric on at least the sides of the fin.

18.
A method of manufacturing a semiconductor device, comprising:
forming a fin on an insulator;
forming a dielectric cap over the fin;
forming gate oxide layers on opposite sides of the fin;
forming a gate structure over the fin and dielectric cap, the gate structure comprising a semiconducting material or a metal;
forming a dielectric layer adjacent the gate structure;
removing the gate structure to define a gate recess within the dielectric layer and to expose the dielectric cap and gate oxide layers;
removing the gate oxide layers from the opposite sides of the fin to expose the fin from the dielectric cap down to the insulator;
reducing a width of the fin below the gate recess; and
forming a metal gate in the gate recess.

19. The method of claim 18, wherein the reducing includes:
reducing the width of the fin below the gate recess by about 30 nm to about 80 nm.

20. The method of claim 18, wherein the forming a fin includes:
forming the fin with a width between about 40 nm and about 100 nm, and
wherein the reducing includes:
reducing the width of the fin below the gate recess to a width between about 10 nm and about 50 nm.
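The independent claims recite an ordered damascene flow, and dependent claim 13 pins the gate-oxide removal between dummy-gate removal and fin thinning. That ordering can be sketched as a checked step list; the step names below are illustrative shorthand, not claim language:

```python
# Illustrative sketch of the claim-12/claim-13 step ordering; step names
# are shorthand for the claimed acts, not language from the patent.
CLAIM_12_FLOW = [
    "form_fin",           # forming a fin on an insulator
    "form_gate_oxide",    # forming a gate oxide on sides of the fin
    "form_dummy_gate",    # forming a gate structure over the fin and oxide
    "form_dielectric",    # forming a dielectric layer adjacent the gate
    "remove_dummy_gate",  # removing the material to define a gate recess
    "remove_gate_oxide",  # claim 13: after said removing, before said reducing
    "reduce_fin_width",   # reducing the fin width below the gate recess
    "form_metal_gate",    # forming a metal gate in the gate recess
]

def ordered(flow, earlier, later):
    """True if step `earlier` occurs before step `later` in the flow."""
    return flow.index(earlier) < flow.index(later)

# Claim 13's constraint: oxide removal after dummy-gate removal and
# before fin thinning.
assert ordered(CLAIM_12_FLOW, "remove_dummy_gate", "remove_gate_oxide")
assert ordered(CLAIM_12_FLOW, "remove_gate_oxide", "reduce_fin_width")
```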
TECHNICAL FIELD

The present invention relates to semiconductor devices and methods of manufacturing semiconductor devices. The present invention has particular applicability to double-gate devices.

BACKGROUND ART

The escalating demands for high density and performance associated with ultra large scale integration semiconductor devices require design features, such as gate lengths, below 100 nanometers (nm), high reliability and increased manufacturing throughput. The reduction of design features below 100 nm challenges the limitations of conventional methodology.

For example, when the gate length of conventional planar metal oxide semiconductor field effect transistors (MOSFETs) is scaled below 100 nm, problems associated with short channel effects, such as excessive leakage between the source and drain, become increasingly difficult to overcome. In addition, mobility degradation and a number of process issues also make it difficult to scale conventional MOSFETs to include increasingly smaller device features. New device structures are therefore being explored to improve FET performance and allow further device scaling.

Double-gate MOSFETs represent new structures that have been considered as candidates for succeeding existing planar MOSFETs. In several respects, the double-gate MOSFETs offer better characteristics than the conventional bulk silicon MOSFETs. These improvements arise because the double-gate MOSFET has a gate electrode on both sides of the channel, rather than on only one side as in conventional MOSFETs. When there are two gates, the electric field generated by the drain is better screened from the source end of the channel. Also, two gates can control roughly twice as much current as a single gate, resulting in a stronger switching signal.

A FinFET is a recent double-gate structure that exhibits good short channel behavior. A FinFET includes a channel formed in a vertical fin.
The FinFET structure may be fabricated using layout and process techniques similar to those used for conventional planar MOSFETs.

DISCLOSURE OF THE INVENTION

Implementations consistent with the present invention provide a method of forming a FinFET device that includes a metal gate using a damascene process. The thickness of a fin in a channel region may be reduced after removal of a dummy gate.

Additional advantages and other features of the invention will be set forth in part in the description which follows, and in part will become apparent to those having ordinary skill in the art upon examination of the following, or may be learned from practice of the invention. The advantages and features of the invention may be realized and obtained as particularly pointed out in the appended claims.

According to the present invention, the foregoing and other advantages are achieved in part by a method of manufacturing a semiconductor device that includes forming a fin structure on an insulator and forming a gate structure over a portion of the fin structure. The method may also include forming a dielectric layer adjacent the gate structure and removing material in the gate structure. A width of a portion of the fin structure may be reduced. A metal may be deposited to replace the removed material in the gate structure.

According to another aspect of the invention, a method of manufacturing a semiconductor device may include forming a fin on an insulator and forming a gate oxide on sides of the fin. The method may also include forming a gate structure over the fin and the gate oxide and forming a dielectric layer adjacent the gate structure. Material in the gate structure may be removed to define a gate recess.
A width of a portion of the fin below the gate recess may be reduced, and a metal gate may be formed in the gate recess.

According to a further aspect of the invention, a method of manufacturing a semiconductor device may include forming a fin on an insulator and forming a dielectric cap over the fin. The method may also include forming gate oxide layers on opposite sides of the fin and forming a gate structure over the fin and dielectric cap. The method may further include forming a dielectric layer adjacent the gate structure and removing the gate structure to define a gate recess within the dielectric layer and to expose the dielectric cap and gate oxide layers. The gate oxide layers may be removed from the opposite sides of the fin, and a width of the fin below the gate recess may be reduced. A metal gate may be formed in the gate recess.

Other advantages and features of the present invention will become readily apparent to those skilled in this art from the following detailed description. The embodiments shown and described provide illustration of the best mode contemplated for carrying out the invention. The invention is capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawings are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

Reference is made to the attached drawings, where elements having the same reference number designation may represent like elements throughout.

FIG. 1 is a cross-section illustrating exemplary layers that may be used for forming a fin in accordance with an embodiment of the present invention.

FIG. 2A schematically illustrates the top view of a fin structure in accordance with an exemplary embodiment of the present invention.

FIG. 2B is a cross-section illustrating the formation of the fin structure of FIG. 2A in accordance with an exemplary embodiment of the present invention.

FIG.
3 is a cross-section illustrating the formation of a gate oxide and gate material on the device of FIG. 2B in accordance with an exemplary embodiment of the present invention.

FIG. 4 is a cross-section illustrating the planarizing of the gate material of FIG. 3 in accordance with an exemplary embodiment of the present invention.

FIG. 5A schematically illustrates a top view of a FinFET structure in accordance with an exemplary embodiment of the present invention.

FIG. 5B is a cross-section illustrating the formation of the FinFET structure of FIG. 5A in accordance with an exemplary embodiment of the present invention.

FIG. 6A is a cross-section illustrating the formation of a surrounding oxide layer on the FinFET structure of FIG. 5B in accordance with an exemplary embodiment of the present invention.

FIG. 6B schematically illustrates a top view of a planarized structure in accordance with an exemplary embodiment of the present invention.

FIG. 6C is a cross-section illustrating the formation of the planarized structure of FIG. 6B in accordance with an exemplary embodiment of the present invention.

FIG. 7A is a cross-section illustrating a further stage in the formation of the FinFET structure in accordance with an exemplary embodiment of the present invention.

FIG. 7B is another cross-section illustrating a further stage in the formation of the FinFET structure in accordance with an exemplary embodiment of the present invention.

FIG. 8A schematically illustrates the top view of the FinFET structure in accordance with an exemplary embodiment of the present invention.

FIGS. 8B and 8C are cross-sections further illustrating the formation of the FinFET structure in accordance with an exemplary embodiment of the present invention.

FIGS.
9A, 9B, and 9C are cross-sections illustrating the formation and polishing of a semiconductor device in accordance with another implementation of the present invention.

BEST MODE FOR CARRYING OUT THE INVENTION

The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and their equivalents.

Implementations consistent with the present invention provide a method of forming a FinFET device that may include a metal gate formed using a damascene process. After removing a dummy gate, but before forming the metal gate, the thickness of a silicon fin may be reduced.

FIG. 1 illustrates the cross-section of a semiconductor device 100 formed in accordance with an embodiment of the present invention. Referring to FIG. 1, semiconductor device 100 may include a silicon on insulator (SOI) structure that includes a silicon substrate 110, a buried oxide layer 120 and a silicon layer 130 formed on the buried oxide layer 120. Buried oxide layer 120 and silicon layer 130 may be formed on substrate 110 in a conventional manner.

In an exemplary implementation, buried oxide layer 120 may include a silicon oxide and may have a thickness ranging from about 1000 Å to about 3000 Å. Silicon layer 130 may include monocrystalline or polycrystalline silicon having a thickness ranging from about 300 Å to about 1500 Å. Silicon layer 130 is used to form a fin structure for a double-gate transistor device, as described in more detail below.

In alternative implementations consistent with the present invention, substrate 110 and layer 130 may include other semiconducting materials, such as germanium, or combinations of semiconducting materials, such as silicon-germanium.
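The exemplary starting-stack dimensions quoted in this description (buried oxide 120, silicon layer 130, and the dielectric cap 140 introduced below) can be collected into a small sketch; the data-structure and function names are illustrative, not from the document:

```python
# Hypothetical sketch: the exemplary SOI starting stack as a list of
# (layer, min, max) thicknesses in angstroms, using the ranges given
# in this description.
SOI_STACK = [
    ("buried_oxide_120", 1000, 3000),
    ("silicon_130", 300, 1500),
    ("dielectric_cap_140", 150, 700),
]

def total_stack_range(stack):
    """Return (min, max) total thickness of the stack in angstroms."""
    return (sum(lo for _, lo, _ in stack),
            sum(hi for _, _, hi in stack))

# The full stack above the substrate spans 1450-5200 A for these ranges.
```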
Buried oxide layer 120 may also include other dielectric materials.

A top dielectric layer 140, such as a silicon nitride layer or a silicon oxide layer (e.g., SiO2), may be formed over silicon layer 130 to act as a protective cap during subsequent etching processes. In an exemplary implementation, dielectric layer 140 may be formed to a thickness ranging from about 150 Å to about 700 Å. Next, a photoresist material may be deposited and patterned to form a photoresist mask 150 for subsequent processing. The photoresist may be deposited and patterned in any conventional manner.

Semiconductor device 100 may then be etched. In an exemplary implementation, dielectric layer 140 and silicon layer 130 may be etched in a conventional manner, with the etching terminating on buried oxide layer 120 to form a fin. Photoresist mask 150 may then be removed. After the formation of the fin, source and drain regions may be formed (e.g., by deposition or epitaxial growth of a semiconducting material) adjacent the respective ends of the fin. For example, in an exemplary embodiment, a layer of silicon, germanium or a combination of silicon and germanium may be deposited, patterned and etched in a conventional manner to form the source and drain regions. Alternatively, the source and drain regions may be formed in the same photolithography process that forms the fin.

FIG. 2A schematically illustrates the top view of a fin structure on semiconductor device 100 formed in such a manner. Source region 220 and drain region 230 may be formed adjacent the ends of fin 210 on buried oxide layer 120, according to an exemplary embodiment of the present invention.

FIG. 2B is a cross-section along line A-A' in FIG. 2A illustrating the formation of fin structure 210 in accordance with an exemplary embodiment of the present invention. As described above, dielectric layer 140 and silicon layer 130 may be etched to form structure 210. Structure 210 may include a silicon fin 130 and dielectric cap 140.

FIG.
3 is a cross-section illustrating the formation of a gate oxide and gate material on fin structure 210 in accordance with an exemplary embodiment of the present invention. A relatively thin gate oxide may be formed on exposed side surfaces of fin 130, as illustrated in FIG. 3. For example, a gate oxide 310 may be thermally grown on fin 130. The gate oxide 310 may be grown to a thickness of about 50 Å to about 150 Å and may be formed on the side surfaces of the silicon fin 130.

A gate material layer 320 may be deposited over semiconductor device 100 after formation of the gate oxide 310. In an exemplary implementation, the gate material layer 320 may include polysilicon deposited using conventional chemical vapor deposition (CVD) or other well known techniques. Alternatively, other semiconducting materials, such as germanium or combinations of silicon and germanium, or various metals may be used as the gate material in layer 320.

FIG. 4 is a cross-section illustrating the planarizing of the gate material 320 in accordance with an exemplary embodiment of the present invention. Planarizing the gate material 320 may remove any non-planar protrusions in the material, such as that shown above the fin structure 210 in FIG. 3. Returning to FIG. 4, chemical-mechanical polishing (CMP) or other conventional techniques may be performed so that the upper surface of gate material 320 is substantially planar. As shown in FIG. 4, the planar gate material 320 may extend about 200 Å to about 700 Å above the dielectric cap 140. A thickness of the gate material 320 in the areas adjacent fin structure 210 after planarizing may range from about 700 Å to about 2000 Å.

FIG. 5A schematically illustrates the top view of semiconductor device 100 at one stage in processing in accordance with an exemplary embodiment of the present invention.
As illustrated, a gate may be patterned and etched in gate material 320 to form gate structure 510 that extends across a channel region of the fin structure 210.

FIG. 5B is a cross-section taken along line B-B' in FIG. 5A and illustrates the formation of semiconductor device 100 of FIG. 5A in accordance with an exemplary embodiment of the present invention. Gate structure 510 may be defined in the gate material layer 320 by lithography (e.g., photolithography). A bottom antireflective coating (BARC) layer (not shown) may be deposited on the planar gate material layer 320 to facilitate etching of gate material layer 320. As will be understood by those skilled in the semiconductor art, photoresist (and possibly a top antireflective (TAR) coating) may be deposited on the BARC layer and patterned in the shape of gate structure 510.

Gate material layer 320 may then be selectively etched to form the gate structure 510 out of the gate material layer 320 on device 100. The planar gate material layer 320 may provide at least a planar bottom surface for the BARC layer (not shown), and may tend to flatten the top surface of the BARC layer. The BARC layer may have a thickness ranging from about 100 Å to about 500 Å. Because of the planar gate material layer 320, the photoresist over the BARC layer may be patterned more precisely. As a result, the critical dimension (CD) of gate structure 510 (i.e., its smallest feature size, such as the gate width) may be formed with dimensions as small as about 20 nm to about 50 nm.

Gate structure 510 may include a gate portion proximate to the sides of the fin structure 210 and a larger electrode portion spaced apart from the fin structure 210. The electrode portion of gate structure 510 may provide an accessible electrical contact for biasing or otherwise controlling the gate portion.

As may be seen in FIG. 5B, dielectric cap 140 located outside the perimeter of the gate structure 510 may be removed.
In other words, the selective etching of gate material layer 320 may remove all material beyond the gate structure 510, down to the silicon fin 130 of fin structure 210. Further, it should be noted that the gate oxide 310 is still present on silicon fin 130, but is not illustrated in FIG. 5B because the line B-B' in FIG. 5A extends along the silicon fin 130 of fin structure 210.

The source/drain regions 220 and 230 may then be doped. For example, n-type or p-type impurities may be implanted in source/drain regions 220 and 230. The particular implantation dosages and energies may be selected based on the particular end device requirements. One of ordinary skill in this art would be able to optimize the source/drain implantation process based on the circuit requirements, and such acts are not disclosed herein in order not to unduly obscure the thrust of the present invention. In addition, sidewall spacers (not shown) may optionally be formed prior to the source/drain ion implantation to control the location of the source/drain junctions based on the particular circuit requirements. Activation annealing may then be performed to activate the source/drain regions 220 and 230.

FIG. 6A is a cross-section illustrating the formation of a surrounding oxide layer 610 on semiconductor device 100 of FIG. 5B in accordance with an exemplary embodiment of the present invention. As illustrated, surrounding oxide layer 610 may be deposited over fin structure 210 (including silicon fin 130) and adjacent gate structure 510. In one implementation consistent with the principles of the invention, surrounding oxide layer 610 may include a protective compound such as tetraethyl orthosilicate (TEOS), although any other dielectric material may be used.

FIG. 6B schematically illustrates a top view of a planarized semiconductor device 100 in accordance with an exemplary embodiment of the present invention.
As shown, surrounding oxide layer 610 may be removed from over gate structure 510, for example, by a polishing process. Surrounding oxide layer 610, however, may still enclose the perimeter of gate structure 510. Although not illustrated in FIG. 6B, surrounding oxide layer 610 also may extend over source/drain regions 220 and 230 in some implementations.

FIG. 6C is a cross-section along line B-B' in FIG. 6B illustrating the formation of planarized semiconductor device 100 in accordance with an exemplary embodiment of the present invention. As shown, surrounding oxide layer 610 may be polished back (e.g., by CMP) to expose gate structure 510 and to be coplanar with the top of gate structure 510. As illustrated in FIG. 6C, surrounding oxide layer 610 may extend above the entire silicon fin 130 except for the portion of silicon fin 130 that is covered by the dielectric cap 140 and gate structure 510.

FIG. 7A is a cross-section along line B-B' in FIG. 6B illustrating a further stage in the formation of the semiconductor device 100 in accordance with an exemplary embodiment of the present invention. As shown, gate structure 510 (e.g., polysilicon) may be removed by, for example, selective etching. Because gate structure 510 is intended to be removed during processing, it may be referred to as a "dummy gate." Dielectric cap 140 under the gate structure 510 may protect the top of silicon fin 130 from being etched away during the removal of gate structure 510. Oxide layer 610 may act as a mask to protect other portions of semiconductor device 100 during the etching.

FIG. 7B is a cross-section along line A-A' in FIG. 6B illustrating a further stage in the formation of semiconductor device 100 in accordance with an exemplary embodiment of the present invention. Concurrent with (or after) removal of gate structure 510, gate oxide 310 on the sides of the silicon fin 130 may also be completely removed.
The width of silicon fin 130 may then be selectively thinned in the channel region (e.g., relative to its original width from lithographic formation, which is illustrated by the width of dielectric cap 140), as shown in FIG. 7B. Such selective thinning may be accomplished, for example, by wet etching, and may decrease the width of silicon fin 130 in the channel region (i.e., formerly under gate structure 510 before its removal). Portions of silicon fin 130 not in the channel region are covered and protected by surrounding oxide layer 610 during such thinning.

In some implementations, it may be desirable for the width of silicon fin 130 to be less than the length of the gate. As an example, for good short channel control, it may be desirable for the width of silicon fin 130 to be less than half of the gate's length (i.e., < gate length/2). Onerous demands may be placed on lithography process parameters (i.e., greatly increasing processing difficulty) if such a small width of silicon fin 130 were to be achieved solely by lithography (e.g., in FIG. 2A). If silicon fin 130 is initially lithographically defined to be the same size as or larger than gate structure 510, and is then thinned after removing the "dummy gate" 510 in the above damascene gate process, however, a silicon fin 130 that is significantly smaller than the gate may be achieved. Such local thinning of silicon fin 130, as illustrated in FIG. 7B, may achieve a fin of a desired width more easily than by lithography alone (e.g., FIG. 2A).

Because the thinning of silicon fin 130 may be performed by wet etching, the sidewall surfaces of thinned silicon fin 130 may be smoother than by lithography alone. Such smoother sidewall surfaces of thinned silicon fin 130 may improve the carrier mobility of the vertically-oriented channels of semiconductor device 100. The widths of silicon fin 130 before and after thinning may depend on the length of dummy gate 510 in the channel region.
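The short-channel guideline above (fin width < gate length / 2) and the damascene thinning step can be sketched with purely illustrative numbers; nothing below is from the patent beyond the stated guideline:

```python
# Illustrative sketch of the guideline stated in the text: for good
# short-channel control, fin width < gate length / 2. Numbers in nm
# are hypothetical examples, not values from the patent.
def meets_short_channel_guideline(fin_width_nm, gate_length_nm):
    return fin_width_nm < gate_length_nm / 2

def thinned_width(as_patterned_nm, reduction_nm):
    """Fin width left after wet-etch thinning in the gate recess."""
    return as_patterned_nm - reduction_nm

# An as-patterned 70 nm fin fails the guideline for a 50 nm gate
# (70 is not < 25), but thinning it by 50 nm in the recess leaves a
# 20 nm fin, which passes (20 < 25) -- the motivation for thinning
# after dummy-gate removal rather than by lithography alone.
assert not meets_short_channel_guideline(70, 50)
assert meets_short_channel_guideline(thinned_width(70, 50), 50)
```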
As one example, however, the width of silicon fin 130 may be in a range of about 40-100 nm before thinning and may be in a range of about 10-50 nm after thinning. In another implementation consistent with the principles of the invention, the thinning may reduce the total width of silicon fin 130 by about 30 nm to about 80 nm.

As shown in FIGS. 7A and 7B, at least some of dielectric cap 140 may remain after removing gate structure 510 and thinning silicon fin 130. In one implementation consistent with the principles of the invention, dielectric cap 140 may be left in place to insulate the top of thinned silicon fin 130 from subsequently-deposited gate material (e.g., a metal). In another implementation consistent with the principles of the invention (described further below), dielectric cap 140 may be removed (e.g., by etching) so that thinned silicon fin 130 in the channel region of semiconductor device 100 is exposed for subsequent processing.

FIG. 8A schematically illustrates the top view of semiconductor device 100 in accordance with an exemplary embodiment of the present invention. The dotted lines in FIG. 8A illustrate the reduced width of thinned silicon fin 130 in the channel region of fin structure 210. FIG. 8B is a cross-section along line A-A' in FIG. 8A further illustrating the formation of the semiconductor device 100. FIG. 8C is a cross-section along line B-B' in FIG. 8A further illustrating the formation of the semiconductor device 100.

A high-k dielectric material 810, such as HfO2 or HfSiO, may be formed on fin 130 in the channel region, as illustrated in FIG. 8B. Such a high-k dielectric material 810 may have a dielectric constant k higher than about 3.9. In another implementation consistent with the principles of the invention, dielectric material 810 may be an oxide (e.g., SiO2) that is thermally grown on the side surfaces of the thinned silicon fin 130 (and the top surface if dielectric cap 140 has been removed).
The dielectric constant k of such SiO2 material may be about 3.9. In either case, the dielectric material 810 may serve as the gate dielectric layer for semiconductor device 100 in the implementation illustrated in FIGS. 8A and 8B.

Next, a metal such as TaN or TiN may be deposited into the gate-shaped space (which may be referred to as a "gate recess") within surrounding oxide layer 610 that was left by the removal of gate structure 510 (see FIGS. 6B and 7A). This metal may form gate 820, and may be polished (e.g., by CMP) to obtain a relatively planar top surface, as shown in FIG. 8C. The surrounding oxide layer 610 around gate 820 may then be removed. FIG. 8A illustrates FinFET device 100 after removal of surrounding oxide layer 610.

Thus, in accordance with the present invention, a FinFET device 100 may be formed with metal gate 820 using a damascene process after thinning silicon fin 130 in a gate recess. The gate recess may be formed by removing dummy gate 510. Advantageously, the resulting structure exhibits good short channel behavior. The metal gate also reduces gate resistance and eliminates poly depletion problems associated with polysilicon gates. In addition, the present invention provides increased flexibility and can be easily integrated into conventional processing.

OTHER IMPLEMENTATION

In some implementations, it may be desirable to achieve automatic stopping at a polysilicon gate after dielectric CMP. FIG. 9A is a cross-section illustrating a FinFET 900 after gate formation (similar to FIG. 5B). A dielectric layer 940 may be formed on a silicon fin 930, an insulator 920, and a substrate 910. A gate material layer (e.g., polysilicon) and SiON (or BARC material) may be deposited on dielectric layer 940 and patterned to form a gate 950 and a stop cap 960, as shown in FIG. 9A. Stop cap 960 may be formed of SiON or the BARC material.
The SiON or the BARC material may aid in precisely forming the dimensions of gate 950.

A surrounding dielectric layer 970 may be deposited over dielectric layer 940, gate 950, and stop cap 960 as shown in FIG. 9B. Surrounding dielectric layer 970 may include, for example, TEOS. Surrounding dielectric layer 970 may be polished by CMP using a high selectivity slurry, as shown in FIG. 9C. The presence of stop cap 960 facilitates stopping of such polishing when stop cap 960 is reached. In this manner, stop cap 960 may function as an automatic stop layer during CMP of layer 970. FinFET 900 may continue to be processed in a similar manner as semiconductor device 100. For example, gate 950 may be removed, and a metal gate may be deposited in its place.

In the previous descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present invention. However, the present invention can be practiced without resorting to the specific details set forth herein. In other instances, well-known processing structures have not been described in detail, in order not to unnecessarily obscure the thrust of the present invention.

The dielectric and conductive layers used in manufacturing a semiconductor device in accordance with the present invention can be deposited by conventional deposition techniques. For example, metallization techniques, such as various types of CVD processes, including low pressure CVD (LPCVD) and enhanced CVD (ECVD), can be employed.

The present invention is applicable to the formation of any of various types of semiconductor devices, and hence, details have not been set forth in order to avoid obscuring the thrust of the present invention.
In practicing the present invention, conventional photolithographic and etching techniques are employed and, hence, the details of such techniques have not been set forth herein in detail.

Only the preferred embodiments of the invention and a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the invention is capable of use in various other combinations and environments and is capable of modifications within the scope of the inventive concept as expressed herein.

No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used. The scope of the invention is defined by the claims and their equivalents.
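The gate dielectric comparison earlier in this section (SiO2 with k of about 3.9 versus a high-k material such as HfO2 with k above 3.9) can be illustrated with the standard parallel-plate relation C = k·ε0/t. The sketch below is illustrative only: the HfO2 dielectric constant (~25) and both film thicknesses are assumed values, not taken from this document.

```python
# Hedged sketch: gate capacitance per unit area C = k * eps0 / t for the
# gate dielectrics named in the text. The SiO2 value k ~ 3.9 is from the
# text; the HfO2 value (~25) and the thicknesses are illustrative
# assumptions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m


def capacitance_per_area(k, thickness_nm):
    """Parallel-plate capacitance per unit area, in F/m^2."""
    return k * EPS0 / (thickness_nm * 1e-9)


c_sio2 = capacitance_per_area(3.9, 2.0)   # 2 nm SiO2 (assumed thickness)
c_hfo2 = capacitance_per_area(25.0, 4.0)  # 4 nm HfO2 (assumed k and thickness)

# A physically thicker high-k film can still deliver more capacitance per
# unit area than a thinner SiO2 film, which is the usual motivation for
# replacing SiO2 with a high-k gate dielectric.
assert c_hfo2 > c_sio2
```

The comparison shows why a high-k film can be made physically thicker (reducing gate leakage) while keeping the gate capacitance needed for transistor drive.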
The invention relates to a method of forming a semiconductor construction (10). In one of the steps, a semiconductor substrate is provided which comprises a plurality of trenched isolation regions (12, 14, 16) extending within a monocrystalline semiconductor material (18). The isolation regions are spaced from one another by first regions (20, 22) comprising the monocrystalline semiconductor material. In another step the monocrystalline semiconductor material is patterned into a plurality of pillars (70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90) within the first regions. The patterning comprises the steps of forming a patterned hard mask (40) over the monocrystalline semiconductor material and transferring a pattern from the patterned hard mask into the monocrystalline semiconductor material.
A method of forming a semiconductor construction (10), comprising:
providing a semiconductor substrate, the substrate comprising a plurality of trenched isolation regions (12, 14, 16) extending within a monocrystalline semiconductor material (18), the isolation regions being spaced from one another by first regions (20, 22) comprising the monocrystalline semiconductor material; and
patterning the monocrystalline semiconductor material into a plurality of pillars (70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90) within the first regions;
wherein the patterning comprises:
forming a patterned hard mask (40) over the monocrystalline semiconductor material; and
transferring a pattern from the patterned hard mask into the monocrystalline semiconductor material.

The method of claim 1 wherein the trenched isolation regions (12, 14, 16) have an uppermost surface at a first elevational level, wherein the monocrystalline semiconductor material (18) has an uppermost surface at a second elevational level, and wherein the first elevational level is at or above the second elevational level at initiation of the patterning of the semiconductor material.

The method of claim 1 wherein the trenched isolation regions (12, 14, 16) have an uppermost surface at a first elevational level, wherein the monocrystalline semiconductor material (18) has an uppermost surface at a second elevational level, and wherein the first elevational level is below the second elevational level at initiation of the patterning of the semiconductor material.

The method of claim 3 wherein the monocrystalline semiconductor material (18) is a first semiconductor material, the method further comprising forming a second semiconductor material (30) over the first semiconductor material, and wherein the patterning patterns the second semiconductor material and forms individual of the pillars to comprise a segment of the second semiconductor material over a segment of the first semiconductor material.

The method of claim 4 wherein
the second semiconductor material (30) consists essentially of polycrystalline or amorphous silicon.

The method of claim 4 wherein the second semiconductor material (30) consists essentially of single crystal silicon.

The method of claim 6 wherein the second semiconductor material (30) is epitaxially grown from the first semiconductor material (18).

The method of claim 1 wherein the trenched isolation regions (12, 14, 16) extend along a defined longitudinal direction, wherein the pillars (70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90) form an array having columns along the longitudinal direction and rows along a defined horizontal direction which is substantially orthogonal to the longitudinal direction, and further comprising forming one or more horizontally-extending gatelines (144, 146, 148) extending along pillars (80, 82) that are along a common row as one another.

The method of claim 8 further comprising:
forming sections of vertically-extended semiconductor material (65, 67) between the pillars;
forming source/drain regions (150) within upper regions of the sections (65, 67);
forming source/drain regions (151, 153) in upper regions of the pillars; and
incorporating paired source/drain regions into transistor devices; individual pairs of source/drain regions comprising one source/drain region within a section and the other source/drain region within a pillar, the transistor devices comprising channel regions interconnecting the paired source/drain regions.

The method of claim 9 further comprising forming a DRAM unit cell by:
forming a capacitor (160) in electrical connection with one of the paired source/drain regions of an individual transistor device; and
forming a bitline (162) in electrical connection with the other of the paired source/drain regions of the transistor device.

The method of claim 1 wherein the trenched isolation regions (12, 14, 16) extend along a defined longitudinal direction, wherein the pillars (70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90) form an
array having columns along the longitudinal direction and rows along a defined horizontal direction which is substantially orthogonal to the longitudinal direction, and wherein horizontally adjacent pillars are longitudinally staggered relative to one another.

The method of claim 1 wherein the trenched isolation regions (12, 14, 16) extend along a defined longitudinal direction, wherein the pillars (70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90) form an array having columns along the longitudinal direction and rows along a defined horizontal direction which is substantially orthogonal to the longitudinal direction, and wherein horizontally adjacent pillars are substantially not longitudinally staggered relative to one another.
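The two claims above distinguish pillar arrays in which horizontally adjacent pillars are, or are not, longitudinally staggered relative to one another. A minimal geometric sketch of the two layouts follows; the pitch values and array size are illustrative assumptions, not taken from this document.

```python
# Hedged sketch of the two pillar layouts: a plain rectangular array
# (horizontally adjacent pillars aligned) versus one in which alternate
# columns are shifted longitudinally by half a pitch. Pitch values (in
# arbitrary units) are assumed for illustration.
def pillar_centers(rows, cols, pitch_x, pitch_y, staggered):
    """Return (x, y) pillar centers; odd columns shift by pitch_y/2 if staggered."""
    centers = []
    for c in range(cols):
        y_offset = (pitch_y / 2) if (staggered and c % 2 == 1) else 0.0
        for r in range(rows):
            centers.append((c * pitch_x, r * pitch_y + y_offset))
    return centers


aligned = pillar_centers(3, 4, 100.0, 100.0, staggered=False)
shifted = pillar_centers(3, 4, 100.0, 100.0, staggered=True)

# In the staggered layout, the nearest neighbor in an adjacent column sits
# at a diagonal distance sqrt(pitch_x**2 + (pitch_y / 2)**2) rather than
# pitch_x, which is why staggering can permit tighter horizontal packing.
assert (100.0, 50.0) in shifted and (100.0, 0.0) in aligned
```

The larger diagonal spacing between nearest neighbors in adjacent columns is what allows the horizontal pitch to be reduced in a staggered layout while keeping the same minimum pillar-to-pillar separation.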
TECHNICAL FIELD

The invention pertains to semiconductor constructions and to methods of forming semiconductor constructions. In particular aspects, the invention pertains to methods of forming transistor devices with vertically-extending channel regions, and to constructions comprising such devices.

BACKGROUND OF THE INVENTION

Transistor devices are utilized in numerous semiconductor assemblies. The transistor devices can be utilized in, for example, memory circuitry, such as dynamic random access memory (DRAM) constructions and static random access memory (SRAM) constructions.

Continuing goals of semiconductor device processing are to increase the scale of integration, simplify processing and reduce costs. It is desired to create new methods of forming transistor constructions which progress toward one or more of such continuing goals.

Inventive aspects described herein can be particularly useful for forming transistor devices. However, it is to be understood that although the invention is primarily described relative to such application, the invention can also be utilized in other semiconductor fabrication applications, as will be recognized by persons of ordinary skill in the art.

SUMMARY OF THE INVENTION

According to the present invention there is provided a method of forming a semiconductor construction as defined in claim 1. Embodiments of the invention pertain to methods of forming a semiconductor construction. A semiconductor substrate is provided. The substrate includes a plurality of trenched isolation regions extending within a monocrystalline semiconductor material. The isolation regions are spaced from one another by first regions comprising the monocrystalline semiconductor material. The monocrystalline semiconductor material is patterned into a plurality of pillars within the first regions. In subsequent processing, the pillars can be incorporated into transistor devices.
In such applications, the pillars can comprise vertically-extending channel regions of the transistor devices.

In another method of forming a semiconductor construction, a semiconductor substrate is provided. The substrate comprises rows of trenches extending within a first semiconductor material. The rows are spaced from one another by first regions comprising the first semiconductor material. The trenches are only partially filled with dielectric material, and the dielectric material within the trenches forms spaced rows. A second semiconductor material is formed over the semiconductor substrate. The second semiconductor material extends across the first region between the rows of trenches. The first and second semiconductor materials are patterned into a plurality of pillars. Individual pillars comprise a segment of the second semiconductor material over a segment of the first semiconductor material. The pillars extend along rows, with at least some of the pillar rows being spaced from one another by second regions comprising one or more of the dielectric material rows.

In another method of forming a semiconductor construction, a semiconductor substrate is provided. The substrate includes a plurality of trenches extending within a first semiconductor material. The first semiconductor material has an uppermost surface at a first elevational level. The trenches are spaced from one another by first regions comprising the first semiconductor material. The trenches are filled with a first dielectric material. A level of the first dielectric material is reduced within the trenches to form dielectric material lines. The dielectric material lines have uppermost surfaces at a second elevational level which is below the first elevational level. After the level of the first dielectric material is reduced, a second semiconductor material is formed over the semiconductor substrate.
The second semiconductor material extends over the dielectric material lines, and also extends across the first regions. Openings are formed through the second semiconductor material to the dielectric material lines, and filled with a second dielectric material. The first and second semiconductor materials are then patterned into a plurality of pillars within the first regions. Individual pillars comprise a segment of the second semiconductor material over a segment of the first semiconductor material. The pillars have uppermost surfaces at a third elevational level which is above the first elevational level.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the invention are described below with reference to the following accompanying drawings.

Figs. 1-3 are a diagrammatic, fragmentary top view (Fig. 1) and cross-sectional side views (Figs. 2 and 3) of a semiconductor construction at a preliminary processing stage of an exemplary aspect of the present invention. Figs. 2 and 3 are views along the lines 2-2 and 3-3, respectively, of Fig. 1, Fig. 2 is a view along the line 2-2 of Fig. 3, and Fig. 3 is a view along the line 3-3 of Fig. 2.

Figs. 4-6 are views of the fragments of Figs. 1-3, respectively, shown at a processing stage subsequent to that of Figs. 1-3. Figs. 5 and 6 are views along the lines 5-5 and 6-6 of Fig. 4, Fig. 5 is a view along the line 5-5 of Fig. 6, and Fig. 6 is a view along the line 6-6 of Fig. 5.

Figs. 7-9 are views of the fragments of Figs. 1-3, respectively, shown at a processing stage subsequent to that of Figs. 4-6. Figs. 8 and 9 are views along the lines 8-8 and 9-9, respectively, of Fig. 7, Fig. 8 is a view along the line 8-8 of Fig. 9, and Fig. 9 is a view along the line 9-9 of Fig. 8.

Figs. 10-12 are views of the fragments of Figs. 1-3, respectively, shown at a processing stage subsequent to that of Figs. 7-9. Figs. 11 and 12 are views along the lines 11-11 and 12-12, respectively, Fig. 11 is a view along the line 11-11 of Fig. 12, and Fig. 12 is a view along the line 12-12 of Fig. 11.

Figs. 13-15 are views of the fragments of Figs. 1-3, respectively, shown at a processing stage subsequent to that of Figs. 10-12. Figs. 14 and 15 are views along the lines 14-14 and 15-15, respectively, of Fig. 13, Fig. 14 is a view along the line 14-14 of Fig. 15, and Fig. 15 is a view along the line 15-15 of Fig. 14.

Figs. 16-18 are views of the fragments of Figs. 1-3, respectively, shown at a processing stage subsequent to that of Figs. 13-15. Figs. 17 and 18 are views along the lines 17-17 and 18-18 of Fig. 16, respectively, Fig. 17 is a view along the line 17-17 of Fig. 18, and Fig. 18 is a view along the line 18-18 of Fig. 17.

Figs. 19-21 are views of the fragments of Figs. 1-3, respectively, shown at a processing stage subsequent to that of Figs. 16-18. Figs. 20 and 21 are views along the lines 20-20 and 21-21 of Fig. 19, respectively, Fig. 20 is a view along the line 20-20 of Fig. 21, and Fig. 21 is a view along the line 21-21 of Fig. 20.

Fig. 22 is a view of the fragment of Fig. 1 shown at the processing stage of Fig. 10, in an alternative embodiment relative to that described previously with reference to Fig. 10.

Fig. 23 is a view of the fragment of Fig. 2, and is shown as a preliminary processing stage of another exemplary aspect of the present invention.

Fig. 24 is a view of the Fig. 23 wafer fragment shown at a processing stage subsequent to that of Fig. 23.

Fig. 25 is a view of the Fig. 23 wafer fragment shown at a processing stage subsequent to that of Fig. 24.

Fig. 26 is a view of the Fig. 2 wafer fragment shown at a processing stage subsequent to that of Fig. 2 in accordance with yet another aspect of the present invention.

Fig. 27 is a view of the Fig. 26 wafer fragment shown at a processing stage subsequent to that of Fig. 26.

Fig. 28 is a diagrammatic view of a computer illustrating an exemplary application of the present invention.

Fig.
29 is a block diagram showing particular features of the motherboard of the Fig. 28 computer.

Fig. 30 is a high-level block diagram of an electronic system according to an exemplary aspect of the present invention.

Fig. 31 is a simplified block diagram of an exemplary memory device according to an aspect of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The invention pertains to semiconductor constructions comprising vertically-extending pillars, and to methods of forming such constructions. In particular aspects, the pillars can be incorporated into vertical-surrounding-gate field effect transistors. Such transistors can be incorporated into high density memory arrays, such as, for example, high density DRAM and/or SRAM arrays. An exemplary aspect of the invention is described with reference to Figs. 1-21.

Referring initially to Figs. 1-3, a semiconductor construction 10 is illustrated at a preliminary processing stage. Construction 10 comprises a semiconductor substrate which includes a plurality of trenched isolation regions 12, 14 and 16 extending within a monocrystalline semiconductor material 18. To aid in interpretation of the claims that follow, the terms "semiconductive substrate" and "semiconductor substrate" are defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above.

Isolation regions 12, 14 and 16 are spaced from one another by regions 20 and 22 of semiconductor material 18.
Regions 20 and 22 can be referred to as "first regions" in particular aspects of the present invention.

The isolation regions 12, 14 and 16 comprise trenches formed within semiconductor material 18, and comprise dielectric material 24 provided within the trenches. Dielectric material 24 can be any suitable composition or combination of compositions. In particular aspects, material 24 will comprise, consist essentially of, or consist of silicon dioxide provided over a silicon nitride liner. The trenches formed within monocrystalline material 18 can be formed to any suitable depth, and in some aspects the isolation regions will correspond to so-called shallow trench isolation regions.

Semiconductor material 18 can comprise any suitable semiconductor material, or combination of materials. In particular aspects, material 18 will comprise, consist essentially of, or consist of monocrystalline silicon either alone or lightly-doped with background dopant at the processing stage of Figs. 1-3. Construction 10 can, in some aspects, correspond to a fragment of a monocrystalline silicon wafer at the shown processing stage of Figs. 1-3.

Construction 10 has an upper surface 26 at the processing stage of Figs. 1-3. Such upper surface is shown to be substantially coplanar across dielectric material 24 and across semiconductor material 18, and materials 18 and 24 can be considered to have uppermost surfaces at a common elevational level in the shown aspect of Figs. 1-3. The elevational level of dielectric material 24 can be referred to as a first elevational level, and the elevational level of semiconductor material 18 can be referred to as a second elevational level. It is to be understood that the invention encompasses other aspects (not shown) in which surface 26 is not coplanar across the dielectric material 24 and semiconductor material 18 (i.e., in which the first and second elevational levels are not the same as one another).
In such other aspects, dielectric material 24 can extend above the uppermost surface of material 18 or below such uppermost surface.

Referring next to Figs. 4-6, such illustrate construction 10 after dielectric material 24 has been recessed within trenches 12, 14 and 16. In aspects in which dielectric material 24 comprises, consists essentially of, or consists of silicon dioxide, the etch utilized to recess material 24 can be a wet etch. For example, the etch can be a buffered oxide etch, and/or can utilize hydrofluoric acid (in particular aspects the etch will utilize diluted hydrofluoric acid). If semiconductor material 18 consists essentially of monocrystalline silicon and dielectric material 24 consists essentially of silicon dioxide, the etch utilized to recess material 24 is preferably an etch selective for silicon dioxide relative to silicon (i.e., an etch which removes silicon dioxide at a faster rate than silicon, which can include, but is not limited to, an etch which is 100% selective for silicon dioxide relative to silicon). As will become clear in the discussion that follows, the amount by which the dielectric material 24 is recessed determines the height of semiconductor material pillars in some aspects of the invention. In such aspects, the etch can be conducted to recess the dielectric material by from about 500Å to about 1500Å, and can, for example, be conducted to recess the dielectric material by from about 1000Å to about 1500Å.

As was discussed previously, the trenches 12, 14 and 16 can correspond to conventional trenches utilized for shallow trench isolation regions. It is noted, however, that the trenches can also be formed to be deeper than those traditionally utilized for shallow trench isolation regions in order to compensate for the recessing of dielectric material 24 within the isolation trenches.
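The etch selectivity discussed above (silicon dioxide removed faster than silicon) can be expressed as a simple rate ratio. The sketch below is a hedged illustration: the recess targets (500-1500 Å) come from the text, while the 50:1 selectivity figure is an assumed example value.

```python
# Hedged sketch of the selective-recess arithmetic: an etch with
# selectivity S removes S angstroms of SiO2 for every angstrom of exposed
# silicon. The selectivity value below is an illustrative assumption.
def silicon_loss(recess_angstroms, selectivity):
    """Silicon consumed while recessing the oxide by `recess_angstroms`."""
    return recess_angstroms / selectivity


# For a 1000 A oxide recess with an assumed 50:1 selectivity, only about
# 20 A of exposed silicon would be consumed.
loss = silicon_loss(1000, 50)
assert abs(loss - 20.0) < 1e-9
```

The higher the selectivity, the less the semiconductor strips between the trenches are attacked while the oxide is recessed; a perfectly (100%) selective etch, as the text notes, would consume no silicon at all.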
In some aspects, the trenches can extend to a depth greater than about 2000Å.

The recessing of dielectric material 24 reduces the elevational height of the dielectric material (the so-called first elevational level referred to above) relative to the elevational height of semiconductor material 18 (the so-called second elevational level referred to above). Thus, the elevational level of the uppermost surface of semiconductor material 18 is above the elevational level of the uppermost surface of dielectric material 24 at the processing stage of Figs. 4-6. In other words, trenches 12, 14 and 16 are only partially filled with dielectric material 24 at the processing stage of Figs. 4-6. The dielectric material within the trenches forms spaced rows, as can be seen in the top view of Fig. 4. The up-down direction of the Fig. 4 view can be defined as a longitudinal direction, and the side-to-side direction of the Fig. 4 view can be defined as a horizontal direction. Accordingly, the rows of dielectric material are elongated in the defined longitudinal direction. In particular aspects, the rows can be referred to as longitudinally-extending dielectric lines. Such lines are separated from one another by longitudinally-extending strips of semiconductor material 18 (such as, for example, the strips 20 and 22 of Figs. 4 and 5).

Referring next to Figs. 7-9, a semiconductor material 30 is formed over material 18, and a dielectric material 23 is formed within the semiconductor material and directly over trenches 12, 14 and 16. The dielectric material 23 is patterned into lines 25, 27 and 29.

The shown construction can be formed by initially providing semiconductor material 30 over substrate 18 and over trenches 12, 14 and 16. Subsequently, openings can be formed through material 30 to the material 24 within trenches 12, 14 and 16, and the openings can be filled with the dielectric material 23.
In some aspects, the dielectric material 23 will be formed to overfill the openings in material 30, and subsequently excess material will be removed by planarization to form the shown planarized upper surface extending across material 30 and lines 25, 27 and 29. The dielectric material 24 within the trenches is in rows, and the dielectric material 23 raises an elevational level of the dielectric material rows to the height of material 30.

The dielectric material 23 can be referred to as a second dielectric material to distinguish the material from the first dielectric material 24 that was described previously. Material 23 can comprise any suitable dielectric composition or combination of compositions. In some aspects, material 23 can be compositionally the same as material 24, and in other aspects material 23 can be different than material 24. Dielectric material 23 can, for example, comprise, consist essentially of, or consist of doped or undoped silicon dioxide.

Material 30 can comprise any suitable semiconductor material. In particular aspects, material 30 will comprise, consist essentially of, or consist of silicon. The silicon can be in one or more of amorphous, polycrystalline or single crystalline form. For instance, material 30 can comprise, consist essentially of, or consist of single crystal silicon epitaxially grown from exposed surfaces of monocrystalline material 18. Alternatively, material 30 can comprise, consist essentially of, or consist of polycrystalline and/or amorphous silicon deposited over material 18 by, for example, chemical vapor deposition and/or atomic layer deposition. Material 30 can be referred to as a second semiconductor material to distinguish the material from the first semiconductor material 18.

Material 30 can be formed to be of any suitable thickness.
In particular aspects, material 30 can be formed to a thickness of from about 1000Å to about 3000Å, and in some aspects can be formed to a thickness greater than or equal to about 1500Å.

The semiconductor material 30 can be undoped at the processing stage of Figs. 7-9. Alternatively, semiconductor material 30 can be formed to be in situ doped. For instance, in particular applications (discussed in more detail below), material 30 is ultimately patterned into vertically-extending pedestals (i.e., pillars) comprising a source/drain region and/or a channel region of a transistor device. In such aspects, material 30 can be formed to be appropriately doped so that the pillars will have the desired doping therein without additional implants. Alternatively, material 30 can be formed so that additional implants are provided within material 30 after the material is patterned into the vertically-extending pillars.

Material 30 can be utilized for numerous functions in various aspects of the invention. For instance, a purpose of material 30 can be to increase a vertical height of pillars ultimately formed between trenches 12, 14 and 16. Such can be advantageous if, for example, increased channelling is desired in transistors comprising the pillars as vertically-extending channel regions.

Referring next to Figs. 10-12, a patterned material 40 is formed over semiconductor material 30 and dielectric lines 25, 27 and 29. Material 40 can correspond to a so-called hard mask (i.e., to a mask formed of material other than photoresist), and in particular aspects will comprise, consist essentially of, or consist of silicon nitride.

Material 40 can be formed into the desired mask pattern utilizing any suitable method. In a particular aspect, material 40 is silicon nitride and is formed into the desired pattern utilizing the following multi-step method.
Initially, silicon dioxide is formed over material 30, and openings are formed to extend through the silicon dioxide in locations where nitride mask material 40 is ultimately desired. A silicon nitride layer is then formed over the silicon dioxide and within the openings. The silicon nitride is subjected to a blanket etch which removes the silicon nitride from over the silicon dioxide while leaving the silicon nitride within the openings that had been formed through the silicon dioxide. Such blanket etch can comprise, for example, chemical-mechanical polishing. Subsequently, the silicon dioxide is removed with a wet etch selective for the silicon dioxide relative to silicon nitride. The silicon nitride remaining is in the form of the desired patterned hard mask.

An alternative method for forming the silicon nitride in the desired patterned hard mask is to deposit a layer of silicon nitride over material 30, and to then pattern the silicon nitride using photolithographically processed photoresist (i.e., to form a photolithographically patterned photoresist mask over the silicon nitride, transfer a pattern from the photoresist mask to the silicon nitride with an appropriate etch of the silicon nitride, and then remove the photoresist mask).

The shown patterned mask comprises lines 65 and 67, and spaced islands 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62 and 64. The lines 65 and 67 extend substantially orthogonally to a direction of dielectric lines 25, 27 and 29, as can be seen in the top view of Fig. 10. The locations where dielectric lines 25, 27 and 29 are crossed by lines 65 and 67 are diagrammatically shown as locations 69.

The islands 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62 and 64 form an array comprising longitudinally-extending columns (such as the column comprised by islands 42, 50 and 58), and horizontally-extending rows (such as the row comprised by islands 50, 52, 54 and 56).
Although the longitudinally-extending lines of islands (such as the longitudinally-extending line of islands 42, 50 and 58) are described as "columns", and contrasted with the horizontally-extending "rows" of islands, it is to be understood that the term "row" can be utilized outside of the concept of an array to refer to any line in any orientation. Thus, the longitudinally-extending lines can also be considered "rows" in some aspects of the invention. For instance, the aspect of Figs. 10-12 can be considered to comprise longitudinally-extending rows of islands (such as the longitudinally-extending row of islands 42, 50 and 58), and longitudinally-extending rows 25, 27, and 29 of dielectric material within semiconductor material 30.

In the shown aspect of the invention, horizontally adjacent pillars (such as the pillars 50 and 52) are not longitudinally staggered relative to one another. In contrast, Fig. 22 shows construction 10 at the processing stage of Fig. 10, but in accordance with an aspect in which horizontally-adjacent islands of masking material 40 are longitudinally staggered relative to one another. The aspect of Fig. 22 can be preferred relative to that of Fig. 10 in that the aspect of Fig. 22 may allow tighter packing of structures formed utilizing patterned material 40 than can be achieved with the aspect of Fig. 10. For instance, as will be discussed below, masking material 40 can be utilized for forming pillars from one or both of materials 30 and 18. The aspect of Fig. 22 may allow the pillars to be more tightly packed than the aspect of Fig. 10. The dielectric lines 25, 27, 29 are not shown in Fig. 22, nor are the lines 65 and 67, in order to simplify the drawing, but it is to be understood that structures analogous to lines 25, 27, 29, 65 and 67 would typically be included in Fig. 22 aspects of the invention.

Referring to Figs.
13-15, a pattern from masking material 40 is transferred into semiconductor materials 18 and 30 to form pillars 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90 and 92 within the regions between the trenched isolation regions (such as, for example, the regions 20 and 22).

The transfer of the pattern of mask 40 into the underlying materials forms lines from the materials underlying lines 65 and 67. Thus, regions of dielectric lines 25, 27 and 29 (Figs. 10-12) that are not protected by the masking material 40 are removed, and the only remaining portions of lines 25, 27 and 29 are at locations 69 wherein the lines 25, 27 and 29 are crossed by lines 65 and 67. The portions of dielectric material from lines 25, 27 and 29 at locations 69 segment the materials beneath lines 65 and 67 into sections 91, 93, 95, 97, 99, 101, 103 and 105 of material 30 which are spaced from one another by the portions 69 of dielectric material remaining from lines 25, 27 and 29.

Any suitable etch can be utilized for transferring the pattern from masking material 40 into the underlying materials, including, for example, a reactive ion etch. The etch preferably extends through semiconductor material 30 and lines 25, 27 and 29, and into semiconductor material 18, as shown. Further, the etch preferably terminates when a level of semiconductor material 18 between the pillars is at about the same elevational level as the uppermost surfaces of dielectric material 24 within regions 12, 14 and 16. Such can be accomplished utilizing, for example, a timed etch and/or an end point determination of one or more components from material 24.

The pillars 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90 and 92 have the same array pattern as that discussed previously for the islands of hard masking material 40 in Figs. 10 and 22.
Accordingly, the pillars can be formed so that horizontally adjacent pillars are not longitudinally staggered relative to one another, or can be formed so that horizontally adjacent pillars are longitudinally staggered relative to one another.

It is noted that in the shown embodiment each of the longitudinally-extending rows of pillars is spaced from a horizontally adjacent row of pillars by a single row of dielectric material (for instance, the longitudinally-extending row of pillars 70, 78 and 86 is spaced from the adjacent longitudinally-extending row of pillars 72, 80 and 88 by a gap which includes the single row 12 of dielectric material). It is to be understood, however, that the invention encompasses other aspects (not shown) in which adjacent rows of pillars are spaced from one another by two or more dielectric material rows.

Each of the shown pillars comprises a segment of the second semiconductor material 30 over a segment of the first semiconductor material 18. The pillars can be considered to comprise mesas of the monocrystalline material 18 extending upwardly from longitudinally-extending strips of the material 18 between isolation regions 12, 14 and 16. The mesas define bases of the pillars. In the shown aspect of the invention, the lowermost portion of the pillar bases is at about the same elevational level as the uppermost portion of the dielectric material 24 within isolation regions 12, 14 and 16. In contrast, each of the pillars has an uppermost portion of semiconductor material defined by the uppermost portion of material 30, with such uppermost portion being above the uppermost elevational level of material 18 at the processing stage of Fig. 5 (i.e., being above the so-called second elevational level of the Fig. 5 construction).
Thus, the uppermost semiconductor material 30 of the pillars defines an uppermost elevational level of the pillars that can be referred to as a third elevational level, which is above the levels discussed with reference to Fig. 5 for the elevational levels of dielectric material 24 and semiconductor material 18.

Although the shown patterning utilized to form the pillars extends through second semiconductor material 30 and into first semiconductor material 18, it is to be understood that the invention encompasses other aspects (not shown) in which the pillars only extend into second semiconductor material 30, and do not extend to first semiconductor material 18.

Referring next to Figs. 16-18, gate dielectric 140 is formed along sidewalls of the pillars 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90 and 92; along sidewalls of the sections 91, 93, 95, 97, 99, 101, 103 and 105 of material 30; and also along exposed regions of the semiconductor material 18 between the pillars. The gate dielectric material can comprise, consist essentially of, or consist of, for example, silicon dioxide. The gate dielectric material can be formed by oxidizing exposed surfaces of semiconductor materials 18 and 30, and/or by deposition of desired dielectric materials. The gate dielectric material is not shown being formed along the various dielectric materials of construction 10, but it is to be understood that the invention encompasses other embodiments in which the dielectric material of the gate dielectric is formed along the various dielectric materials of construction 10 as well as along materials 18 and 30.

A gateline material 142 is shown formed around the pillars. The gateline material is in horizontally-extending strips 144, 146 and 148 which are separated from one another by lines 65 and 67. The strips 144, 146 and 148 of the gateline material form wordlines extending along rows of the pillars, and are separated from the pillars by the dielectric material 140.
The gateline materials can entirely surround the pillars, as shown, or in other aspects (not shown) may only partially surround at least some of the pillars.

The patterned gateline strips 144, 146 and 148 can be formed utilizing any suitable methodology. In particular aspects, the strips will be formed by depositing the gateline material across an entirety of construction 10 and subsequently utilizing planarization (such as, for example, chemical-mechanical polishing) to remove the gateline material from over masking material 40.

Gateline material 142 can comprise any suitable composition, or combination of compositions. In particular aspects, material 142 will comprise, consist essentially of, or consist of conductively-doped silicon. In some aspects, material 142 can comprise metal and/or metal compounds, either alone, or in combination with conductively-doped silicon.

Gateline material 142 can be formed to any suitable thickness, but preferably is formed to a thickness which only partially overlaps the elevational thickness of semiconductor material 30. In exemplary applications, gateline material 142 will have a thickness of at least about 500 Å, and in some applications can have a thickness of greater than 1000 Å.

The cross-sections of Figs. 17 and 18 show that source/drain regions 150, 151 and 153 have been formed within material 30. The source/drain regions within the pillars are labeled as 150, and can be referred to as first source/drain regions. The source/drain regions within sections 97 and 95 (Fig. 18) are labeled as 151 and 153, respectively, and can be referred to as second source/drain regions to distinguish them from the source/drain regions in the pillars.
The source/drain regions can be formed with any suitable implant of conductivity-enhancing dopant, and are formed to elevationally overlap the gateline material 142.

The source/drain regions 150 at the top of the pillars are gatedly connected with the source/drain regions in sections 91, 93, 95, 97, 99, 101, 103 and 105 (such as the source/drain regions 151 and 153 of Fig. 18) through channel regions. Such channel regions extend within the pillars and sections, and also extend within portions of substrate 18 interconnecting the pillars and sections. The channel regions can be doped at any suitable processing stage, and can, for example, be in situ doped during formation of one or both of semiconductor materials 18 and 30. The gateline 142, source/drain regions 150, and source/drain regions within the sections 91, 93, 95, 97, 99, 101, 103 and 105 (for example, the source/drain regions 151 and 153) together form a plurality of field effect transistor constructions.

Referring next to Figs. 19-21, masking material 40 (Figs. 16-18) is removed and subsequently an insulative material 154 is formed over the upper surface of the construction. Insulative material 154 can comprise any suitable composition or combination of compositions, and in some aspects will comprise, consist essentially of, or consist of one or more of silicon nitride, silicon dioxide, and borophosphosilicate glass (BPSG).

The material 154 has openings 156 extending therethrough to expose source/drain regions 150, and can have other openings (not shown) extending to the source/drain regions in the sections between the pillars (the source/drain regions 151 and 153, for example). The source/drain regions 150 can be electrically connected with capacitor constructions 160 (diagrammatically illustrated by boxes in Figs. 20 and 21) through interconnects (not shown) extending within the openings 156.
Similarly, the source/drain regions within the sections between pillars (the source/drain regions 151 and 153, for example) can be connected to bitlines 162 through appropriate interconnects. The transistor devices comprising channels within the pillars can thus be incorporated into DRAM constructions. The constructions can be formed in numerous levels of integration, and in some aspects can be incorporated into, for example, 4F², 6F², or 8F² DRAM cell arrays. In other aspects of the invention (not shown), the transistor constructions of Figs. 19-21 can be incorporated into other types of memory devices besides, or in addition to, DRAM devices. For instance, the transistor constructions can be incorporated into SRAM devices.

Another aspect of the invention is described with reference to Figs. 23-25. In referring to such aspect, similar numbering will be used as was used above in describing Figs. 1-21, where appropriate.

Referring initially to Fig. 23, a construction 10 is illustrated at the processing stage of Fig. 3. Construction 10 thus comprises the crystalline semiconductor material 18 described previously, and further comprises the isolation regions 12, 14 and 16 extending within semiconductor material 18. The construction also comprises the regions 20 and 22 extending between the isolation regions, and is shown comprising a planarized upper surface 26 extending across the isolation regions and also across an uppermost surface of semiconductor material 18. It is noted that upper surface 26 can be non-planar in other aspects of the invention (not shown), and specifically that the surfaces of regions 12, 14 and 16 can be above the surface of material 18 in such other aspects.

Referring next to Fig. 24, a semiconductor material 200 is epitaxially grown directly over an uppermost surface of monocrystalline material 18. Epitaxially-grown material 200 can, in some aspects, comprise, consist essentially of, or consist of single crystal silicon.
The crystalline material 200 comprises defect regions 202 radiating from surfaces of dielectric material 24. The defect regions can be caused by, for example, the epitaxial growth occurring from surfaces of monocrystalline material 18 but not from surfaces of dielectric material 24.

The thickness of material 200 and conditions utilized for growing the material can be adjusted such that the defect regions 202 extend only partially across the regions between dielectric regions 12, 14 and 16 (such as, for example, the regions 20 and 22 described previously). Accordingly, there will be defect-free regions of semiconductor material 200 between dielectric regions 12, 14 and 16. In some aspects, if material 200 is grown to a thickness such that the defect-free regions are undesirably narrow, the material 200 can be planarized back to reduce the lateral thickness of the defective regions and thus increase the lateral width of the defect-free regions. In exemplary aspects, material 200 is grown to a thickness of from about 100 nanometers to about 300 nanometers, and regions 12, 14 and 16 are spaced from one another by about 100 nanometers.

Patterned masking material 40 is formed over the defect-free regions, and subsequently a pattern is transferred from material 40 to underlying semiconductor material 200 to form pillars 204, 206, 208 and 210 (shown in Fig. 25) comprising defect-free regions of material 200. Such pillars can then be utilized in the processing discussed above relative to Figs. 13-21 to form transistor devices having vertically-extending channel regions.

A notable difference between the processing of Figs. 23-25 and that of Figs. 4-9 is that the second semiconductor material (30 of Figs. 7-9 and 200 of Fig. 24) is formed in the processing of Figs. 4-9 while an uppermost level of dielectric material 24 is below the uppermost level of semiconductor material 18, and is formed in the processing of Fig.
24 while the uppermost level of dielectric material 24 is coplanar with the uppermost level of material 18.

Another aspect of the invention is described with reference to Figs. 26 and 27. In referring to Figs. 26 and 27, similar numbering will be used as was used above in describing Figs. 1-21, where appropriate.

Referring initially to Fig. 26, a construction 220 is illustrated at a processing stage subsequent to that of Fig. 2. Construction 220 is similar to the construction 10 described previously, but the isolation regions 12, 14 and 16 of the Fig. 26 construction are much deeper than those of the Fig. 2 construction.

Semiconductor material 18 and dielectric material 24 are shown sharing a coplanar uppermost surface 26. It is to be understood, however, that material 24 can, in some aspects of the invention (not shown), have an upper surface that is above that of semiconductor material 18 at the processing stage of Fig. 26.

Patterned masking material 40 is formed over regions of semiconductor material 18 between regions 12, 14 and 16.

Referring to Fig. 27, pillars are etched into semiconductor material 18 by transferring a pattern from patterned mask 40 into material 18. Such can be accomplished with, for example, a suitable dry etch. The individual pillars are labeled as 222, 224, 226 and 228. The embodiment of Figs. 26 and 27 can be less preferred than other embodiments described previously in this disclosure, in that the pillars can have the shown stringers 230 extending between the pillars and the dielectric material 24 (the stringers can result from a prograde etch or a retrograde etch). In some aspects, such stringers can be removed by appropriate etching. The pillars 222, 224, 226 and 228 can then be subjected to the processing described previously with reference to Figs. 13-21 to incorporate the pillars into transistor devices comprising vertically-extending channel regions.
In some aspects, the dielectric regions 12, 14 and 16 can be left as is, so that the dielectric regions have uppermost surfaces approximately coextensive with the uppermost surfaces of the pillars. In other aspects, the dielectric regions can be subjected to suitable processing to reduce the elevational level of the uppermost surfaces of the dielectric regions to beneath those of the pillars.

The pillars of Fig. 27 can be considered to comprise mesas of a first monocrystalline silicon material 18. In the aspect of Fig. 27, the semiconductor material of the pillars is substantially entirely the monocrystalline semiconductor material 18 of the mesas. In other words, the semiconductor material of the pillars consists essentially of, or consists of, the mesas of monocrystalline semiconductor material. This is in contrast to the aspect of Figs. 1-21, in which the pillars comprise two segments of semiconductor material, with the lowermost segment being the mesa of first semiconductor material and the uppermost segment being a second semiconductor material.

The aspects of the invention described above can have several advantages. For instance, exemplary methodology of the present invention can be incorporated into conventional processes without additional new tooling. Also, exemplary methodology of the present invention can be practiced with or without epitaxial semiconductor growth. Exemplary aspects of the present invention can be low cost and simple to incorporate into semiconductor fabrication, and can reduce, or at least not increase, the number of masking steps relative to conventional processes. Exemplary aspects of the present invention are generally shrinkable for application to future devices with higher levels of integration.

Fig. 28 illustrates generally, by way of example but not by way of limitation, an embodiment of a computer system 400 according to an aspect of the present invention.
Computer system 400 includes a monitor 401 or other communication output device, a keyboard 402 or other communication input device, and a motherboard 404. Motherboard 404 can carry a microprocessor 406 or other data processing unit, and at least one memory device 408. Memory device 408 can comprise various aspects of the invention described above. Memory device 408 can comprise an array of memory cells, and such array can be coupled with addressing circuitry for accessing individual memory cells in the array. Further, the memory cell array can be coupled to a read circuit for reading data from the memory cells. The addressing and read circuitry can be utilized for conveying information between memory device 408 and processor 406. Such is illustrated in the block diagram of the motherboard 404 shown in Fig. 29. In such block diagram, the addressing circuitry is illustrated as 410 and the read circuitry is illustrated as 412. Various components of computer system 400, including processor 406, can comprise one or more of the memory constructions described previously in this disclosure.

Processor device 406 can correspond to a processor module, and associated memory utilized with the module can comprise teachings of the present invention.

Memory device 408 can correspond to a memory module. For example, single in-line memory modules (SIMMs) and dual in-line memory modules (DIMMs) may be used in an implementation which utilizes the teachings of the present invention. The memory device can be incorporated into any of a variety of designs which provide different methods of reading from and writing to memory cells of the device. One such method is the page mode operation. Page mode operations in a DRAM are defined by the method of accessing a row of a memory cell array and randomly accessing different columns of the array.
Data stored at the row and column intersection can be read and output while that column is accessed.

An alternate type of device is the extended data output (EDO) memory, which allows data stored at a memory array address to be available as output after the addressed column has been closed. This memory can increase some communication speeds by allowing shorter access signals without reducing the time in which memory output data is available on a memory bus. Other alternative types of devices include SDRAM, DDR SDRAM, SLDRAM, VRAM and Direct RDRAM, as well as others such as SRAM or Flash memories.

Memory device 408 can comprise memory formed in accordance with one or more aspects of the present invention.

Fig. 30 illustrates a simplified block diagram of a high-level organization of various embodiments of an exemplary electronic system 700 of the present invention. System 700 can correspond to, for example, a computer system, a process control system, or any other system that employs a processor and associated memory. Electronic system 700 has functional elements, including a processor or arithmetic/logic unit (ALU) 702, a control unit 704, a memory device unit 706 and an input/output (I/O) device 708. Generally, electronic system 700 will have a native set of instructions that specify operations to be performed on data by the processor 702 and other interactions between the processor 702, the memory device unit 706 and the I/O devices 708. The control unit 704 coordinates all operations of the processor 702, the memory device 706 and the I/O devices 708 by continuously cycling through a set of operations that cause instructions to be fetched from the memory device 706 and executed. In various embodiments, the memory device 706 includes, but is not limited to, random access memory (RAM) devices, read-only memory (ROM) devices, and peripheral devices such as a floppy disk drive and a compact disk (CD-ROM) drive.
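The page-mode behavior described above (accessing one row and then randomly accessing different columns of that row) can be sketched as a toy software model. The address field widths and cycle counts below are invented for illustration and do not describe any particular device: the point is only that accesses within an already-open row skip the row-activation step.

```python
# Toy model of DRAM page-mode access: activating a row is expensive, but
# subsequent column accesses within the open row (the "page") reuse it.
# All field widths and cycle costs are illustrative assumptions.

ROW_BITS, COL_BITS = 12, 10          # flat address split into row and column fields
ROW_ACTIVATE, COL_ACCESS = 5, 1      # invented relative cycle costs

def split_address(addr):
    """Decompose a flat address into (row, column) fields."""
    return (addr >> COL_BITS) & ((1 << ROW_BITS) - 1), addr & ((1 << COL_BITS) - 1)

def access_cost(addresses):
    """Total cycles for a sequence of reads, reusing the open row when possible."""
    open_row, cycles = None, 0
    for addr in addresses:
        row, _col = split_address(addr)
        if row != open_row:          # row miss: must activate the new row
            cycles += ROW_ACTIVATE
            open_row = row
        cycles += COL_ACCESS         # column access within the open row
    return cycles

same_row = [0x4000 + c for c in range(8)]      # eight columns of a single row
row_hops = [c << COL_BITS for c in range(8)]   # eight accesses to different rows
print(access_cost(same_row))  # 5 + 8*1 = 13 cycles
print(access_cost(row_hops))  # 8*(5+1) = 48 cycles
```

The same sketch also caricatures why EDO and synchronous variants pursue shorter effective access signals: most of the per-access cost in a row-reuse pattern is the column portion.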
One of ordinary skill in the art will understand, upon reading and comprehending this disclosure, that any of the illustrated electrical components are capable of being fabricated to include memory constructions discussed previously in this disclosure.

Fig. 31 is a simplified block diagram of a high-level organization of various embodiments of an exemplary electronic system 800. The system 800 includes a memory device 802 that has an array of memory cells 804, address decoder 806, row access circuitry 808, column access circuitry 810, read/write control circuitry 812 for controlling operations, and input/output circuitry 814. The memory device 802 further includes power circuitry 816, and sensors 820, such as current sensors for determining whether a memory cell is in a low-threshold conducting state or in a high-threshold non-conducting state. The illustrated power circuitry 816 includes power supply circuitry 880, circuitry 882 for providing a reference voltage, circuitry 884 for providing the first wordline with pulses, circuitry 886 for providing the second wordline with pulses, and circuitry 888 for providing the bitline with pulses. The system 800 also includes a processor 822, or memory controller, for memory accessing.

The memory device 802 receives control signals 824 from the processor 822 over wiring or metallization lines. The memory device 802 is used to store data which is accessed via I/O lines. It will be appreciated by those skilled in the art that additional circuitry and control signals can be provided, and that the memory device 802 has been simplified to help focus on the invention.
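The sensing scheme mentioned for Fig. 31 (current sensors distinguishing a low-threshold conducting cell state from a high-threshold non-conducting state) amounts to a threshold comparison against a reference, which can be caricatured in a few lines. The current values and reference threshold below are invented for illustration only.

```python
def read_cell(cell_current_ua, threshold_ua=10.0):
    """Classify a cell as a '1' (low-threshold, conducting) or a '0'
    (high-threshold, non-conducting) by comparing its sensed current
    against a reference. All numeric values are illustrative assumptions."""
    return 1 if cell_current_ua > threshold_ua else 0

sensed = [25.0, 0.3, 18.5, 1.1]   # hypothetical sensed currents in microamps
bits = [read_cell(i) for i in sensed]
print(bits)  # [1, 0, 1, 0]
```

In hardware this comparison is performed by sense circuitry against the reference voltage/current supplied by circuitry such as the illustrated block 882, rather than in software; the sketch only shows the decision rule.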
At least one of the processor 822 or memory device 802 can include a memory construction of the type described previously in this disclosure.

The various illustrated systems of this disclosure are intended to provide a general understanding of various applications for the circuitry and structures of the present invention, and are not intended to serve as a complete description of all the elements and features of an electronic system using memory cells in accordance with aspects of the present invention. One of ordinary skill in the art will understand that the various electronic systems can be fabricated in single-package processing units, or even on a single semiconductor chip, in order to reduce the communication time between the processor and the memory device(s).

Applications for memory cells can include electronic systems for use in memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. Such circuitry can further be a subcomponent of a variety of electronic systems, such as a clock, a television, a cell phone, a personal computer, an automobile, an industrial control system, an aircraft, and others.

Embodiments of the present invention may comprise features of the following clauses:

CLAUSES

1. A method of forming a semiconductor construction, comprising: providing a semiconductor substrate, the substrate comprising a plurality of trenched isolation regions extending within a monocrystalline semiconductor material, the isolation regions being spaced from one another by first regions comprising the monocrystalline semiconductor material; and patterning the monocrystalline semiconductor material into a plurality of pillars within the first regions.

2.
The method of clause 1 wherein the patterning comprises: forming a patterned hard mask over the monocrystalline semiconductor material; and transferring a pattern from the patterned hard mask into the monocrystalline semiconductor material.

3. The method of clause 1 wherein the trenched isolation regions have an uppermost surface at a first elevational level, wherein the monocrystalline semiconductor material has an uppermost surface at a second elevational level, and wherein the first elevational level is at or above the second elevational level at initiation of the patterning of the semiconductor material.

4. The method of clause 1 wherein the trenched isolation regions have an uppermost surface at a first elevational level, wherein the monocrystalline semiconductor material has an uppermost surface at a second elevational level, and wherein the first elevational level is below the second elevational level at initiation of the patterning of the semiconductor material.

5. The method of clause 4 wherein the monocrystalline semiconductor material is a first semiconductor material, the method further comprising forming a second semiconductor material over the first semiconductor material, and wherein the patterning patterns the second semiconductor material and forms individual of the pillars to comprise a segment of the second semiconductor material over a segment of the first semiconductor material.

6. The method of clause 5 wherein the second semiconductor material consists essentially of polycrystalline or amorphous silicon.

7. The method of clause 5 wherein the second semiconductor material consists essentially of single crystal silicon.

8. The method of clause 7 wherein the second semiconductor material is epitaxially grown from the first semiconductor material.

9.
The method of clause 1 wherein the trenched isolation regions extend along a defined longitudinal direction, wherein the pillars form an array having columns along the longitudinal direction and rows along a defined horizontal direction which is substantially orthogonal to the longitudinal direction, and further comprising forming one or more horizontally-extending gatelines extending along pillars that are along a common row as one another.

10. The method of clause 9 further comprising: forming sections of vertically-extended semiconductor material between the pillars; forming source/drain regions within upper regions of the sections; forming source/drain regions in upper regions of the pillars; and incorporating paired source/drain regions into transistor devices; individual pairs of source/drain regions comprising one source/drain region within a section and the other source/drain region within a pillar, the transistor devices comprising channel regions interconnecting the paired source/drain regions.

11. The method of clause 10 further comprising forming a DRAM unit cell by: forming a capacitor in electrical connection with one of the paired source/drain regions of an individual transistor device; and forming a bitline in electrical connection with the other of the paired source/drain regions of the transistor device.

12. The method of clause 1 wherein the trenched isolation regions extend along a defined longitudinal direction, wherein the pillars form an array having columns along the longitudinal direction and rows along a defined horizontal direction which is substantially orthogonal to the longitudinal direction, and wherein horizontally adjacent pillars are longitudinally staggered relative to one another.

13.
The method of clause 1 wherein the trenched isolation regions extend along a defined longitudinal direction, wherein the pillars form an array having columns along the longitudinal direction and rows along a defined horizontal direction which is substantially orthogonal to the longitudinal direction, and wherein horizontally adjacent pillars are substantially not longitudinally staggered relative to one another.

14. A method of forming a semiconductor construction, comprising: providing a semiconductor substrate, the substrate comprising rows of trenches extending within a first semiconductor material, the rows being spaced from one another by first regions comprising the first semiconductor material, the trenches having a first dielectric material therein, the first dielectric material within the trenches forming rows of dielectric material; forming a second semiconductor material over the semiconductor substrate, the second semiconductor material extending over the rows of first dielectric material and also extending across the first regions between the rows of first dielectric material; forming openings extending through the second semiconductor material and to the first dielectric material; filling the openings with a second dielectric material to extend the height of the rows of dielectric material to an upper surface of the second semiconductor material; and patterning the first and second semiconductor materials into a plurality of pillars, the individual pillars comprising a segment of the second semiconductor material over a segment of the first semiconductor material, the pillars extending along rows, at least some of the pillar rows being spaced from one another by second regions comprising one or more of the rows of dielectric material.

15. The method of clause 14 wherein the first and second dielectric materials are compositionally the same as one another.

16.
The method of clause 14 wherein the first and second dielectric materials are compositionally different from one another.

17. The method of clause 14 wherein the first semiconductor material consists essentially of single crystal silicon and the second semiconductor material consists essentially of polycrystalline or amorphous silicon.

18. The method of clause 14 wherein the first and second semiconductor materials consist essentially of single crystal silicon.

19. The method of clause 18 wherein the second semiconductor material is epitaxially grown from the first semiconductor material.

20. The method of clause 14 wherein the first dielectric material comprises silicon dioxide.

21. The method of clause 14 wherein the first dielectric material consists essentially of silicon dioxide.

22. The method of clause 14 wherein the first dielectric material consists of silicon dioxide.

23. The method of clause 14 wherein the first and second dielectric materials comprise silicon dioxide.

24. The method of clause 14 wherein the first and second dielectric materials consist essentially of silicon dioxide.

25. The method of clause 14 wherein the first and second dielectric materials consist of silicon dioxide.

26. The method of clause 14 wherein the patterning comprises: forming a patterned hard mask over the second semiconductor material; and transferring a pattern from the patterned hard mask through the second semiconductor material and into the first semiconductor material.

27. The method of clause 26 wherein the patterned hard mask comprises silicon nitride.

28. The method of clause 26 wherein the patterned hard mask consists essentially of silicon nitride.

29. The method of clause 26 wherein the patterned hard mask consists of silicon nitride.

30.
A method of forming a semiconductor construction, comprising: providing a semiconductor substrate, the substrate comprising a plurality of trenches extending within a first semiconductor material, the first semiconductor material comprising an uppermost surface at a first elevational level, the trenches being spaced from one another by first regions comprising the first semiconductor material; filling the trenches with dielectric material; reducing a level of the dielectric material within the trenches to form dielectric material lines within the trenches, the dielectric material lines having uppermost surfaces at a second elevational level which is below the first elevational level; after reducing the level of the dielectric material, forming a second semiconductor material over the semiconductor substrate, the second semiconductor material extending over the dielectric material lines and also extending across the first regions; and patterning the first and second semiconductor materials into a plurality of pillars within the first regions, the individual pillars comprising a segment of the second semiconductor material over a segment of the first semiconductor material, the pillars having uppermost surfaces at a third elevational level which is above the first elevational level.

31. The method of clause 30 wherein the dielectric material is a first dielectric material, and further comprising, prior to patterning the first and second materials into the pillars: forming openings extending through the second semiconductor material to the first dielectric material; and filling the openings with a second dielectric material.

32.
The method of clause 31 further comprising, during the patterning of the first and second semiconductor materials into the pillars, patterning lines comprising the second dielectric material and the second semiconductor material, the lines extending between the pillars, the lines comprising sections of the second semiconductor material which are separated from one another by regions of the second dielectric material.

33. The method of clause 32 further comprising: forming gateline material between the pillars and the lines; forming first source/drain regions within the pillars; and forming second source/drain regions within the sections of the second semiconductor material within the lines, the first source/drain regions being gatedly connected to the second source/drain regions through the gateline.

34. The method of clause 30 wherein the pillars have bases at about the second elevational level.

35. The method of clause 30 wherein the dielectric material comprises silicon dioxide.

36. The method of clause 30 wherein the dielectric material consists essentially of silicon dioxide.

37. The method of clause 30 wherein the dielectric material consists of silicon dioxide.

38. The method of clause 30 wherein the first semiconductor material consists essentially of single crystal silicon and the second semiconductor material consists essentially of polycrystalline or amorphous silicon.

39. The method of clause 30 wherein the first and second semiconductor materials consist essentially of single crystal silicon.

40. The method of clause 39 wherein the second semiconductor material is epitaxially grown from the first semiconductor material.

41.
A method of forming a semiconductor construction, comprising: providing a semiconductor substrate, the substrate comprising a plurality of trenched isolation regions extending within a monocrystalline first semiconductor material, the isolation regions being spaced from one another by first regions comprising the first semiconductor material; epitaxially growing a second semiconductor material from the first semiconductor material; and patterning the second semiconductor material into a plurality of pillars within the first regions.42. The method of clause 41 wherein the first and second semiconductor materials comprise silicon.43. The method of clause 41 wherein the first and second semiconductor materials consist essentially of silicon.44. The method of clause 41 wherein the trenched isolation regions have an uppermost surface at a first elevational level, wherein the first semiconductor material has an uppermost surface at a second elevational level, and wherein the first elevational level is at or above the second elevational level at initiation of the epitaxially growing of the second semiconductor material.45. The method of clause 41 wherein the trenched isolation regions have an uppermost surface at a first elevational level, wherein the first semiconductor material has an uppermost surface at a second elevational level, and wherein the first elevational level is below the second elevational level at initiation of the epitaxially growing of the second semiconductor material.46. The method of clause 41 wherein the patterning utilized to pattern the second semiconductor material also extends into the first semiconductor material so that the pillars comprise segments of the second semiconductor material over segments of the first semiconductor material.47. 
The method of clause 46 wherein the patterning comprises: forming a patterned hard mask over the second semiconductor material; and transferring a pattern from the patterned hard mask through the second semiconductor material and into the first semiconductor material.48. The method of clause 47 wherein the patterned hard mask comprises silicon nitride.49. The method of clause 41 wherein the patterning utilized to pattern the second semiconductor material does not extend into the first semiconductor material.50. The method of clause 49 wherein the patterning comprises: forming a patterned hard mask over the second semiconductor material; and transferring a pattern from the patterned hard mask to the second semiconductor material.51. The method of clause 50 wherein the patterned hard mask comprises silicon nitride.52. A semiconductor construction, comprising: a semiconductor substrate comprising a monocrystalline semiconductor material; a plurality of isolation regions within the semiconductor material and extending along a defined longitudinal direction, the isolation regions being spaced from one another by longitudinally-extending strips of the monocrystalline semiconductor material; a plurality of lines extending substantially orthogonally to the isolation regions; the lines having dielectric regions over the isolation regions and semiconductor sections between the dielectric regions; an array of pillars extending upwardly from the monocrystalline semiconductor material, the array comprising columns along the defined longitudinal direction and rows along a defined horizontal direction which is substantially orthogonal to the defined longitudinal direction; the columns of the array being between the isolation regions and along the longitudinally-extending strips of the monocrystalline semiconductor material, the pillars comprising mesas of the monocrystalline semiconductor material extending upwardly from the longitudinally-extending strips; a first set of 
source/drain regions at upper regions of the pillars; a second set of source/drain regions within the sections of the lines; a set of channel regions between the first and second sets of source/drain regions; and a plurality of gateline rows extending along the defined horizontal direction; the gateline rows extending along the rows of the array of pillars; the gateline rows, channel regions, and first and second sets of source/drain regions forming a plurality of transistor devices; individual transistor devices comprising a first source/drain region of the first set, a second source/drain region of the second set, a channel region extending from the first source/drain region to the second source/drain region, and a gate within the gateline row and proximate the channel region.53. The construction of clause 52 wherein the pillars consist essentially of the mesas of the monocrystalline semiconductor material.54. The construction of clause 52 wherein the monocrystalline semiconductor material is a first semiconductor material, and wherein at least some of the individual pillars comprise a segment of second semiconductor material over the mesa of the monocrystalline semiconductor material.55. The construction of clause 54 wherein the semiconductor sections are of the second semiconductor material.56. The construction of clause 54 wherein the second semiconductor material is a monocrystalline semiconductor material.57. The construction of clause 54 wherein the second semiconductor material is a polycrystalline or amorphous semiconductor material.58. The construction of clause 52 wherein horizontally adjacent pillars are longitudinally staggered relative to one another.59. The construction of clause 52 wherein horizontally adjacent pillars are substantially not longitudinally staggered relative to one another.60. 
The construction of clause 52 further comprising: a capacitor in electrical connection with the second source/drain region of a transistor device; and a bitline in electrical connection with the first source/drain region of the transistor device.61. An electronic device comprising the construction of clause 60.
A semiconductor structure includes a first substrate portion having a surface and a first active region disposed in the first substrate portion. An insulator region is disposed on the first substrate portion outside of the first active region and extends out from the surface. A second substrate portion is disposed on the insulator region, and a second active region is disposed in the second substrate portion. Thus, by disposing a portion of the substrate on the insulator region, the usable substrate area is dramatically increased.
What is claimed is: 1. A method for forming a semiconductor structure, comprising:forming tapered towers in a semiconductor substrate, each tapered tower projecting outwardly from a neck attached to a first portion of the substrate to a top, the top having a larger cross-sectional area than the neck; forming an insulator layer on the substrate; anisotropically etching the insulator layer; and forming active regions in the substrate on the tops of and between the towers. 2. The method of claim 1, further comprising:growing a thermal oxide on the towers before forming the insulator layer; and wherein the etching includes anisotropically etching the thermal oxide. 3. The method of claim 1, further comprising forming a well region in the semiconductor structure before forming the tapered towers.4. The method of claim 1 wherein forming tapered towers within a semiconductor substrate comprises:forming a layer of photoresist on the substrate; and patterning the layer of photoresist. 5. The method of claim 1 wherein forming tapered towers within a semiconductor substrate comprises etching the substrate.6. The method of claim 1 wherein forming an insulator layer on the substrate comprises forming an insulator layer on sidewall portions of the tapered towers.7. The method of claim 1 wherein forming an insulator layer on the substrate comprises forming a layer on the substrate having a void between adjacent towers.8. The method of claim 1 wherein anisotropically etching the insulator layer comprises etching the insulator layer with an etchant highly selective to the insulator layer.9. The method of claim 1, further comprising forming isolation regions within at least some of the active regions.10. The method of claim 1, further comprising forming silicon-trench isolation regions within at least some of the active regions.11. 
A method for forming a semiconductor structure, comprising:forming a pair of trenches in a semiconductor substrate, each trench having a retrograde profile to form a tapered tower therebetween, the tapered tower having side walls that taper to a first portion of the substrate; forming an insulator layer on the side walls of the tapered tower; etching the insulator layer; and forming active regions in the substrate on the tapered tower and in the first portion of the substrate. 12. The method of claim 11, further comprising:forming a thermal oxide on the side walls of the tower before forming the insulator layer; and wherein the etching includes etching the thermal oxide. 13. The method of claim 11, further comprising forming a well region in the semiconductor structure before forming the pair of trenches.14. The method of claim 11 wherein forming a pair of trenches within a semiconductor substrate comprises:forming a layer of photoresist on the substrate; and patterning the layer of photoresist. 15. The method of claim 11 wherein forming a pair of trenches within a semiconductor substrate comprises etching the substrate.16. The method of claim 11 wherein forming an insulator layer on the side walls of the tapered tower comprises forming a tetraethylorthosilicate layer on the side walls of the tapered tower.17. The method of claim 11 wherein etching the insulator layer comprises anisotropically etching the insulator layer.18. The method of claim 11 wherein etching the insulator layer comprises etching the insulator layer with an etchant highly selective to the insulator layer.19. The method of claim 11, further comprising forming isolation regions within at least some of the active regions.20. The method of claim 11, further comprising forming silicon-trench isolation regions within at least some of the active regions.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a divisional of U.S. patent application Ser. No. 09/291,415, filed Apr. 13, 1999, now U.S. Pat. No. 6,198,158, which is a divisional of application Ser. No. 09/075,391, filed May 8, 1998 and issued Mar. 7, 2000 as U.S. Pat. No. 6,034,417.

TECHNICAL FIELD

The invention relates generally to integrated circuits, and more specifically to a semiconductor structure having an increased ratio of usable to unusable substrate surface area. The usable substrate area is where transistors and other active devices are disposed.

BACKGROUND OF THE INVENTION

As customers continue to push for smaller, higher-performance integrated circuits (ICs), IC manufacturers continue their efforts to squeeze more transistors and other components onto smaller dies. For example, the present trend is toward memory circuits that have greater storage capacities but that are no larger than their predecessors.

One technique for increasing an IC's component density is to reduce the minimum feature size of a process, that is, the minimum allowable width of, e.g., a transistor gate or an interconnection line, and thus reduce the sizes of the components themselves. Although manufacturers have made great strides in this area over the last few years, there are problems, such as degradation of transistor performance at smaller sizes, that they must overcome before the minimum feature size can be further reduced.

Another density-increasing technique is to use silicon-trench isolation (STI) instead of local oxidation of silicon (LOCOS).
But although STI significantly increases the ratio of usable to unusable substrate area as compared to LOCOS, the widths of the STI regions can be no narrower than the minimum feature size, and thus cannot be reduced until the minimum feature size is reduced.

SUMMARY OF THE INVENTION

In one aspect of the invention, a semiconductor structure includes a first substrate portion having a surface, and a first active region disposed in the first substrate portion. An isolation region is disposed on the first substrate portion outside of the first active region and extends out from the surface. A second substrate portion is disposed on the isolation region, and a second active region is disposed in the second substrate portion.

Thus, by disposing portions of the substrate on the isolation regions, a manufacturer can dramatically increase the usable area of the substrate.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an isometric view with portions broken away of a semiconductor structure according to an embodiment of the invention.
FIG. 2 is a cross-sectional view of a semiconductor structure at one point in a process for forming the structure of FIG. 1 according to an embodiment of the invention.
FIG. 3 is a cross-sectional view of the structure of FIG. 2 at a subsequent point in the process.
FIG. 4 is a cross-sectional view of the structure of FIG. 3 at a subsequent point in the process.
FIG. 5 is a cross-sectional view of the structure of FIG. 4 at a subsequent point in the process.
FIG. 6 is a cross-sectional view of the structure of FIG. 5 at a subsequent point in the process.
FIG. 7 is an isometric and cross-sectional view of a portion of a memory array according to an embodiment of the invention.
FIG. 8 is a top plan view of the memory array of FIG. 7 after further processing.
FIG. 9 is a block diagram of one embodiment of a memory circuit that incorporates the memory array of FIGS. 7 and 8.
FIG. 10 is a block diagram of one embodiment of an electronic system that incorporates the memory circuit of FIG. 9.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 is an isometric view with portions broken away of a semiconductor structure 10 according to an embodiment of the invention. As discussed below, the structure 10 has a significantly higher ratio of usable to unusable substrate area than semiconductor structures that use conventional isolation techniques such as STI or LOCOS.

The structure 10 includes a substrate 12, which is formed from a semiconductor material such as silicon or gallium arsenide (GaAs), and which has a first portion 14 and one or more second portions 16. The second substrate portions 16 are disposed at the top of isolation runners 18, which electrically isolate the first substrate portion 14 from the respective second substrate portions 16. Although three second substrate portions 16 and three runners 18 are shown, there can be more or fewer portions 16 and runners 18. The runners 18 include side walls 20, which are formed from a conventional dielectric material such as silicon dioxide. In one embodiment, the runners 18 are substantially parallel to and evenly spaced from one another. In another embodiment, the runners 18 include multiple layers 22, 24, and 26, which may be formed from any conventional materials so long as these layers electrically isolate the first substrate portion 14 from the second substrate portions 16. The runners 18 define trenches 28 having bottoms 30, which are respective surface regions of the first substrate portion 14. Conventional active regions 32, in which, for example, the source/drain regions of transistors are located, are disposed in the trench bottoms 30 (recessed active regions 32) and in the surface regions of the second substrate portions 16 (elevated active regions 32).
Thus, by placing the second substrate portions 16 at the top of the respective runners 18, one uses vertical isolation instead of horizontal isolation, so that, compared to conventional semiconductor structures, the structure 10 has a much greater portion of the substrate 12 surface area in which to form transistors and other components.

In another embodiment of the structure 10, conventional isolation regions 34, such as STI regions, are disposed in the trench bottoms 30 and in the second substrate portions 16 to isolate adjacent active areas 32 from one another. Although these regions 34 reduce the usable portion of the substrate 12 surface area, the structure 10 still has approximately 50% more usable substrate surface area than a conventional semiconductor structure.

In yet another embodiment, the cross-sections of the runners 18 and trenches 28 may have shapes other than rectangular.

FIGS. 2-6 are cross-sectional views taken along line A-A of FIG. 1 at sequential points in a process for forming the structure 10 according to an embodiment of the invention.

Referring to FIG. 2, well regions 48 are formed by conventionally doping the substrate 12. For clarity, only one well region 48 is shown. Next, a layer of photoresist 50 is conventionally formed on the substrate 12 and then conventionally patterned.

Next, referring to FIG. 3, a retrograde profile is conventionally etched into the substrate 12 to form semiconductor towers 52, which have side walls that taper toward the first substrate portion 14 and end in necks 54.

Referring to FIG. 4, the photoresist 50 is removed. Then, a layer 56 of thermal oxide is conventionally grown on the towers 52 and the substrate portion 14; because the oxidation consumes silicon, it reduces the thicknesses of the silicon necks 54 while forming combined silicon-and-oxide necks 55, which are thicker than the necks 54 were before oxidation. This increases both the electrical isolation and the strength of the attachment between the towers 52 and the substrate portion 14.
Alternatively, in embodiments where the necks 54 provide adequate isolation and support before thermal oxidation, the thermal oxidation step can be omitted.

Referring to FIG. 5, a layer 58 of a conventional dielectric is formed on the towers 52 and the substrate portion 14. The layer 58 is preferably formed from tetraethylorthosilicate (TEOS) or any other dielectric that can withstand the process heating cycles, has a low enough dielectric constant, and does not contaminate or stress the substrate 12.

Still referring to FIG. 5, in some embodiments the layer 58 "bread loaves" and forms voids 60 between the towers 52. But as long as portions of the layer 58 are formed on the side walls of the towers 52, the voids 60 typically cause no problems.

Referring to FIG. 6, the layers 56 and 58 are anisotropically etched in a conventional manner. This etching forms the isolation runners 18 from the remaining portions of the layers 56 and 58 and exposes the trench bottoms 30 and the second substrate portions 16. In one embodiment, the etchant used is highly selective to the insulator layer 58 to reduce or eliminate pitting of the substrate portions 16. Next, the active areas 32 are conventionally formed.

Still referring to FIG. 6, in one embodiment, the isolation regions 34 are then formed to give the structure 10 of FIG. 1.

FIG. 7 is an isometric and cross-sectional view of a portion of a memory array 70 according to an embodiment of the invention. The memory array 70 is formed from the structure 10 of FIG. 1 without the isolation regions 34. In one embodiment, the array 70 is a dynamic random access memory (DRAM) array.

To form the array 70, a gate dielectric 72 is conventionally formed on the active regions 32 of FIG. 6. Next, a conductive layer 74 is conventionally formed on the gate dielectric 72 and planarized. Another conductive layer 76 is then conventionally formed on the layer 74, and a capping layer 78 is conventionally formed on the layer 76.
In one embodiment, the layer 74 is polysilicon, the layer 76 is tungsten silicide, and the layer 78 is silicon nitride.

Next, the layers 76 and 78 are conventionally patterned and etched, and the layer 74 is anisotropically etched in a conventional manner to form word lines 80 and, in some embodiments, isolation lines (not shown in FIG. 7) as discussed below in conjunction with FIG. 8. Where the layer 74 is polysilicon, an etchant that is highly selective to polysilicon is used to thoroughly remove the exposed portions of the layer 74 from the trenches 28 without etching through the gate dielectric 72 and pitting the substrate portions 16. In one embodiment, this highly selective etch is followed by a short, isotropic etch to remove any polysilicon stringers from the side walls 20 of the trenches 28. Alternatively, the array 70 can be run through a furnace to oxidize any such polysilicon stringers.

Next, transistor source/drain regions 82 are formed in the active regions 32. The regions 82 and the word lines 80, which act as transistor gates, form memory-cell access transistors 83 in both the substrate portions 16 and the trench bottoms 30. Each transistor 83 includes a pair of adjacent source/drain regions 82 that are on opposite sides of the same word line 80. Depending on the doping process used, the exposed portions of the gate dielectric 72 are conventionally removed either before or after the regions 82 are formed.

To form the source/drain regions 82 (for example, the source/drain regions 82 of P-channel transistors that are formed in an N well 48), the active regions 32 are first conventionally implanted with a relatively light concentration of dopant to form lightly doped drain (LDD) regions 85. Next, spacers 84 are conventionally formed along the side walls of the word lines 80. In some embodiments, this process also forms spacers 87 along the side walls 20 of the isolation runners 18 and along the sides of the substrate portions 16.
Although the spacers 87 are not required to form the LDD regions 85, in some embodiments they are useful in a later process step to align source and drain contacts with the trench bottoms 30. Then, the exposed portions of the active regions 32 are conventionally implanted with a relatively heavy dose of dopant to form the remaining portions of the source/drain regions 82.Next, the remaining parts of the memory array 70, such as the capacitors, digit lines, and interconnections (none shown in FIG. 7), are formed in a conventional manner.FIG. 8 is a top plan view of the memory array 70 of FIG. 7 after digit lines (not shown in FIG. 8) and cell capacitors 81 have been formed over respective source/drain regions 82. In one embodiment, pairs of adjacent word lines 80 intersect pairs of adjacent memory cells 88 that share a common digit-line contact 90. Each word line 80 thus defines a row of memory cells 88, which each include a respective transistor 83 and capacitor 81. Isolation lines 92, which have the same structure as the word lines 80, are disposed between these word-line pairs and between adjacent memory cells 88 that do not share a common digit-line contact. Thus, the isolation lines 92 act as pseudo-gates between adjacent source/drain regions 82 of such uncommon cells 88. The isolation lines 92 are voltage biased to isolate these adjacent source/drain regions 82 by preventing a channel region from forming therebetween. For example, for N-channel transistors 83 (FIG. 7), the isolation lines 92 are biased at a low voltage such as ground.The memory array 70 has a much greater memory-cell density than conventional memory arrays. For example, in one embodiment, the widths of the isolation runners 18, trenches 28, word lines 80 and isolation lines 92, and source/drain regions 82 are one minimum feature size. Therefore, as shown by the dashed line, a pair of cells 88 that share a common digit-line contact 90 have a combined area of six square feature sizes. 
It follows that one of the memory cells 88 occupies half that area, that is, three square feature sizes. In comparison, a memory cell of a conventional folded-digit-line DRAM occupies eight square feature sizes, and a memory cell of a conventional open-digit-line DRAM occupies six square feature sizes. Thus, the memory cell 88 occupies only about half of the area occupied by a conventional memory cell.

FIG. 9 is a block diagram of a memory circuit 100, which may include the memory array 70 of FIGS. 7 and 8. Specifically, memory banks 102a and 102b of the memory circuit 100 may each include a respective memory array 70 of FIGS. 7 and 8. In one embodiment, the memory circuit 100 is a DRAM.

The memory circuit 100 includes an address register 104, which receives an address from an ADDRESS bus, and a control logic circuit 106, which receives a clock (CLK) signal, receives, e.g., clock enable (CKE), chip select (CS), row address strobe (RAS), column address strobe (CAS), and write enable (WE) signals (CS, RAS, CAS, and WE being active low) from a COMMAND bus, and communicates with the other circuits of the memory circuit 100. A row address multiplexer 108 receives the address signal from the address register 104 and provides the row address to the row-address latch-and-decode circuits 110a and 110b for the memory banks 102a and 102b, respectively. During read and write cycles, the row-address latch-and-decode circuits 110a and 110b activate the word lines of the addressed rows of memory cells in the memory banks 102a and 102b, respectively. Read/write circuits 112a and 112b read data from the addressed memory cells in the memory banks 102a and 102b, respectively, during a read cycle, and write data to the addressed memory cells during a write cycle. A column-address latch-and-decode circuit 114 receives the address from the address register 104 and provides the column address of the selected memory cells to the read/write circuits 112a and 112b.
For clarity, the address register 104, the row-address multiplexer 108, the row-address latch-and-decode circuits 110a and 110b, and the column-address latch-and-decode circuit 114 can be collectively referred to as an address decoder.

A data input/output (I/O) circuit 116 includes a plurality of input buffers 118. During a write cycle, the buffers 118 receive and store data from the DATA bus, and the read/write circuits 112a and 112b provide the stored data to the memory banks 102a and 102b, respectively. The data I/O circuit 116 also includes a plurality of output drivers 120. During a read cycle, the read/write circuits 112a and 112b provide data from the memory banks 102a and 102b, respectively, to the drivers 120, which in turn provide this data to the DATA bus.

A refresh counter 122 stores the address of the row of memory cells to be refreshed either during a conventional auto-refresh mode or self-refresh mode. After the row is refreshed, a refresh controller 124 updates the address in the refresh counter 122, typically by either incrementing or decrementing the contents of the refresh counter 122 by one. Although shown separately, the refresh controller 124 may be part of the control logic 106 in other embodiments of the memory circuit 100.

The memory circuit 100 may also include an optional charge pump 126, which steps up the power-supply voltage VDD to a voltage VDDP. In one embodiment, the pump 126 generates VDDP approximately 1-1.5 V higher than VDD. The memory circuit 100 may also use VDDP to conventionally overdrive selected internal transistors.

FIG. 10 is a block diagram of an electronic system 130, such as a computer system, that incorporates the memory circuit 100 of FIG. 9. The system 130 includes computer circuitry 132 for performing computer functions, such as executing software to perform desired calculations and tasks. The circuitry 132 typically includes a processor 134 and the memory circuit 100, which is coupled to the processor 134.
One or more input devices 136, such as a keyboard or a mouse, are coupled to the computer circuitry 132 and allow an operator (not shown) to manually input data thereto. One or more output devices 138 are coupled to the computer circuitry 132 to provide to the operator data generated by the computer circuitry 132. Examples of such output devices 138 include a printer and a video display unit. One or more data-storage devices 140 are coupled to the computer circuitry 132 to store data on or retrieve data from external storage media (not shown). Examples of the storage devices 140 and the corresponding storage media include drives that accept hard and floppy disks, tape cassettes, and compact disc read-only memories (CD-ROMs). Typically, the computer circuitry 132 includes address, data, and command buses and a clock line that are respectively coupled to the ADDRESS, DATA, and COMMAND buses and the CLK line of the memory circuit 100.

From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention.
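The cell-area comparison given in the description of FIG. 8 can be checked with a short calculation. This is only an illustrative sketch; the areas are the figures quoted in the text, expressed in square feature sizes (F^2):

```python
# Check of the memory-cell area comparison from the description of FIG. 8.
# All areas are in "square feature sizes" (F^2), as quoted in the text.

area_cell_pair = 6                 # pair of cells 88 sharing a digit-line contact
area_cell_88 = area_cell_pair / 2  # one memory cell 88

area_folded = 8                    # conventional folded-digit-line DRAM cell
area_open = 6                      # conventional open-digit-line DRAM cell

assert area_cell_88 == 3.0              # three square feature sizes per cell 88
print(area_cell_88 / area_folded)       # 0.375 -> 3/8 of a folded-digit-line cell
print(area_cell_88 / area_open)         # 0.5   -> half of an open-digit-line cell
```

Consistent with the text, the cell 88 occupies about half the area of a conventional cell, or somewhat less when compared against the folded-digit-line layout.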
Methods to form contact openings and allow the formation of self-aligned contacts for use in the manufacture of semiconductor devices are described. During formation of a multi-layered resist, a hard mask material is introduced beneath an anti-reflective coating to be used as an etch stop layer. The multi-layered resist is patterned and etched, to transfer the desired contact pattern to a substrate material, such as a silicon substrate, to form contact openings therein. The contact openings provide for the formation of self-aligned contacts therein.
1. A method of forming self-aligning contact openings for a semiconductor assembly comprising the following sequence of steps:patterning exposed regions and non-exposed regions of a multi-layered resist material comprising a photoresist layer overlying an anti-reflective layer, the multi-layered resist overlying a hard mask material;removing the photoresist layer and the anti-reflective layer in the exposed regions to expose an underlying area of the hard mask material;partially removing the hard mask material in the exposed underlying area such that at least half the thickness of the hard mask material is removed;removing the photoresist layer and the anti-reflective layer in the non-exposed regions;further removing the hard mask material in the exposed underlying area until an underlying isolation material is encountered and the hard mask material is completely cleared from the underlying isolation material; and removing the underlying isolation material at the exposed regions to form the self-aligning contact openings.2. The method of claim 1, wherein the hard mask material is carbon material.3. The method of claim 2, wherein the carbon material is amorphous carbon or transparent carbon.4. 
A method of forming openings in a substrate for a semiconductor assembly comprising:forming a disposable hard mask material over an isolation material;forming a multi-layered resist material on the disposable hard mask material;patterning exposed regions and non-exposed regions in the multi-layered resist material;removing the multi-layered resist material in the exposed regions to expose an underlying area of the hard mask material;partially removing the hard mask material in the exposed underlying area such that a sufficient thickness of the hard mask material remains to cover the isolation material;removing the multi-layered resist material in the non-exposed regions;further removing the hard mask material in the exposed underlying area until an underlying isolation material is encountered and the hard mask material is completely cleared from the underlying isolation material; and removing the underlying isolation material at the exposed regions to form the openings therein.5. The method of claim 4, wherein the sufficient thickness of the remaining hard mask material is less than half an original thickness of the hard mask.6. The method of claim 4, wherein the hard mask material is carbon material.7. The method of claim 6, wherein the carbon material is amorphous carbon or transparent carbon.8. 
A method of forming self-aligning contact openings for a semiconductor assembly comprising the following sequence of steps:patterning exposed regions and non-exposed regions of a multi-layered resist material comprising a photoresist layer overlying an anti-reflective layer, the multi-layered resist material overlying a disposable carbon layer;removing the photoresist layer and the anti-reflective layer in the exposed regions to expose an underlying area of the carbon layer;partially removing the carbon layer in the exposed underlying area such that at least half the thickness of the carbon layer is removed;removing the photoresist layer and the anti-reflective layer in the non-exposed regions;etching the carbon layer in the exposed underlying area until an underlying isolation material is encountered and the carbon layer is completely cleared from the underlying isolation material; and removing the underlying isolation material to form the self-aligning contact openings.9. A method of forming self-aligned contacts for a semiconductor assembly comprising the following sequence of steps:patterning a photoresist layer overlying an anti-reflective layer to have exposed regions and non-exposed regions, the anti-reflective layer overlying a carbon layer;removing the photoresist layer and the anti-reflective layer in the exposed regions to expose an underlying area of the carbon layer;partially removing the carbon layer in the exposed underlying area such that at least half the thickness of the carbon layer is removed;removing the photoresist layer and the anti-reflective layer in the non-exposed regions;etching the carbon layer in the exposed underlying area until an underlying isolation material is encountered and the carbon layer is completely cleared from the underlying isolation material;removing the underlying isolation material at the exposed regions to form self-aligned contact openings; and forming conductive material into the self-aligned contact openings to form the self-aligned contacts.
FIELD OF THE INVENTION
This invention relates to semiconductor fabrication processing and, more particularly, to methods of patterning contact openings that will allow the formation of self-aligned contacts using a disposable hard mask for semiconductor devices, such as dynamic random access memories (DRAMs).
BACKGROUND OF THE INVENTION
The continuing trend of scaling down integrated circuits has motivated the semiconductor industry to consider new techniques for fabricating precise components at sub-micron levels. As is the case for most semiconductor integrated circuitry, circuit density is increasing at a fairly constant rate, and a major area of technological effort is in fabrication processes that pattern contact locations for interconnection within the integrated circuitry. A typical nanometer lithography process may use a multi-layered resist process, such as a top photoresist layer and an anti-reflective coating. However, anti-reflective photoresist coatings used in the multi-layered resist process cannot be etched selective to the materials used to form self-aligned contact locations during pattern transfer: a conventional anti-reflective coating etch will remove not only the anti-reflective coating but the underlying material (i.e., nitride) as well. Typical multi-layered resist processing does not allow the anti-reflective coating to be removed before complete pattern transfer from the multi-layered resist to the underlying material takes place.
If the anti-reflective coating is not removed before complete pattern transfer, two problems will occur: 1) when the anti-reflective coating is removed, only a partial pattern transfer will have occurred in the underlying materials, and 2) the anti-reflective coating will lift off during subsequent removal of the remaining layers of the multi-layered resist.
For example, when employing a standard fabrication process to pattern a multi-layered resist (i.e., a top photoresist layer and an anti-reflective coating), the anti-reflective coating is removed after an anti-reflective coating/carbon etch is performed. In this case, the anti-reflective coating etch has selectivity to the underlying material (i.e., nitride) and the anti-reflective coating. With the anti-reflective coating present when the resist is stripped, the anti-reflective coating will peel off of the underlying carbon, which is highly undesirable during the patterning stage as the desired pattern will be affected. Thus, conventional multi-layered resist processing using an anti-reflective coating is not suitable for use in the formation of self-aligned contact openings (or vias) due to etch selectivity requirements to underlying materials.
What is needed is a method to successfully pattern and etch contact openings, and ultimately to form self-aligned contacts therein, using a multi-layered resist process that employs anti-reflective materials, in order to achieve the nanometer line widths now being demanded in current and future semiconductor fabrication processes.
SUMMARY OF THE INVENTION
An exemplary implementation of the present invention includes a method to form contact openings that will allow the formation of self-aligned contacts for use in the manufacture of semiconductor devices. During the formation of the multi-layered resist, a hard mask material is introduced beneath an anti-reflective coating to be used as an etch stop layer.
The multi-layered resist is patterned and etched to transfer the desired contact pattern to a substrate material, such as a silicon substrate, to form contact openings therein. The contact openings then provide for the formation of self-aligned contacts therein.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a top-down view of a semiconductor substrate section, or semiconductor assembly, covered with a multiple-layered resist patterned by photolithography.
FIG. 2 is a cross-sectional view taken through line 1-1' of FIG. 1 showing a semiconductor substrate section depicting isolated transistor structures covered with a disposable hard mask material and a multiple-layered resist comprising an anti-reflective layer and a non-exposed region of a photoresist layer.
FIG. 3 is a cross-sectional view taken through line 2-2' of FIG. 1 showing a semiconductor substrate section depicting isolated transistor structures covered with a disposable hard mask material and a multiple-layered resist comprising an anti-reflective layer and an exposed region of a photoresist layer.
FIG. 4 is a cross-sectional view taken through line 3-3' of FIG. 1 showing a semiconductor substrate section with isolation material covered with a disposable hard mask material and a multiple-layered resist comprising an anti-reflective layer (or coating) and a photoresist layer having exposed and non-exposed regions.
FIG. 5 is a subsequent cross-sectional view taken from FIG. 2 following the removal of exposed photoresist regions with the non-exposed regions of photoresist remaining.
FIG. 6 is a subsequent cross-sectional view taken from FIG. 3 following the removal of exposed photoresist regions.
FIG. 7 is a subsequent cross-sectional view taken from FIG. 4 following the removal of exposed photoresist regions with the non-exposed regions of photoresist remaining.
FIG. 8 is a subsequent cross-sectional view taken from FIG. 5 following the removal of exposed regions of the anti-reflective coating with the non-exposed regions of anti-reflective coating remaining.
FIG. 9 is a subsequent cross-sectional view taken from FIG. 6 following the removal of exposed regions of the anti-reflective coating.
FIG. 10 is a subsequent cross-sectional view taken from FIG. 7 following the removal of exposed regions of the anti-reflective coating with the non-exposed regions of anti-reflective coating remaining.
FIG. 11 is a subsequent cross-sectional view taken from FIG. 8 following a partial etch of exposed regions of a disposable hard mask material.
FIG. 12 is a subsequent cross-sectional view taken from FIG. 9 following a partial etch of exposed regions of a disposable hard mask material.
FIG. 13 is a subsequent cross-sectional view taken from FIG. 10 following a partial etch of exposed regions of a disposable hard mask material.
FIG. 14 is a subsequent cross-sectional view taken from FIG. 11 following a photoresist and anti-reflective coating strip.
FIG. 15 is a subsequent cross-sectional view taken from FIG. 12 following a photoresist and anti-reflective coating strip.
FIG. 16 is a subsequent cross-sectional view taken from FIG. 13 following a photoresist and anti-reflective coating strip.
FIG. 17 is a subsequent cross-sectional view taken from FIG. 14 following a hard mask etch.
FIG. 18 is a subsequent cross-sectional view taken from FIG. 15 following a hard mask etch.
FIG. 19 is a subsequent cross-sectional view taken from FIG. 16 following a hard mask etch.
FIG. 20 is a subsequent cross-sectional view taken from FIG. 18 following an etch of the isolation material to form self-aligned openings that provide access to source/drain areas between transistor gates.
FIG. 21 is a subsequent cross-sectional view taken from FIG. 20 following the formation of self-aligned contacts to source/drain areas between transistor gates.
FIG. 22 is a simplified block diagram of a semiconductor system comprising a processor and memory device to which the present invention may be applied.
DETAILED DESCRIPTION OF THE INVENTION
In the following description, the terms "wafer" and "substrate" are to be understood as a semiconductor-based material including silicon, silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. Furthermore, when reference is made to a "wafer" or "substrate" in the following description, previous process steps may have been utilized to form regions or junctions in or over the base semiconductor structure or foundation. In addition, the semiconductor need not be silicon-based, but could be based on silicon-germanium, silicon-on-insulator, silicon-on-sapphire, germanium, or gallium arsenide, among others.
An exemplary implementation of the present invention and variations thereof are directed to processes for forming self-aligned contact openings and self-aligned contacts in a semiconductor device as depicted in the embodiments of FIGS. 1-22.
FIG. 1 is a top-down view of a semiconductor substrate section covered with a multiple-layered resist patterned by photolithography. FIGS. 2-4 are cross-sectional views taken through various regions of the semiconductor substrate to demonstrate the results of the photolithography patterning steps. The process steps used to form a desired pattern on a semiconductor substrate assembly may be conventional processing steps known to those skilled in the art.
Referring to the cross-sectional view of FIG. 2, taken through line 1-1' of FIG. 1, a semiconductor substrate section 10 is shown depicting transistor structures comprising transistor source/drain regions 12 spanning between transistor gate structures 11, covered by gate insulation 13, with each transistor gate structure isolated from one another by transistor gate isolation regions 14 formed from isolation material such as oxide. A disposable mask material 15, such as amorphous carbon or transparent carbon, is first placed on isolation material 14. Next, a multiple-layered resist, comprising an anti-reflective layer 16 and an overlying photoresist layer 17, is formed on disposable mask material 15. As shown in FIG. 2, photoresist layer 17 in this area of the substrate is not exposed to ultraviolet radiation and is depicted as such by non-exposed regions 19.
FIG. 3 is a cross-sectional view taken through line 2-2' of FIG. 1 showing the semiconductor substrate section 10 depicting transistor structures comprising transistor source/drain regions 12 spanning between transistor gate structures 11, covered by gate insulation 13, with each transistor gate structure isolated from one another by transistor gate isolation material 14. A disposable mask material 15, such as amorphous carbon or transparent carbon, is first placed on isolation material 14. Next, a multiple-layered resist comprising an anti-reflective layer 16 and an overlying photoresist layer 17 is formed on disposable mask material 15. As shown in FIG. 3, photoresist layer 17 in this area of the substrate is exposed to ultraviolet radiation and is depicted as such by exposed regions 18.
In FIG. 4, a cross-sectional view taken through line 3-3' of FIG. 1, the semiconductor substrate section 10 runs perpendicular to the cross-sectional views of FIGS. 2 and 3 to show isolation material 14 overlying source/drain region 12.
This view shows transistor gate isolation material 14 (i.e., oxide 14) covered with disposable mask material 15, such as amorphous carbon or transparent carbon, and a multiple-layered resist comprising anti-reflective layer 16 and photoresist layer 17, with both exposed regions 18 and non-exposed regions 19 shown in this area due to the photolithography pattern.
FIGS. 5-7 depict subsequent cross-sectional views that correspond to FIGS. 2-4, respectively, to demonstrate the results following the removal of exposed regions of photoresist 17. In FIG. 5 (a cross-sectional view taken from FIG. 2), photoresist 17 remains in place as this region of photoresist was not exposed during the previous photolithography patterning step. Thus, there is no change to this region of the semiconductor substrate between FIGS. 2 and 5 at this point.
However, as shown in FIG. 6, a subsequent cross-sectional view corresponding to FIG. 3, photoresist 17 has been removed as this area of the semiconductor substrate contained exposed photoresist regions 18, seen previously in FIG. 3. With photoresist 17 stripped, underlying anti-reflective layer 16 is now exposed.
In a view perpendicular to FIGS. 5 and 6, FIG. 7, a subsequent cross-sectional view corresponding to FIG. 4, shows the results following the removal of photoresist 17 at exposed regions 18, thereby exposing the underlying regions of anti-reflective layer 16 while leaving non-exposed regions of photoresist 17 remaining.
FIGS. 8-10 depict subsequent cross-sectional views that correspond to FIGS. 5-7, respectively, to demonstrate the results following an etch to strip exposed regions of anti-reflective layer 16. For example, an etch using He/CF4 for a period of approximately 15 seconds can be used to strip exposed regions of anti-reflective layer 16. As shown in FIG. 8, a subsequent cross-sectional view corresponding to FIG. 5, the anti-reflective layer 16 has not been exposed as it is still covered with photoresist 17. Thus, as shown in FIG. 8, in the area of the semiconductor substrate covered with photoresist 17, none of anti-reflective layer 16 is removed.
FIG. 9 is a subsequent cross-sectional view corresponding to FIG. 6, following the removal of exposed regions of the anti-reflective layer 16. During an etch to remove anti-reflective layer 16, the underlying hard mask material 15 remains completely intact while the anti-reflective layer 16 is completely stripped.
FIG. 10 is a subsequent cross-sectional view corresponding to FIG. 7, following the removal of exposed regions 18 of the anti-reflective layer 16 with the non-exposed regions 19 of anti-reflective layer 16 remaining. The underlying hard mask material 15 remains completely intact while the anti-reflective layer 16 is completely stripped in the exposed regions 18. Following the anti-reflective material etch, the semiconductor assembly is ready for the etching procedure depicted in FIGS. 11-13.
FIGS. 11-13 show cross-sectional views of the semiconductor assembly after a timed partial hard mask etch is performed. Referring to FIG. 11, a subsequent cross-sectional view corresponding to FIG. 8, the partial hard mask etch is performed to remove an upper portion of the now exposed hard mask material 15. As shown in FIG. 11, the anti-reflective layer 16 has not been exposed as it is still covered with photoresist 17; in the area of the semiconductor substrate that remains covered with photoresist 17, no anti-reflective coating material is removed.
Referring to FIG. 12, a subsequent cross-sectional view corresponding to FIG. 9, a partial etch is performed to remove an upper portion of the now exposed hard mask material 15. This partial etch of hard mask material 15 is a timed etch such that at least half the thickness of the hard mask material is removed.
The minimum thickness of the hard mask to be removed is determined by the amount of hard mask material (i.e., carbon) that will be removed during a subsequent via opening etch (such as an oxide etch if the underlying isolation material is oxide) performed to open the self-aligned contacts, as depicted in FIGS. 20 and 21. For example, after defining the desired feature in the hard mask, the via opening etch mentioned above is performed, which will remove approximately 10% of the hard mask. In one scenario, if the subsequent via opening etch removes approximately 500 angstroms of carbon, then the minimum thickness of hard mask removed during the partial etch will be around twice that, or approximately 1000 angstroms. For example, performing a SO2/O2 etch for a period of approximately 55 seconds will successfully remove approximately 1000 angstroms of the hard mask (carbon).
In another scenario, if the hard mask is approximately 2000 angstroms, then by etching down approximately 1000 angstroms during the partial hard mask etch, the resist and the anti-reflective coating are removed. As the partial etch continues, the remaining underlying hard mask material will be approximately 1000 angstroms. The subsequent via oxide etch will remove around 100 angstroms of the hard mask. Thus, in this scenario it is preferred to have a minimum of 500 angstroms of hard mask material remaining during the via opening etch. A partial etch using SO2/O2 will then need to be adjusted to successfully remove the desired amount of carbon.
Referring to FIG. 13, a subsequent cross-sectional view corresponding to FIG. 10, at the exposed regions 18 the hard mask material 15 is removed, as indicated in FIG. 12, by the partial hard mask etch.
FIGS. 14-16 show cross-sectional views of the semiconductor assembly after a partial photoresist and anti-reflective coating strip is performed. Referring to FIG. 14, a subsequent cross-sectional view corresponding to FIG. 11, hard mask material 15 is now exposed following the removal of photoresist 17 and anti-reflective coating 16 seen in FIG. 11. The maximum amount of hard mask material 15 removed is determined by the amount of the mask material (i.e., carbon) removed during the anti-reflective coating strip. For example, performing a SO2/O2 etch for a period of approximately five seconds removes approximately 100 angstroms of carbon. Thus, if the anti-reflective etch step removes approximately 100 angstroms of carbon, then the maximum amount of hard mask material 15 removed must be such that a minimum of approximately 200 angstroms of carbon remains in the exposed areas.
In FIG. 15, a subsequent cross-sectional view corresponding to FIG. 12, the photoresist and anti-reflective coating have been removed in previous process steps, so this cross-sectional view does not show any change following the photoresist and anti-reflective coating strip.
Referring to FIG. 16, a subsequent cross-sectional view corresponding to FIG. 13, hard mask material 15 is now exposed following the removal of photoresist 17 and anti-reflective coating 16 seen in FIG. 13.
FIGS. 17-19 show cross-sectional views of the semiconductor assembly after a hard mask etch is performed. In FIGS. 17-19, the hard mask etch removes an amount of hard mask material 15 that corresponds to the remaining thickness of hard mask material 15 resident in the exposed regions 18, as seen in FIG. 16. FIGS. 18 and 19 best illustrate the effects of the hard mask etch, even though this etch affects all areas of the semiconductor assembly as depicted in the three cross-sectional views of FIGS. 17-19.
Referring to FIG. 17, a subsequent cross-sectional view corresponding to FIG. 14, a hard mask etch removes a portion of hard mask material 15 that was resident in non-exposed regions 19.
Referring to FIG. 18, a subsequent cross-sectional view corresponding to FIG. 15, a hard mask etch removes the remaining hard mask material 15 that was resident in exposed regions 18 (seen in FIG. 15), and the etch stops when isolation material 14 (in this example oxide 14) and gate insulation material 13 are encountered (insignificant amounts of isolation oxide 14 and gate insulation material 13 may be removed during this etch step). For example, performing an O2/SO2 etch for a period of approximately 30 seconds will successfully remove the remaining hard mask material 15 that is exposed.
Referring to FIG. 19, a subsequent cross-sectional view corresponding to FIG. 16, the final hard mask etch removes the remaining hard mask material 15 that was resident in exposed regions 18 (seen in FIG. 16), removes the corresponding amount of hard mask material 15 in non-exposed regions 19, and stops when isolation oxide 14 is encountered. At this point in the process, the semiconductor assembly is ready for an etch step that will form self-aligned contact openings as depicted in FIGS. 20 and 21.
Referring to FIG. 20, a subsequent cross-sectional view corresponding to FIG. 18, a via opening etch of the isolation material 14 is performed to create self-aligned openings 20 that provide access to source/drain areas 12 between transistor gates. The via opening etch will etch isolation material 14, such as oxide, selective to gate insulation material 13, such as nitride, and thus form the self-aligned contact openings therein. As explained in previous process steps, the via opening etch must be taken into account to determine the amount of hard mask material to remove in the partial etch step of the hard mask material.
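The thickness bookkeeping described above (the partial etch removing roughly twice the carbon consumed by the later via opening etch, and enough carbon remaining to survive the anti-reflective coating strip) can be sketched as a small calculation. This is an illustrative model only; the function name and the factor-of-two rules are taken from the examples in the text, not from any claimed formula.

```python
# Hypothetical bookkeeping for the disposable hard mask (carbon) thickness
# budget described above. Rule of thumb from the text: the timed partial etch
# removes at least twice the carbon lost in the later via opening etch, and
# at least twice the strip loss must remain in exposed areas afterward.

def partial_etch_budget(via_etch_loss_a, arc_strip_loss_a):
    """Return (min_partial_etch_a, min_remaining_a) in angstroms."""
    # Minimum depth removed during the timed partial hard mask etch:
    # around twice the carbon consumed by the subsequent via opening etch.
    min_partial_etch = 2 * via_etch_loss_a
    # Carbon that must remain in exposed areas after the partial etch:
    # twice the loss expected from the anti-reflective coating strip.
    min_remaining = 2 * arc_strip_loss_a
    return min_partial_etch, min_remaining

# Scenario from the text: ~500 A lost in the via opening etch,
# ~100 A lost during the anti-reflective coating strip.
min_etch, min_left = partial_etch_budget(500, 100)
assert min_etch == 1000   # partial etch removes ~1000 A of carbon
assert min_left == 200    # ~200 A of carbon must remain exposed
```

A 2000-angstrom mask etched down by 1000 angstroms in the partial etch therefore leaves ample margin for both the strip and the via opening etch, consistent with the second scenario above.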
Finally, after the via opening etch, a hard mask etch is performed to strip any remaining hard mask material from the present surface of the semiconductor assembly.
The examples provided herein suggest layer thicknesses, etching solutions and etching rates, serve as exemplary implementations of the present invention, and are not intended to limit the scope of the present invention. One skilled in the art has the knowledge to substitute etching solutions and etching rates of various materials to obtain the desired removal of the types of materials and material thicknesses used in a given process.
Referring to FIG. 21, a subsequent cross-sectional view corresponding to FIG. 20, self-aligned contacts 21 are formed to connect to source/drain areas between transistor gates, by methods known to those skilled in the art.
Implementation of the present invention to form self-aligned contact openings and self-aligned contacts in semiconductor devices may be applied to a semiconductor system, such as the one depicted in FIG. 22. FIG. 22 represents a general block diagram of a semiconductor system, the general operation of which is known to one skilled in the art. The semiconductor system comprises a processor 222 and a memory device 223 showing the basic sections of a memory integrated circuit, such as row and column address buffers 224 and 225, row and column decoders 226 and 227, sense amplifiers 228, memory array 229 and data input/output 230, which are manipulated by control/timing signals from the processor through control 231.
It is to be understood that although the present invention has been described with reference to a preferred embodiment, various modifications, known to those skilled in the art, such as utilizing the disclosed methods to form self-aligned contacts in any semiconductor device or semiconductor assembly, may be made to the process steps presented herein without departing from the invention as recited in the several claims appended hereto.
In one embodiment, an apparatus includes: a storage having a plurality of entries each to store address information of an instruction and a count value of a number of executions of the instruction during execution of code including the instruction; and at least one comparator circuit to compare a count value from one of the plurality of entries to a threshold value, where the instruction is a tagged instruction of the code, the tagged instruction tagged by a static compiler prior to execution of the code. Other embodiments are described and claimed.
1. A processor comprising:
a storage having a plurality of entries each to store address information of an instruction and a count value of a number of executions of the instruction during execution of code including the instruction; and
at least one comparator circuit to compare a count value from one of the plurality of entries to a threshold value, wherein the instruction comprises a tagged instruction of the code, the tagged instruction tagged by a static compiler prior to execution of the code.
2. The processor of claim 1, further comprising a control circuit to output hint information to identify at least one instruction associated with at least one of the plurality of entries having a count value greater than the threshold value.
3. The processor of claim 2, further comprising a threshold storage to store the threshold value, wherein the threshold value is to be dynamically updated based on a minimum count value of a first set of the plurality of entries.
4. The processor of claim 2, further comprising a dynamic profile circuit including the storage and the control circuit.
5. The processor of claim 4, further comprising a cache memory coupled to the dynamic profile circuit to receive the hint information, the cache memory including a cache controller to control eviction of a cache line of the cache memory based at least in part on the hint information.
6. The processor of claim 4, wherein the processor comprises a multicore processor having a plurality of cores, wherein the dynamic profile circuit comprises a separate circuit of the multicore processor to be dynamically shared by at least some of the plurality of cores.
7. The processor of claim 2, wherein the storage includes NxM entries, and wherein the control circuit is to store information associated with the N most frequently accessed tagged instructions of the code in a first subset of the NxM entries.
8. A method comprising:
determining whether an instruction to be executed in a processor is a part of a code loop; and
responsive to determining that the instruction is part of the code loop, tagging the instruction to enable the instruction to be profiled in a dynamic profiler of the processor during execution of the code loop on at least one core of the processor.
9. The method of claim 8, further comprising analyzing the instruction via a static compiler to determine whether the instruction is part of the code loop.
10. The method of claim 8, wherein the code loop comprises one of a function and recursive code.
11. The method of claim 8, further comprising:
determining that the instruction is part of a nested loop; and
not tagging the instruction if a number of instructions of the nested loop is less than a first threshold.
12. The method of claim 8, further comprising conditionally tagging the instruction if one or more variables of the instruction is not known at compile time, the instruction comprising a conditional instruction of the code loop.
13. The method of claim 8, further comprising tagging the instruction and linking the instruction to another instruction of the code loop, wherein the instruction is a last instruction of the code loop.
14. An apparatus comprising means to perform a method as claimed in any one of claims 8 to 13.
15. A machine-readable storage medium including machine-readable instructions, when executed, to implement a method as claimed in any one of claims 8 to 13.
Technical Field
Embodiments relate to a processor and more particularly to a processor having profiling capabilities.
Background
During the design process of a processor, dynamic profiling of instructions is traditionally used prior to a hardware design freeze for improving instruction set architecture (ISA) performance, and/or for improving software performance on a fixed ISA prior to a software design freeze. However, this approach suffers in that the optimal ISA performance is based on simulations that assume certain system behavior (memory accesses, for instance) that could differ in reality. As such, optimal ISA performance is based on simulations that may not cover all possibilities that could occur in real-life applications post hardware design freeze.
Brief Description of the Drawings
FIG. 1A is a block diagram of an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline to be included in a processor according to embodiments of the invention.
FIG. 1B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention.
FIG. 2 is a block diagram of a single core processor and a multicore processor with integrated memory controller and graphics according to embodiments of the invention.
FIG. 3 illustrates a block diagram of a system in accordance with one embodiment of the present invention.
FIG. 4 illustrates a block diagram of a second system in accordance with an embodiment of the present invention.
FIG. 5 illustrates a block diagram of a third system in accordance with an embodiment of the present invention.
FIG. 6 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention.
FIG. 7 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.
FIG. 8 is a block diagram of a dynamic profiling module in accordance with an embodiment of the present invention.
FIG. 9 is a flow diagram of a method in accordance with an embodiment of the present invention.
FIG. 10 is a flow diagram of a method in accordance with another embodiment of the present invention.
FIG. 11 is a block diagram of a processor in accordance with an embodiment of the present invention.
FIG. 12 is a graphical illustration of a frequency response of a moving average filter in accordance with an embodiment.
FIG. 13 is a flow diagram of a method in accordance with yet another embodiment of the present invention.
FIG. 14 is a block diagram of a multicore processor in accordance with an embodiment of the present invention.
FIG. 15 is a flow diagram of a method in accordance with a still further embodiment of the present invention.
Detailed Description
In various embodiments, techniques are provided for performing non-invasive dynamic profiling to enable a mechanism to improve ISA performance post hardware design freeze. The basic principle involves in situ profiling of instructions executed on a processor, in an intelligent manner. To this end, embodiments may track and keep count of the most used set of select instructions. Dynamic profiling of all instructions would be expensive in terms of area. Instead, embodiments may identify a subset of instructions based at least in part on static analysis of code during compile time, to identify potential candidate instructions suitable for dynamic profiling. In turn, these potential candidate instructions may be profiled dynamically during runtime to identify a subset of these instructions that are most active.
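The static-analysis step above (selecting candidate instructions at compile time so only a subset is profiled in hardware) can be sketched as a simple tagging heuristic. This is a hypothetical illustration, not the patented compiler pass: the field names, the nested-loop size threshold, and the "conditional" tag value are all assumptions made for the example.

```python
# Hypothetical sketch of the compile-time tagging decision: only instructions
# inside code loops become candidates for runtime profiling; tiny nested
# loops are skipped; instructions whose operands are unknown at compile time
# are tagged only conditionally. Thresholds are illustrative assumptions.

NESTED_LOOP_MIN_SIZE = 8  # assumed cutoff for "too small to be worth tagging"

def should_tag(instr):
    """Decide whether the static compiler marks `instr` for dynamic profiling.

    `instr` is a dict with keys: in_loop, nested, loop_size,
    vars_known_at_compile_time.
    """
    if not instr["in_loop"]:
        return False                 # only loop bodies are candidates
    if instr["nested"] and instr["loop_size"] < NESTED_LOOP_MIN_SIZE:
        return False                 # skip small nested loops
    if not instr["vars_known_at_compile_time"]:
        return "conditional"         # tag conditionally; resolve at runtime
    return True

hot_candidate = {"in_loop": True, "nested": False,
                 "loop_size": 32, "vars_known_at_compile_time": True}
assert should_tag(hot_candidate) is True
```

Only instructions that pass this filter would occupy entries in the hardware profiling storage, keeping the table small.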
Hint information regarding these most active instructions of the potential candidate instructions may be provided to various resources of a processor to optimize performance. In a particular example, this hint information can be provided to an instruction caching structure to optimize storage and maintenance of these most used instructions within the caching structure. In this way, the performance penalty of a cache miss for the most active instructions can be reduced or avoided.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.
FIG. 1A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline to be included in a processor according to embodiments of the invention. FIG. 1B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGS. 1A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.
In FIG. 1A, a processor pipeline 100 includes a fetch stage 102, a length decode stage 104, a decode stage 106, an allocation stage 108, a renaming stage 110, a scheduling (also known as a dispatch or issue) stage 112, a register read/memory read stage 114, an execute stage 116, a write back/memory write stage 118, an exception handling stage 122, and a commit stage 124.
FIG. 1B shows processor core 190 including a front end unit 130 coupled to an execution engine unit 150, and both are coupled to a memory unit 170. The core 190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.
The front end unit 130 includes a branch prediction unit 132 coupled to an instruction cache unit 134, which is coupled to an instruction translation lookaside buffer (TLB) 136, which is coupled to an instruction fetch unit 138, which is coupled to a decode unit 140. The decode unit 140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 140 or otherwise within the front end unit 130).
The decode unit 140 is coupled to a rename/allocator unit 152 in the execution engine unit 150.

The execution engine unit 150 includes the rename/allocator unit 152 coupled to a retirement unit 154 and a set of one or more scheduler unit(s) 156. The scheduler unit(s) 156 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 156 is coupled to the physical register file(s) unit(s) 158. Each of the physical register file(s) unit(s) 158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 158 comprises a vector register unit, a write mask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 158 is overlapped by the retirement unit 154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 154 and the physical register file unit(s) 158 are coupled to the execution cluster(s) 160. The execution cluster(s) 160 includes a set of one or more execution units 162 and a set of one or more memory access units 164. The execution units 162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).
While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 156, physical register file(s) unit(s) 158, and execution cluster(s) 160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 164 is coupled to the memory unit 170, which includes a data TLB unit 172 coupled to a data cache unit 174 coupled to a level 2 (L2) cache unit 176. Instruction cache unit 134 and data cache unit 174 may together be considered to be a distributed L1 cache. In one exemplary embodiment, the memory access units 164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 172 in the memory unit 170. The instruction cache unit 134 is further coupled to the level 2 (L2) cache unit 176 in the memory unit 170.
The L2 cache unit 176 may be coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 100 as follows: 1) the instruction fetch unit 138 performs the fetch and length decoding stages 102 and 104; 2) the decode unit 140 performs the decode stage 106; 3) the rename/allocator unit 152 performs the allocation stage 108 and renaming stage 110; 4) the scheduler unit(s) 156 performs the schedule stage 112; 5) the physical register file unit(s) 158 and the memory unit 170 perform the register read/memory read stage 114; 6) the execution cluster 160 performs the execute stage 116; 7) the memory unit 170 and the physical register file(s) unit(s) 158 perform the write back/memory write stage 118; 8) various units may be involved in the exception handling stage 122; and 9) the retirement unit 154 and the physical register file(s) unit(s) 158 perform the commit stage 124.

The core 190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set developed by MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
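The stage-to-unit correspondence listed above can be restated as a simple lookup table. The sketch below is illustrative only; the dictionary name and key strings are invented here, while the unit and stage labels are the figure reference numerals from the description.

```python
# Illustrative mapping of pipeline 100's stages to the units of core 190 that
# perform them, per the correspondence in the description. Names of the
# constant and keys are assumptions made for this sketch.
PIPELINE_STAGE_UNITS = {
    "fetch (102)": ["instruction fetch unit 138"],
    "length decode (104)": ["instruction fetch unit 138"],
    "decode (106)": ["decode unit 140"],
    "allocation (108)": ["rename/allocator unit 152"],
    "renaming (110)": ["rename/allocator unit 152"],
    "schedule (112)": ["scheduler unit(s) 156"],
    "register read/memory read (114)": ["physical register file unit(s) 158",
                                        "memory unit 170"],
    "execute (116)": ["execution cluster 160"],
    "write back/memory write (118)": ["memory unit 170",
                                      "physical register file(s) unit(s) 158"],
    "exception handling (122)": ["various units"],
    "commit (124)": ["retirement unit 154",
                     "physical register file(s) unit(s) 158"],
}
```

Note that several stages share a unit (e.g., the rename/allocator unit 152 covers both allocation and renaming), which is why the values are lists.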
In one embodiment, the core 190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2, and/or some form of the generic vector friendly instruction format (U=0 and/or U=1)), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 134/174 and a shared L2 cache unit 176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, an L1 internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the caches may be external to the core and/or the processor.

FIG. 2 is a block diagram of a processor 200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in FIG.
2 illustrate a processor 200 with a single core 202A, a system agent unit 210, and a set of one or more bus controller units 216, while the optional addition of the dashed lined boxes illustrates an alternative processor 200 with multiple cores 202A-N and a set of one or more integrated memory controller unit(s) 214 in the system agent unit 210. As further illustrated in FIG. 2, processor 200 also may include a dynamic profiling circuit 208, as described herein, which may be leveraged by one or more of cores 202A-202N. In some cases, dynamic profiling circuit 208 may be controlled to be dynamically shared by multiple ones of these cores, as will be described further herein.

Thus, different implementations of the processor 200 may include: 1) a CPU with special purpose logic being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 202A-N being a large number of general purpose in-order cores. Thus, the processor 200 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips.
The processor 200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache units 204A-204N (including L1 cache) within the cores, a set of one or more shared cache units 206, and external memory (not shown) coupled to the set of integrated memory controller units 214. The set of shared cache units 206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 212 interconnects special purpose logic 208, the set of shared cache units 206, and the system agent unit 210/integrated memory controller unit(s) 214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 206 and cores 202A-N.

In some embodiments, one or more of the cores 202A-N are capable of multithreading. The system agent unit 210 includes those components coordinating and operating cores 202A-N. The system agent unit 210 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 202A-N and the integrated graphics logic 208. The display unit may be for driving one or more externally connected displays.

The cores 202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 202A-N may be capable of execution of the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. In one embodiment, the cores 202A-N are heterogeneous and include both the "small" cores and "big" cores described below.

FIGS.
3-6 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, tablets, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, smartphones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 3, shown is a block diagram of a system 300 in accordance with one embodiment of the present invention. The system 300 may include one or more processors 310, 315, which are coupled to a controller hub 320. In one embodiment, the controller hub 320 includes a graphics memory controller hub (GMCH) 390 and an Input/Output Hub (IOH) 350 (these may be on separate chips); the GMCH 390 includes memory and graphics controllers to which are coupled a memory 340 and a coprocessor 345; the IOH 350 couples input/output (I/O) devices 360 to the GMCH 390. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 340 and the coprocessor 345 are coupled directly to the processor 310, and the controller hub 320 is a single chip with the IOH 350.

The optional nature of additional processors 315 is denoted in FIG. 3 with broken lines. Each processor 310, 315 may include one or more of the processing cores described herein and may be some version of the processor 200.

The memory 340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two.
For at least one embodiment, the controller hub 320 communicates with the processor(s) 310, 315 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as an Intel® QuickPath Interconnect (QPI), or a similar connection 395.

In one embodiment, the coprocessor 345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 320 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 310, 315 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 345. Accordingly, the processor 310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 345. Coprocessor(s) 345 accept and execute the received coprocessor instructions.

Referring now to FIG. 4, shown is a block diagram of a first more specific exemplary system 400 in accordance with an embodiment of the present invention. As shown in FIG. 4, multiprocessor system 400 is a point-to-point interconnect system, and includes a first processor 470 and a second processor 480 coupled via a point-to-point interconnect 450. Each of processors 470 and 480 may be some version of the processor 200 of FIG. 2. In one embodiment, processors 470 and 480 are respectively processors 310 and 315, while coprocessor 438 is coprocessor 345.
In another embodiment, processors 470 and 480 are respectively processor 310 and coprocessor 345.

Processors 470 and 480 are shown including integrated memory controller (IMC) units 472 and 482, respectively. In addition, processors 470 and 480 include a dynamic profiling module (DPM) 475 and 485, respectively, details of which are described further below. Processor 470 also includes as part of its bus controller units point-to-point (P-P) interfaces 476 and 478; similarly, second processor 480 includes P-P interfaces 486 and 488. Processors 470, 480 may exchange information via a point-to-point (P-P) interface 450 using P-P interface circuits 478, 488. As shown in FIG. 4, IMCs 472 and 482 couple the processors to respective memories, namely a memory 432 and a memory 434, which may be portions of main memory locally attached to the respective processors.

Processors 470, 480 may each exchange information with a chipset 490 via individual P-P interfaces 452, 454 using point to point interface circuits 476, 494, 486, 498. Chipset 490 may optionally exchange information with the coprocessor 438 via a high-performance interface 439 using point-to-point interface circuit 492. In one embodiment, the coprocessor 438 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 490 may be coupled to a first bus 416 via an interface 496.
In one embodiment, first bus 416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 4, various I/O devices 414 may be coupled to first bus 416, along with a bus bridge 418 which couples first bus 416 to a second bus 420. In one embodiment, one or more additional processor(s) 415, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 416. In one embodiment, second bus 420 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 420 including, for example, a keyboard and/or mouse 422, communication devices 427, and a storage unit 428 such as a disk drive or other mass storage device which may include instructions/code and data 430, in one embodiment. Further, an audio I/O 424 may be coupled to the second bus 420. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 4, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 5, shown is a block diagram of a second more specific exemplary system 500 in accordance with an embodiment of the present invention. Like elements in FIGS. 4 and 5 bear like reference numerals, and certain aspects of FIG. 4 have been omitted from FIG. 5 in order to avoid obscuring other aspects of FIG. 5.

FIG. 5 illustrates that the processors 470, 480 may include integrated memory and I/O control logic ("CL") 472 and 482, respectively. Thus, the CL 472, 482 include integrated memory controller units and include I/O control logic. Processors 470, 480 further include a DPM 475, 485, respectively, details of which are described further below. FIG.
5 illustrates that not only are the memories 432, 434 coupled to the CL 472, 482, but also that I/O devices 514 are coupled to the control logic 472, 482. Legacy I/O devices 515 may be coupled to the chipset 490.

Referring now to FIG. 6, shown is a block diagram of an SoC 600 in accordance with an embodiment of the present invention. Here, dashed lined boxes are optional features on more advanced SoCs. In FIG. 6, an interconnect unit(s) 612 is coupled to: an application processor 610 which includes a set of one or more cores 602A-N having cache unit(s) 604A-604N, and shared cache unit(s) 606; a dynamic profiling unit 608, which may be shared by multiple ones of cores 602A-602N as described herein; a system agent unit 610; a bus controller unit(s) 616; an integrated memory controller unit(s) 614; a set of one or more coprocessors 620 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 630; a direct memory access (DMA) unit 632; and a display unit 640 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 620 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Program code, such as code 430 illustrated in FIG. 4, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system.
The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a non-transitory machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, non-transitory machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG.
7 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 7 shows that a program in a high level language 702 may be compiled using an x86 compiler 704 to generate x86 binary code 706 that may be natively executed by a processor with at least one x86 instruction set core 716. The processor with at least one x86 instruction set core 716 represents any processor that can perform substantially the same functions as an Intel® processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel® x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel® processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel® processor with at least one x86 instruction set core. The x86 compiler 704 represents a compiler that is operable to generate x86 binary code 706 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 716. Similarly, FIG. 7 shows that the program in the high level language 702 may be compiled using an alternative instruction set compiler 708 to generate alternative instruction set binary code 710 that may be natively executed by a processor without at least one x86 instruction set core 714 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA).
The instruction converter 712 is used to convert the x86 binary code 706 into code that may be natively executed by the processor without an x86 instruction set core 714. This converted code is not likely to be the same as the alternative instruction set binary code 710 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 712 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 706.

Referring now to FIG. 8, shown is a block diagram of a dynamic profiling module in accordance with an embodiment of the present invention. More specifically, dynamic profiling module (DPM) 800 is a representative profiling module that can be used to dynamically profile tagged instructions as described herein. In different embodiments, DPM 800 may be implemented as hardware circuitry, software and/or firmware, or combinations thereof. In some cases, DPM 800 may be dedicated hardware circuitry of a particular core of a single core or multicore processor. In other cases, DPM 800 may be implemented by logic that executes on one or more execution units of such a core. In still further cases, DPM 800 may be implemented as a dedicated hardware unit separate from any cores of a multicore processor and as such, may be a dynamically reconfigurable hardware logic unit that can be reused by a set of cores of the processor as described herein.

In any event, FIG. 8 shows details of DPM 800. As illustrated, DPM 800 includes a storage 805. Storage 805 may be implemented as any type of memory structure including volatile and non-volatile memories.
In the embodiment shown, storage 805 includes a first plurality of entries 810, namely entries 810_1-810_N. As will be described herein, this subset of entries 810 may be used to store information regarding N instructions, namely the N hottest instructions being profiled within DPM 800. As used herein, the term "hot instruction" means an instruction that is often used, such as more than a threshold number of times and more so than at least some other instructions. As further seen in FIG. 8, a representative entry 810_1, shown in the inset of FIG. 8, includes a comparator field 812_1 and a corresponding count field 814_1 to store a count value. Comparator field 812_1 may be implemented to store address information of an instruction associated with the entry to determine whether incoming address information matches this stored address information, while count field 814_1 is configured to store a count value corresponding to the number of executions of the given instruction. As further seen, storage 805 also includes a second plurality of entries 815. Specifically, this subset of entries (815_N+1 through 815_N*M) may be used to store information regarding additional tagged instructions. More specifically, these tagged instructions may be less frequently used than the N hot instructions. As will be described herein, instructions may be dynamically swapped between these two sets of entries as given entries within subset 810 become less frequently used in favor of instructions from subset 815 that become more frequently used.

To aid in determination of the N hot instructions, a threshold storage 820 is present. As seen, threshold storage 820 may store at least one threshold value within a threshold register 822. In an embodiment, this threshold value may be a count equal to the count value of the least hot of the N hot entries. And a corresponding pointer to this entry may be stored in pointer storage 824.
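The entry and threshold structures described above can be mirrored in software for clarity. The following is a minimal sketch, not part of the patent disclosure: the class and field names are assumptions chosen to echo the reference numerals (812 comparator, 814 count, 822 threshold, 824 pointer).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProfileEntry:
    # Comparator field 812: stored tagged instruction address (None = unused).
    address: Optional[int] = None
    # Count field 814: number of observed executions of the instruction.
    count: int = 0

    def matches(self, addr: int) -> bool:
        # The comparator checks an incoming address against the stored address.
        return self.address == addr

@dataclass
class ThresholdStorage:
    # Threshold register 822: software-initialized value X, later adapted.
    threshold: int
    # Pointer storage 824: index of the least-hot entry among the N hot entries.
    min_ptr: int = 0

def make_storage(n: int, m: int) -> list:
    # Storage 805: the first n entries model the hot subset 810, the rest
    # model the less-hot subset 815; n*m entries in total.
    return [ProfileEntry() for _ in range(n * m)]
```

In hardware, every entry's comparator evaluates the incoming address in parallel; the sequential `matches` check here is only a functional stand-in.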
Understand that in other embodiments, multiple sets of these threshold registers may be provided to enable multiple segmentations of instructions. For example, with two sets of threshold registers, a first portion of the N hot registers can be identified corresponding to the X top used instructions, and an N-X remainder of the top N hot instructions can be associated with a second portion of the N hot registers. Of course, many additional sets of registers and possibilities exist in other embodiments.

As further illustrated, DPM 800 further includes a DPM control logic 830, which may be configured to perform dynamic swap operations relating to dynamically updating the threshold value(s) as the number of instructions executed varies over time. In an embodiment, DPM control logic 830 may be implemented as a finite state machine (FSM), although other embodiments, including hardware circuitry, software and/or firmware, are possible. As will be described herein, DPM control logic 830 may be configured to perform control operations with regard to the various entries within storage 805, as well as to perform processing on the resulting count information to identify hot instructions and send hint information associated with one or more of these hot instructions to one or more consumers. Still further, as discussed herein, in embodiments in which DPM 800 is implemented as a standalone component of a processor, DPM control logic 830 also may be configured to perform arbitration between the various cores or other processors to enable DPM 800 to be dynamically shared as a shared resource by multiple cores or other processing elements of the processor. Understand that while shown at this high level in the embodiment of FIG. 8, many variations and alternatives are possible.

Still with reference to FIG.
8, assuming N*M counters and comparators in hardware entries 810, 815 are available for dynamic profiling, operation may begin with a software settable (initialization) threshold stored in threshold register 822 of threshold storage 820, to be used in a comparison to identify whether a tagged instruction is hot (where hot means that an instruction is more often used). By inclusion of the top N hot tagged instructions within entries 810, this threshold may be dynamically adapted (taking the minimum count of the top N hot tagged instructions if it is greater than the initialized threshold value). And tagged instructions that are not in the top N hot tagged instructions but that have a higher count than the current threshold can unseat an existing top N hot tagged instruction. This mechanism maintains at any point in time the top N hot tagged instructions within entries 810. Embodiments may be scalable in multiples of the top N hot tagged instructions. The dynamic profiling output for the top N hot tagged instructions (or multiples of N thereof) can be used to optimize processor performance. For example, in embodiments an instruction caching architecture may leverage this information as a hint to dynamically improve ISA performance after the hardware design freeze. Of course, other uses for the profiling information provided herein are contemplated, including future ISA extensions based on profile information and improving compiler optimization.

Threshold register 822 (which may include multiple registers) may be configured to hold the value of a threshold (set either by software or by a minimum count among the N top tagged instruction entries or multiples of N thereof), while pointer storage 824 may be configured to store the pointer to the entry of entries 810 having the minimum count.
Comparator field 812 of each entry is used for comparing the incoming tagged instruction address with its entry address, and if there is a match then the counter value stored in count field 814 is incremented, e.g., by 1, for that entry. This update may also invoke a comparison of the count value with the threshold value from threshold register 822.

At initialization, the threshold value stored in threshold register 822 may be set to X (which is a software initialized value). In addition, all N*M entries 810, 815 are initialized to zero, both for tagged instruction address and count value. During operation, every tagged instruction address enters dynamic profiling module 800. If the tagged instruction address has not been profiled before, a new entry is created (e.g., in one of entries 815), its corresponding counter is incremented, and the count value is compared to the threshold value. If instead the tagged instruction address is already being profiled through an entry in dynamic profiling module 800, that entry's corresponding counter is updated and the count value is compared to the threshold value. If the minimum count value among the N top tagged instructions (or multiples of N thereof) is greater than the threshold (initialized to X at start by software), then threshold register(s) 822 are updated with the minimum count and pointer storage(s) 824 are updated to point to the entry that had the minimum count. If any of the non-top-N tagged instructions has a count value greater than the threshold, then this initiates a swap operation in which this entry is swapped with the entry identified by pointer storage 824 (namely the entry having the minimum count among the N top tagged instructions).
Also, threshold register(s) 822 are updated with the new minimum value.

Thus in an embodiment, there are two phases of operation in the dynamic profiling module per processor clock tick. In phase 1, an entry update is performed: the incoming tagged instruction address is compared with the address stored in the entry and, on a match, the count is updated and compared with the threshold, with operation then proceeding to phase 2. In phase 2, if any entry has a higher count than the threshold value, a dynamic swap operation is performed. In an embodiment, the following operations may be performed in a dynamic swap: if the entry is not part of the N top tagged entries, then this entry is swapped with the entry indicated in pointer register 824 and the threshold value stored in threshold register 822 is updated to the new minimum. If the entry is part of the N top tagged entries, then the entry with the minimum count among the N top tagged entries will update the threshold registers (value and pointer, if needed).

In an embodiment, dynamic profiling module 800 may output, per processor clock tick, profiling information regarding the N top tagged instructions (or multiples of N). Of course, hint information regarding fewer instructions may be sent per clock cycle instead. Understand that in different embodiments, dynamic profiling module 800 may be scalable in multiples of N. The minimum count value among N or multiples of N can be determined hierarchically. In an embodiment, the threshold value and pointer value stored in threshold storage 820 may be broadcast to the N*M entries 810, 815 to enable the above determinations to occur.

Referring now to FIG. 9, shown is a flow diagram of a method in accordance with an embodiment of the present invention. More specifically, method 900 shown in FIG. 9 may be performed by control logic of a DPM as described herein.
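As a behavioral illustration, the entry-update and swap flow described above can be sketched in software. This is a simplification of the hardware, not a definitive implementation: the class and method names, the dictionary-based entry table, and the re-derivation of the minimum each tick are all assumptions made for clarity.

```python
class DynamicProfiler:
    """Behavioral sketch of the N*M-entry dynamic profiling module."""

    def __init__(self, n_hot, m_extra, init_threshold):
        self.n_hot = n_hot
        self.capacity = n_hot * m_extra   # total entries available (N*M)
        self.entries = {}                 # tagged address -> count
        self.top = set()                  # addresses currently in the top N
        self.threshold = init_threshold   # software-initialized value X
        self.min_ptr = None               # pointer to minimum-count top entry

    def observe(self, addr):
        # Entry allocation; eviction of stale entries is omitted in this sketch.
        if addr not in self.entries and len(self.entries) >= self.capacity:
            return
        # Phase 1: create or update the entry and increment its count.
        self.entries[addr] = self.entries.get(addr, 0) + 1
        count = self.entries[addr]
        if len(self.top) < self.n_hot:
            self.top.add(addr)
        # Phase 2: a non-top entry exceeding the threshold unseats the
        # minimum-count top entry (identified by the pointer register).
        elif count > self.threshold and addr not in self.top:
            self.top.remove(self.min_ptr)
            self.top.add(addr)
        # Update pointer and threshold from the minimum among the top N.
        self.min_ptr = min(self.top, key=lambda a: self.entries[a])
        self.threshold = max(self.threshold, self.entries[self.min_ptr])

    def hot(self):
        # Profiling output: the current top N tagged instruction addresses.
        return sorted(self.top, key=lambda a: -self.entries[a])
```

In this sketch the threshold only grows, consistent with taking the minimum count of the top N entries whenever it exceeds the initialized value.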
Embodiments of method 900 may thus be performed by hardware circuitry, software and/or firmware. For example, in different implementations this control logic may be hardware circuitry implemented within a dedicated DPM. In other cases, method 900 may be executed within control logic of a core, such as dedicated logic or general-purpose circuitry. Of course many other embodiments are possible.

As illustrated, method 900 begins by receiving a tagged instruction in a dynamic profiling circuit (block 910). Note that the terms "dynamic profiling circuit" and "dynamic profiling module" are used interchangeably herein to refer to hardware circuitry, software, firmware and/or combinations thereof that perform the dynamic profiling described herein. As discussed above, this tagged instruction may be received as part of an instruction stream during execution of a given process on one or more cores of a processor. Next it is determined at diamond 920 whether an entry is present in the dynamic profiling circuit for this tagged instruction. In an embodiment, this determination may be based on at least some portion of the address associated with the instruction, which each of the entries may use in a comparison to determine whether a match exists for that entry within the DPM. This entry may be one of the N hot entries or may be one of the additional entries associated with less hot instructions. If no entry is present, a new entry may be created for this tagged instruction (block 925). Typically, this created entry will be within one of the less hot additional entries of the DPM.
In some embodiments, when all of the entries already include instruction information, an eviction process first may be performed to remove, e.g., an entry associated with the least recently used instruction. Alternatively, a cleanup process may be routinely performed to remove tagged instructions that have not been active for a given (e.g., threshold) period of time, or the DPM may be periodically reset.

Still with reference to FIG. 9, from both block 925 and diamond 920, control passes to block 930 where the count of the entry associated with the tagged instruction may be updated, e.g., by incrementing the count by one. Control next passes to diamond 940 to determine whether the count of the entry exceeds a threshold (which may be stored in a threshold storage of the DPM). If not, no further operations occur for this cycle with regard to this instruction entry. Accordingly, control passes to block 980 where instruction information associated with various entries of the DPM may be output. For example, instruction address information and count information for (at least) each of the top N entries may be output per cycle of execution. As will be described further herein, this information may be used to optimize execution.

Still with reference to FIG. 9, if instead it is determined that the count exceeds the given threshold, control passes to diamond 950 to determine whether the entry is one of the top N entries in the DPM. If so, control passes to block 955 where the threshold storage may be updated with a new threshold, namely the count of the minimum one of the top N entries. Note that in a given cycle, this threshold update operation may not be performed. Still with reference to FIG. 9, if instead it is determined that the entry is not one of the top N entries, control passes to block 960 where this entry may be swapped with the top N entry identified in the pointer storage.
That is, as this entry under consideration now has a higher count than the least used entry within the top N entries, a dynamic swap may be performed such that this entry is placed into the top N entries. Accordingly, at block 970 the threshold storage may be updated with the count of this newly swapped entry. Thereafter, control passes to block 980, discussed above, for output of information associated with the top N entries.

Note that in embodiments herein, the DPM may be used during execution of code that has not been translated or instrumented (such as by a dynamic binary translator (DBT)), enabling applicability in a wide variety of situations, and without the overhead of translation time. Understand that while shown at this high level in the embodiment of FIG. 9, many variations and alternatives are possible.

Embodiments may identify select (tagged) instructions in different ways. In some embodiments, static analysis of code may be performed, e.g., during compilation. As an example, for a loop of code the choice for tagged instructions could be the first and last instructions of the loop body, along with a conditional instruction that checks the loop iteration. In nested loop code, the choice for tagged instructions could be the first and last instructions of the top loop body, or the first and last instructions at several levels of the nested loop body, depending on the total number of instructions at the various levels of the nested loop body.

Note that for functions, macros and other similar programming constructs that lead to a fixed body of code, tagged instructions can be identified similarly to loop constructs. In some embodiments, all instructions that are part of a recursion can be tagged.

In some embodiments, instructions can be classified into three bins: tagged instructions; non-tagged instructions; and conditionally tagged instructions, as described further below.
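The three bins can be represented compactly. The sketch below uses the two-bit encoding described later in connection with FIG. 10; the names and the run-time resolution helper are illustrative assumptions, not part of the claimed design.

```python
from enum import Enum

class TagClass(Enum):
    """Two-bit tag classification carried with each analyzed instruction."""
    NON_TAGGED = 0b00
    TAGGED = 0b01
    CONDITIONALLY_TAGGED = 0b10

def resolve(tag, runtime_value, threshold):
    """Run-time resolution of a conditionally tagged instruction: flip to
    TAGGED if, e.g., the iterator value exceeds a software settable
    threshold, otherwise to NON_TAGGED. Statically binned tags pass
    through unchanged."""
    if tag is not TagClass.CONDITIONALLY_TAGGED:
        return tag
    return TagClass.TAGGED if runtime_value > threshold else TagClass.NON_TAGGED
```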
Tagging instructions during static analysis of compilation may enable reduced resource consumption of a dynamic profiling module, which may be resource constrained.

Referring to Table 1 below, shown is an example static analysis of code during a compilation process to identify instructions suitable for dynamic profiling. More specifically, the loop code of Table 1 shows that the choice for tagged instructions could be the first and last instructions of a loop body and the instruction that determines the loop iterator.

Table 1
    for(n1 = 0; n1 <= 9; n1++) { // Instruction that determines the loop iterator is tagged
        n2 = n3*n4;  // This instruction is tagged
        n5 = n6-n7;  // perform more actions
        n8 = n9+n10; // This instruction is tagged and linked
    }

For the above example of Table 1, it is sufficient to tag the loop first and last instructions. Also, note that the last instruction is linked to the first instruction of the loop so that the complete range of addresses between the first and last instructions can be determined. In addition, the instruction that determines the loop iterator is tagged.

Tables 2A-2C show examples of static analysis for nested loop code. As seen in the different examples of these Tables, the choice for tagged instructions could be the first and last instructions of the top loop body or the first and last instructions at several levels of the nested loop body, depending on the total number of instructions at the various levels of the nested loop body.

Table 2A
    for(n1 = 0; n1 <= 9; n1++) { // Instruction that determines the loop iterator is tagged
        n2 = n3*n4;  // This instruction is tagged
        n5 = n6-n7;  // perform few actions
        for(m1 = 0; m1 <= 7; m1++) {
            m2 = m3*m4;
            m5 = m6-m7; // perform few actions
            m8 = m9+m10;
        }
        n8 = n9+n10; // This instruction is tagged and linked
    }

For the above example of Table 2A, it is sufficient to tag the outer nested loop first and last instructions since the total nested loop instructions are not that many.
And the instruction that determines the outer loop iterator is also tagged.

Table 2B
    for(n1 = 0; n1 <= 9; n1++) { // Instruction that determines the loop iterator is tagged
        n2 = n3*n4;  // This instruction is tagged
        n5 = n6-n7;  // perform many actions
        for(m1 = 0; m1 <= 7; m1++) { // Instruction that determines the loop iterator is tagged
            m2 = m3*m4;  // This instruction is tagged
            m5 = m6-m7;  // perform many actions
            m8 = m9+m10; // This instruction is tagged and linked
        }
        n8 = n9+n10; // This instruction is tagged and linked
    }

For the above example of Table 2B, the outer and inner nested loop first and last instructions are tagged since the total nested loop instructions are many. And the outer and inner nested loop instructions that determine the loop iterations are tagged.

Table 2C
    for(n1 = 0; n1 <= 9; n1++) {
        n2 = n3*n4;
        n5 = n6-n7; // perform few actions
        for(m1 = 0; m1 <= 7; m1++) { // Instruction that determines the loop iterator is tagged
            m2 = m3*m4;  // This instruction is tagged
            m5 = m6-m7;  // perform very many actions
            m8 = m9+m10; // This instruction is tagged and linked
        }
        n8 = n9+n10;
    }

For the above example of Table 2C, the inner nested loop first and last instructions are tagged since the total inner nested loop instructions are very many. And the inner nested loop instruction that determines the loop iterator is tagged.

While shown with these representative examples for purposes of illustration, understand that embodiments are not so limited and other static-based analyses may be performed to identify instructions for tagging. Note also that in an embodiment, the number of resources available in hardware for dynamic profiling may be an input to the compiler, to enable the compiler to select an appropriate subset of instructions for dynamic profiling that adheres to hardware resource constraints.

As to binning instructions into three categories, note that tagged and non-tagged instructions can be determined during static analysis of compilation.
Conditionally tagged instructions are those instructions that cannot be binned into the tagged or non-tagged classification at compile time, since they rely on run-time values in order to be considered either tagged or non-tagged. In embodiments, these instructions may be classified as conditionally tagged instructions. Then during operation, run-time hardware may be configured to determine whether a conditionally tagged instruction should be tagged or not based on a run-time value. For instance, a loop iterator instruction may be identified as a conditionally tagged instruction when it depends on a run-time variable for which the programmer has not provided pragmas indicating the minimum value of the iterator. The run-time hardware may be configured to determine the value of the iterator expression and, based on, e.g., a software settable threshold, classify this conditionally tagged instruction as either tagged or non-tagged. This hardware, based on the result of the execution of the iterator instruction, may flip the tag from "conditionally tagged" to "tagged" if the iterator value is higher than the threshold, and from "conditionally tagged" to "non-tagged" otherwise. In an embodiment, this hardware is located within the processor execution circuitry.

Table 3 below shows example code including a conditionally tagged instruction.

Table 3
    x = y + z; // y and z are variables whose values are not known at compile time
    for(n1 = 0; n1 <= x; n1++) { // Instruction that determines the loop iterator is conditionally tagged
        n2 = n3*n4;  // This instruction is conditionally tagged
        n5 = n6-n7;  // perform many actions
        n8 = n9+n10; // This instruction is conditionally tagged and linked
    }

For the above example, since the value of x is not known at compile time, the instruction that determines the loop iterator is conditionally tagged. Also, the first and last instructions of the loop are conditionally tagged.
And the last instruction is linked to the first instruction in the loop to enable identification of the complete range of addresses between the first and last instructions.

Referring now to FIG. 10, shown is a flow diagram of a method in accordance with another embodiment of the present invention. More specifically, as illustrated in FIG. 10, method 1000 may be performed to statically analyze program code to identify instructions to be tagged as discussed herein. In one embodiment, method 1000 may be performed by a compiler, such as a static compiler that analyzes program code to be executed by a processor.

As illustrated, method 1000 begins by analyzing an incoming instruction (block 1005). Next it is determined whether this instruction is part of a loop within the code (diamond 1010). If not, no further analysis occurs for this instruction, and accordingly, an instruction counter for the analysis tool can be incremented (block 1015) to enable control to pass back to block 1005 for analysis of a next instruction. Note that while described in the context of method 1000 as considering whether an instruction is part of a loop, understand that this determination may also consider whether the instruction is part of a function or recursive code.

If it is determined that the instruction is part of a loop, control passes to diamond 1020 to determine whether it is part of a nested loop. If so, control passes next to diamond 1025 to determine whether the number of nested loop instructions within this nested loop is less than a nested loop threshold. Although the scope of the present invention is not limited in this regard, in one embodiment this nested loop threshold (which can be dynamically set in some cases) may be between approximately 5 and 10.

If it is determined that the number of nested loop instructions is less than this nested loop threshold, control passes to block 1030 where further analysis of this nested loop may be bypassed.
As such, control jumps to the end of this nested loop (block 1035). Thereafter, the instruction counter may be incremented (block 1040) so that the next instruction can be analyzed (as discussed above at block 1005).

Still with reference to FIG. 10, it is next determined whether the instruction is a conditional instruction of the loop (diamond 1050). If so, control passes to diamond 1055 to determine whether the variables associated with this conditional instruction are known at compile time. If so, control passes to block 1060 where the instruction may be identified as a tagged instruction. In an embodiment, a tag indicator may be associated with the instruction, which may be a single bit that is set (namely, to 1) to indicate that the instruction is a tagged instruction. Alternatively, two bits may be used, covering the three possibilities: tagged (01), non-tagged (00) and conditionally tagged (10). After tagging the instruction, control passes to block 1040, discussed above, to increment the instruction counter.

If instead it is determined that one or more variables of the conditional instruction are not known at compile time (and thus are to be determined at run time), control passes to block 1065 where this instruction may be conditionally tagged. In an embodiment, an instruction can be conditionally tagged by setting a conditional tag indicator of the instruction (a single bit, namely 1) or, as stated above, with two bits (10).

Still referring to FIG. 10, if the instruction is not identified as a conditional instruction, control passes to diamond 1070 to determine whether the instruction is the first instruction of the loop. If so, control passes to block 1075 where the instruction may be tagged. And if this first instruction is of a conditional loop, the instruction may be conditionally tagged.
Finally, if the instruction is not identified as a first instruction of the loop, control passes to diamond 1080 to determine whether the instruction is the last instruction of the loop. If so, control passes to block 1085, where this last instruction may be tagged and linked to the first instruction. And if this last instruction is of a conditional loop, the instruction may be conditionally tagged. Understand that while shown at this high level in the embodiment of FIG. 10, many variations and alternatives are possible.

In most cases, an N hot tagged instruction leads to a triplet of linked instructions representing a loop or nested loops. This triplet includes the loop iterator instruction, the first instruction in the loop body and the last instruction in the loop body. Given the triplet, there can be many instructions within the loop body that are not tagged but can be derived from the triplet. As will be described further below, a hint information consumer such as a caching structure may use this triplet to determine whether a specific instruction, although not tagged, is within a loop body. If so, that specific instruction may also be handled as an N hot instruction, e.g., with storage into a second instruction cache portion. This basically means that a triplet of tagged instructions representing a loop present in the N hot tagged instructions (as analyzed in the dynamic profiling module) can in fact lead to 3+L instructions that may be specially cached, where L is the total number of instructions in the loop body minus the 3 of the triplet. There are also cases where the N hot tagged instructions lead to a pair of linked instructions, such as a hard macro having start and end instructions. The same logic as for triplets applies to instructions that are within the start/end instructions of the hard macro. There are also cases where an N hot tagged instruction leads to a single instruction that is linked to no other instruction, representing a recursion.
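A hint consumer's range check against such a linked triplet might look like the following sketch. The function names, the tuple layout and the use of integer addresses are assumptions for illustration; the derivation itself follows the linking of last to first instruction described above.

```python
def loop_range(first_addr, last_addr):
    """The last tagged instruction is linked to the first, so the complete
    address range of the loop body can be derived from the pair."""
    return range(first_addr, last_addr + 1)

def treat_as_hot(addr, hot_triplets):
    """An instruction that is not itself tagged is still handled as N hot
    if it falls inside the body of a hot loop triplet, given as
    (iterator_addr, first_addr, last_addr)."""
    for iterator_addr, first_addr, last_addr in hot_triplets:
        if addr == iterator_addr or addr in loop_range(first_addr, last_addr):
            return True
    return False
```

With a triplet covering addresses 0x104 through 0x120, all 3+L instructions of the loop body qualify even though only three were tagged.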
By tagging only a pair of instructions in the case of hard macros, and a triplet in the case of (nested) loops, the amount of dynamic profiling hardware is minimized.

As discussed above, a dynamic profiler as described herein may produce, at every clock tick, updated dynamic profile information that can potentially be used for caching the instructions that are most often used. Embodiments may apply filtering to this information, e.g., by low pass filtering the dynamic profile information in order to avoid any high frequency changes to the profile information, which could be outliers and can negatively impact instruction caching.

In one particular embodiment, a moving average filtering technique (or other low pass filter or boxcar filter) may be used to filter dynamic profile information. Such filtering may ensure that any spurious high frequency outliers are removed before providing the low pass filtered dynamic profile information as hint information, e.g., to an instruction caching structure. Coupling a low pass filter in the path between the dynamic profile module and a hint consumer such as an instruction caching structure may ensure that received hint information enhances ISA performance (e.g., enabling caching of the most often used instructions).

Referring now to FIG. 11, shown is a block diagram of a processor in accordance with an embodiment of the present invention. More specifically, processor 1100 may, in one embodiment, be a detail of a given core of a multicore processor in which at least some cores have dedicated DPM circuitry as described herein. Thus in the embodiment of FIG. 11, processor 1100 includes a dynamic profile module 1110 (which may be implemented similarly to DPM module 800 of FIG. 8). As seen, DPM 1110 may output hint information for the N-top tagged instruction addresses, e.g., per execution cycle. In turn, this hint information is provided to a filter 1115, which in an embodiment may be implemented as a low pass filter.
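As a rough illustration, one per-entry moving-average (boxcar) filter of the kind just introduced can be sketched as follows; the window length and class name are assumptions, not values from the design.

```python
from collections import deque

class MovingAverageFilter:
    """Boxcar/moving-average low pass filter for one entry's count stream;
    smooths out spurious high frequency outliers before the counts are
    forwarded as hint information."""

    def __init__(self, window=4):      # window length is an assumption
        self.samples = deque(maxlen=window)

    def update(self, count):
        # Append the new sample; deque(maxlen=...) drops the oldest one.
        self.samples.append(count)
        return sum(self.samples) / len(self.samples)
```

A constant input passes through un-attenuated, while a single-sample spike is strongly attenuated, matching the low pass characteristic described for FIG. 12 below.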
Low pass filter 1115 may filter this hint information to remove spurious effects. The resulting filtered hint information is provided to a cache structure 1120. In different embodiments, cache structure 1120 may be a given instruction cache.

Different types of cache memories and cache memory hierarchies are possible. However, for purposes of discussion herein, assume that cache memory 1120 includes a first portion 1122 and a second portion 1124, where first portion 1122 is a dedicated cache storage for the N hot instructions, and second cache portion 1124 is an instruction cache for non-tagged instructions and tagged instructions that are outside of the current N hot instructions. As further seen, in embodiments migrations of instructions between these two caches are possible, such that when a given tagged instruction within cache portion 1124 is elevated to one of the top N hot instructions, that cache line may be migrated to first cache portion 1122 (and similarly, a least used instruction that is replaced by this new incoming instruction is demoted to second cache portion 1124). Understand that to perform these migrations, and further to leverage the hint information, cache memory 1120 may include a cache controller 1126, which may perform these migrations of instructions between the two caches, as well as additional cache control functions.

As further illustrated in FIG. 11, processor 1100 further includes an execution circuit 1130, which may be implemented as one or more execution units to execute instructions received from cache memory 1120. Understand that while shown at this high level, many additional structures may be present within a processor and core of a processor in particular embodiments. However, such structures are not shown for ease of illustration in FIG. 11.

Referring to FIG. 12, shown is a graphical illustration of a frequency response of a moving average filter in accordance with an embodiment.
The filter characteristics indicated in the graph relate to 4, 8 and 16 sample moving averages, as shown at curves A, B and C of illustration 1200, respectively. Notice that in all three cases, the frequency response has a low pass characteristic. A constant component (zero frequency) in the input passes through the filter un-attenuated. Note that the boxcar filter attenuates increasingly away from the zero frequency position for all three curves. Any spurious high frequency outliers in the dynamic profile information may be filtered using a filtering technique as described herein.

In some embodiments, filter 1115 may be configured as a plurality of independent filters. For example, with the assumption that hint information for each of the top N hot instructions is output from DPM 1110 per clock cycle, an independent moving average filter may be provided per corresponding count entry per instruction. In an embodiment, the filter may be configured such that if the output of a given moving filter differs from the current count for that entry, then the hint information for that instruction is not passed to a consumer (e.g., an instruction caching structure). However, if the moving average filter output matches the current count for that entry, then the (positive) hint information for that instruction is passed to the instruction caching structure. In this way, if the instruction corresponding to positive hint information is already identified as a top N hot instruction within the instruction cache (e.g., as located in a special instruction cache or in a locked way), then no action is taken.
If however the instruction corresponding to the positive hint information is not present in the special instruction cache or way-locked cache, then that instruction is migrated there from the regular cache or non-way-locked location.

Embodiments may improve ISA performance dynamically via non-intrusive dynamic profiling as described herein, at least in part by caching the most often used instructions, and further maintaining these instructions such that they are not frequently evicted. As discussed above, non-intrusive dynamic profiling as described herein may provide information regarding: instructions that are most often used and not part of any (nested) loop body but possibly part of a recursion body or hard macros; and instructions that are most often used and are part of a loop body. Based on the linking information present in the last instruction of a loop body linking it to the first instruction of the loop, the complete range of addresses between the first and last instructions that constitute a loop can be determined. This information may be used to appropriately store more active instructions for longer durations in an instruction cache. As such, for the case of loop instructions where the first and last instructions are identified as most often used, non-tagged instructions of the loop body between these first and last instructions may be stored and controlled the same as these first and last instructions. Similar logic can be applied to hard macros for which the first and last instructions are identified as most often used.

In different embodiments, there may be multiple manners of implementing an instruction caching structure to leverage the profiling hint information described herein. In a first embodiment, one or more separate structures may be provided for the most often used instructions. In this embodiment, all instructions are fetched into a first or regular instruction cache regardless of whether they are most often used instructions or not.
Based on hint information from the dynamic profiling module, instructions that are most often used then may be cached in a second or special instruction cache. Specifically, the most often used instructions can be dynamically migrated from the regular instruction cache that receives fetched instructions to the special instruction cache. This approach ensures that the most often used instructions are not evicted in case there is a tsunami of sequential code execution that could potentially evict them.

In another embodiment, instead of providing special and regular instruction cache arrays, a single cache memory array may be provided for all instructions, with different portions allocated or locked for the most often used instructions. In one example, a set associative cache memory may be arranged with certain ways locked for use only with the most often used instructions. Such ways may be controlled so that the instructions stored therein are evicted only based on hint information received from the dynamic profiling module (and not based on least recently used or other conventional cache eviction schemes). With this configuration, with certain ways allocated for the most often used instructions, all instructions are fetched and inserted into non-locked ways. Based on hint information from the dynamic profiling module, the cache structure can migrate the most often used instructions from the non-reserved ways to the reserved ways, thereby protecting the most often used instructions from a potential tsunami of sequential code execution. In either case, dynamic hint information from the dynamic profiling module may be used to identify which set of instructions to specially cache and to protect them from eviction.

In yet other embodiments, a cache structure may include a separate storage for decoded instructions (referred to as a decoded instruction cache or decoded streaming buffer).
Such separate storage may be used to store decoded instructions that are often used, so that front end units such as instruction fetch and decode stages can be bypassed. Embodiments may control a decoded instruction storage to store only the N hot decoded instructions, to improve hit rate.

Eviction from the special instruction cache, or from a locked way of an instruction cache, occurs only when the cache is full and new hint information (for a new hot instruction) arrives from the dynamic profiling module. In an embodiment, the size of the special instruction cache, or the number of ways of an instruction cache locked for storing most often used instructions, may be set at a maximum of N or multiples of N (where N is the top N hot tagged instructions). Understand that in other cases, the expense of a dynamic profiling module can be avoided by directly using compiler tagging to cache tagged instructions based on static analysis. However, in the interest of adding the benefits of dynamic profiling, a potentially smaller sized cache may be used to ensure access to the most used instructions.

Referring now to FIG. 13, shown is a flow diagram of a method in accordance with yet another embodiment of the present invention. Method 1300 is a method for controlling storage of hot instructions within a cache memory such that they may be retained, or are more likely to be maintained, within the cache memory to reduce the performance and power consumption penalties of cache misses for such instructions. As shown in FIG. 13, method 1300 may be performed, e.g., by control logic of a caching structure. While in some embodiments method 1300 may be performed by a cache controller of the cache memory, in other cases a dedicated tagged instruction manager of the cache memory may perform method 1300 (and in some cases may be an FSM or other control logic, e.g., implemented within the cache controller itself).

As illustrated, method 1300 begins by receiving hint information from a dynamic profiling circuit (block 1310).
In an embodiment, this hint information may include address information and corresponding counts, e.g., of the top N instructions, to thus identify to the cache memory the most active instructions. Next, control passes to block 1320 where an instruction is received in the instruction cache. For example, this instruction may be received as a result of an instruction fetch, prefetch or so forth. Note that the ordering of blocks 1310 and 1320 may be flipped in certain cases.

In any event, control passes to block 1330 where this instruction is stored in a first instruction cache portion. That is, in embodiments described herein, a caching structure that is to leverage the hint information can be controlled to provide different portions associated with tagged and non-tagged instructions. For example, different memory arrays may be provided for, at least, the top N hot instructions. In other examples, these separate cache portions may be implemented as certain dedicated ways of sets of the cache memory reserved for storage of tagged instructions.

In any event, at block 1330 this instruction is stored in a first cache portion, where this first cache portion is associated with non-tagged instructions. Next, control passes to diamond 1340 to determine whether this instruction is associated with a top N instruction. This determination may be based on a comparison of address information of this instruction to address information of the hint information. Note that if the instruction itself is one of the top N hot instructions, a match occurs. In other cases, this determination can be based on determining that the instruction, although not tagged itself, is within a loop associated with tagged instructions.

If it is determined that this instruction is not associated with a top N instruction, no further operations occur with regard to this instruction within the cache, and thus this instruction remains in the first instruction cache portion.
Otherwise, if it is determined that this instruction is associated with a top N instruction, control passes to block 1350 where the instruction may be migrated to a second instruction cache portion. As described above, this second cache portion may be a separate memory array dedicated for hot instructions or a given way of a set dedicated to such hot instructions. As part of this migration it may be determined whether this second instruction cache portion is full (diamond 1360). If so, control passes to block 1370 where a less used instruction is migrated from this second cache portion to the first instruction cache portion. From both of diamond 1360 and block 1370, control passes to block 1380 where the instruction is stored in the second instruction cache portion. Understand that while shown at this high level in the embodiment of FIG. 13, many variations and alternatives are possible. As discussed above, in some cases a dynamic profiling module may be provided within or associated with each core of a multicore processor. In other cases, such circuitry can be shared for use by multiple cores or other processing engines, to provide a solution for an efficient dynamic profiling infrastructure. With one or more shared dynamic profiling modules as described herein, each core, when it employs the dynamic profiling infrastructure, will reach a steady state, e.g., with respect to benefiting from an increased instruction cache hit rate based on the hint information provided by the dynamic profiling infrastructure. In embodiments, this steady state can be used as a trigger condition to either turn off the dynamic profiling module or switch use of the dynamic profiling infrastructure to another core or other processing engine of the SoC or other processor. Since the dynamic profiling infrastructure is independent of the processor architecture, it can be seamlessly used as dynamic profiling infrastructure for any processor architecture.
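The FIG. 13 flow — store in a first portion (block 1330), match against the hint addresses (diamond 1340), migrate to a second portion (block 1350), and displace a less used hot instruction when that portion is full (diamond 1360 and blocks 1370-1380) — can be sketched in Python. This is an illustrative model only; the names (`SplitInstructionCache`, `insert`, `receive_hints`) are assumptions, and the real mechanism is hardware control logic, not software.

```python
class SplitInstructionCache:
    """Sketch of the FIG. 13 flow: a first portion for non-tagged
    instructions and a size-limited second portion reserved for the
    top-N hot instructions identified by the hint information."""

    def __init__(self, hot_capacity):
        self.first = set()              # non-hot instructions
        self.second = {}                # hot address -> use count
        self.hot_capacity = hot_capacity
        self.hints = set()              # addresses from the DPM

    def receive_hints(self, addresses):           # block 1310
        self.hints = set(addresses)

    def insert(self, addr):
        self.first.add(addr)                      # block 1330
        if addr in self.hints:                    # diamond 1340
            self._migrate_to_second(addr)         # block 1350

    def _migrate_to_second(self, addr):
        self.first.discard(addr)
        if len(self.second) >= self.hot_capacity:  # diamond 1360
            # Migrate the least-used hot instruction back (block 1370).
            victim = min(self.second, key=self.second.get)
            del self.second[victim]
            self.first.add(victim)
        self.second[addr] = self.second.get(addr, 0)  # block 1380
```

Instructions in the second portion are thus shielded from the normal eviction path until newer hint information displaces them.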
In this way, homogeneous and heterogeneous processor architectures may benefit from efficient reuse of a dynamic profiling infrastructure. In an embodiment, a core, when it has an instruction cache hit rate that falls below a certain threshold, may be configured to issue a request to use the shared dynamic profiling infrastructure. To this end, a request queue may be provided to store these requests. In turn, the dynamic profiling infrastructure may access this request queue (which may be present in a control logic of the DPM, in an embodiment) to identify a given core or other processing element to select for servicing. In some embodiments, a priority technique may be used in which a core can issue a request with a given priority level based on a level of its instruction cache hit rate. And in turn, the shared dynamic profiling infrastructure may include priority determination logic (e.g., within the DPM control logic) to choose an appropriate core (or other processor) for use of the infrastructure based at least in part on the priority levels. Referring now to FIG. 14, shown is a block diagram of a multicore processor in accordance with an embodiment of the present invention. More specifically, processor 1400 includes a plurality of processor cores 1425_0-1425_N. In different implementations, these cores may be homogeneous cores or heterogeneous cores or a mix of cores having different ISA capabilities, power consumption levels, microarchitectures and so forth. As further illustrated in FIG. 14, each core 1425 is associated with a corresponding caching structure 1420_0-1420_N. While shown separately from the processor cores for ease of illustration, understand that in various embodiments caching structures 1420, which may be instruction caches as described herein, may be present within processor cores 1425.
In other aspects, the arrangement of processor 1400, further including at least one dynamic profile module 1410 and a corresponding low pass filter 1415, may be similar to the arrangement described above in FIG. 11. Understand that while shown with these limited components within the multicore processor, many more components also may be present, including accelerators (which also may leverage the dynamic profiling module), a power controller, memory control circuitry, graphics circuitry and so forth. And in some cases, multiple dynamic profiling modules may be present. As further illustrated in FIG. 14, to enable reuse of the dynamic profiling infrastructure as described herein, embodiments may locate dynamic profiling module 1410 (and filter 1415) external to one or more processor cores 1425 of multicore processor 1400, which may leverage use of this common circuitry. In different embodiments, multiple cores may share dynamic profiling module 1410 at the same time, e.g., by allocating certain entries for use by particular cores. In other embodiments, sharing of the dynamic profiling infrastructure may occur in a time multiplexed manner, such that a single core is allowed to access this infrastructure at any given time period. Although the scope of the present invention is not limited in this regard, in one embodiment a core may be allowed to access the dynamic profiling infrastructure until it reaches a steady state of operation, such as where its instruction cache is generally fully populated and a relatively low instruction cache miss rate occurs. In example embodiments, this steady state operation may correspond to an instruction cache miss rate of between approximately 5 and 10%. In another embodiment, a core may send a request signal to request use of the dynamic profiling infrastructure when its instruction cache miss rate goes above a given threshold percentage, e.g., 20%.
Of course, in other cases other sharing techniques, such as a round robin approach or a priority-based approach (e.g., based at least in part on instruction cache miss rate), among other techniques, are possible. Referring now to FIG. 15, shown is a flow diagram of a method in accordance with a still further embodiment of the present invention. As shown in FIG. 15, method 1500 may be used by control logic of a multicore processor to arbitrate access to a dynamic profiling circuit as described herein. As an example, this control logic may be implemented within the dynamic profiling circuit itself. In other cases, a resource controller may be used to arbitrate access to a dynamic profiling module. As illustrated, method 1500 begins by identifying a core to be granted access to the dynamic profiling circuit (block 1510). As described above, different manners of arbitrating access may include a time multiplexed manner, a priority basis such as according to instruction cache miss rate, or so forth. Control next passes to block 1520 where the dynamic profiling circuit can be configured for the identified core. For example, this configuration may include dynamically controlling switching of the dynamic profiling circuit to the given core to enable communication of hint information to the core from the dynamic profiling circuit, as well as to provide an instruction stream that includes addresses (with links in the case of (nested) loops and hard macros) from the core to the dynamic profiling circuit. Still with reference to FIG. 15, next tagged instruction information may be received from the identified core (block 1530). That is, an instruction stream of tagged instructions may be received from the identified core. Understand that in other cases, all instructions may be provided and the dynamic profiling circuit can parse out non-tagged instructions. However, efficiency may be improved by only sending tagged instructions to the DPM.
Next at block 1540 the dynamic profiling circuit can process the tagged instruction information (such as discussed above with regard to FIG. 9) to identify the top N hot instructions that are undergoing execution within the core. Based on such processing, hint information is provided to the identified core (block 1550). As a core begins to run in steady state while leveraging hint information as described herein to dynamically control its instruction cache memory, its instruction cache hit rate may increase over time. Thus as illustrated, at diamond 1560 it can be determined whether this instruction cache hit rate exceeds a given hit rate threshold. Although the scope of the present invention is not limited in this regard, in an embodiment this hit rate threshold may be between approximately 90 and 95%. If the core instruction cache hit rate does not exceed this hit rate threshold, this is an indication that execution of the program on the core has not reached steady state. As such, additional use of the dynamic profiling circuit to generate hint information for the identified core may continue at block 1530. Otherwise, if it is determined that the instruction cache hit rate for the core exceeds the hit rate threshold, this is an indication that the dynamic profiling circuit can be used by another core, e.g., according to a given arbitration policy. Understand that while shown at this high level in the embodiment of FIG.
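The FIG. 15 arbitration loop — grant the shared profiling circuit on a priority basis (block 1510), service the chosen core (blocks 1520-1550), and release the circuit once its instruction cache hit rate crosses the steady-state threshold (diamond 1560) — can be modeled in Python. This is a behavioral sketch under stated assumptions: the function name, the simple additive hit-rate "boost" standing in for the benefit of hint information, and the 0.90 threshold are all illustrative, not taken from this disclosure.

```python
def arbitrate_dpm(hit_rates, threshold=0.90, boost=0.05):
    """Model of FIG. 15 arbitration. hit_rates maps core id to its
    current instruction cache hit rate. The core with the lowest hit
    rate is serviced first (a priority-based policy); the modeled
    benefit of hint information raises its hit rate by `boost` per
    service step until it exceeds the steady-state threshold, at which
    point the circuit moves on to another core."""
    grants = []
    rates = dict(hit_rates)
    while any(r < threshold for r in rates.values()):
        # Priority-based selection: worst hit rate is serviced first
        # (block 1510).
        core = min(rates, key=rates.get)
        grants.append(core)
        # Blocks 1530-1550 repeat until diamond 1560 is satisfied.
        while rates[core] < threshold:
            rates[core] = min(1.0, rates[core] + boost)
    return grants, rates
```

A round robin policy would replace only the `min` selection step; the steady-state release test at diamond 1560 is unchanged.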
15, many variations and alternatives are possible. The following examples pertain to further embodiments. In one embodiment, a processor includes: a storage having a plurality of entries each to store address information of an instruction and a count value of a number of executions of the instruction during execution of code including the instruction; and at least one comparator circuit to compare a count value from one of the plurality of entries to a threshold value, where the instruction comprises a tagged instruction of the code, the tagged instruction tagged by a static compiler prior to execution of the code. In an example, the processor further comprises a control circuit to output hint information to identify at least one instruction associated with at least one of the plurality of entries having a count value greater than the threshold value. In an example, the processor further comprises a threshold storage to store the threshold value, where the threshold value is to be dynamically updated based on a minimum count value of a first set of the plurality of entries. In an example, the processor further comprises a dynamic profile circuit including the storage and the control circuit. In an example, the processor further comprises a cache memory coupled to the dynamic profile circuit to receive the hint information, the cache memory including a cache controller to control eviction of a cache line of the cache memory based at least in part on the hint information. In an example, the cache memory includes a plurality of ways, where a first subset of the plurality of ways are to be reserved for at least a subset of tagged instructions of the code. In an example, the cache memory includes a first storage array to store at least non-tagged instructions and a second storage array to store at least a subset of tagged instructions. In an example, the processor comprises a multicore processor having a plurality of cores, where the dynamic profile circuit comprises a separate
circuit of the multicore processor to be dynamically shared by at least some of the plurality of cores. In an example, the storage includes N×M entries, and where the control circuit is to store information associated with the N most frequently accessed tagged instructions of the code in a first subset of the N×M entries. In an example, the control circuit is to output the hint information associated with the N most frequently accessed tagged instructions. In another example, a method comprises: determining whether an instruction to be executed in a processor is a part of a code loop; and responsive to determining that the instruction is part of the code loop, tagging the instruction to enable the instruction to be profiled in a dynamic profiler of the processor during execution of the code loop on at least one core of the processor. In an example, the method further comprises analyzing the instruction via a static compiler to determine whether the instruction is part of the code loop. In an example, the code loop comprises one of a function and recursive code. In an example, the method further comprises: determining that the instruction is part of a nested loop; and not tagging the instruction if a number of instructions of the nested loop is less than a first threshold. In an example, the method further comprises conditionally tagging the instruction if one or more variables of the instruction is not known at compile time, the instruction comprising a conditional instruction of the code loop. In an example, the method further comprises tagging the instruction and linking the instruction to another instruction of the code loop, where the instruction is a last instruction of the code loop. In another example, a method comprises: storing an instruction in a first portion of an instruction cache associated with a core of a processor; receiving, in a controller associated with the instruction cache, hint information from a dynamic profiling circuit of the processor; determining
whether the instruction is associated with at least some of the hint information; and responsive to determining that the instruction is associated with the at least some of the hint information, migrating the instruction from the first portion of the instruction cache to a second portion of the instruction cache. In an example, the method further comprises preventing the instruction from eviction from the second portion of the instruction cache until the instruction is not associated with the at least some of the hint information received from the dynamic profiling circuit. In an example, the second portion of the instruction cache comprises a dedicated memory array for storage of often accessed instructions. In an example, the first portion of the instruction cache comprises a first plurality of ways of the instruction cache and the second portion of the instruction cache comprises a second plurality of ways of the instruction cache, the second plurality of ways locked for storage of instructions associated with the hint information. In another example, a computer readable medium including instructions is to perform the method of any of the above examples. In another example, a computer readable medium including data is to be used by at least one machine to fabricate at least one integrated circuit to perform the method of any one of the above examples. In another example, an apparatus comprises means for performing the method of any one of the above examples. In another example, an apparatus comprises: storage means having a plurality of entries for storing address information of an instruction and a count value of a number of executions of the instruction during execution of code including the instruction; and comparison means for comparing a count value from one of the plurality of entries to a threshold value, where the instruction comprises a tagged instruction of the code, the tagged instruction tagged by a static compiler prior to execution of the code. In an
example, the apparatus further comprises control means for outputting hint information to identify at least one instruction associated with at least one of the plurality of entries having a count value greater than the threshold value. In an example, the apparatus further comprises threshold storage means for storing the threshold value, where the threshold value is to be dynamically updated based on a minimum count value of a first set of the plurality of entries. In an example, the apparatus further comprises cache means for receiving the hint information, the cache means including a cache control means for evicting a cache line of the cache means based at least in part on the hint information. Understand that various combinations of the above examples are possible. Note that the terms "circuit" and "circuitry" are used interchangeably herein. As used herein, these terms and the term "logic" are used to refer to, alone or in any combination, analog circuitry, digital circuitry, hard wired circuitry, programmable circuitry, processor circuitry, microcontroller circuitry, hardware logic circuitry, state machine circuitry and/or any other type of physical hardware component. Embodiments may be used in many different types of systems. For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein. Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions.
Embodiments also may be implemented in data and may be stored on a non-transitory storage medium, which, if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations. Still further embodiments may be implemented in a computer readable storage medium including information that, when manufactured into a SoC or other processor, is to configure the SoC or other processor to perform one or more operations. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
Capacitors including a built-in electric field, and related devices and assemblies, are disclosed herein. In some embodiments, a capacitor may include a top electrode region, a bottom electrode region, and a dielectric region between and in contact with the top electrode region and the bottom electrode region, where the dielectric region includes a perovskite material, and the top electrode region has a different material structure from the bottom electrode region.
1. An integrated circuit (IC) die, comprising:
a capacitor, including:
a top electrode region;
a bottom electrode region; and
a dielectric region between and in contact with the top electrode region and the bottom electrode region, wherein the dielectric region comprises a perovskite material and the top electrode region has a different material structure than the bottom electrode region.
2. The IC die of claim 1, wherein the top electrode region has a different material composition than the bottom electrode region.
3. The IC die of claim 2, wherein the top electrode region comprises germanium, lanthanum, hafnium, zirconium, yttrium, barium, lead, calcium, magnesium, beryllium, or lithium.
4. The IC die of claim 3, wherein the top electrode region has a thickness between 0.1 nanometers and 5 nanometers.
5. The IC die of any of claims 3-4, wherein the top electrode region is a first top electrode region, the capacitor further comprising a second top electrode region, the first top electrode region being between the second top electrode region and the dielectric region, and the second top electrode region has a different material composition than the first top electrode region.
6. The IC die of claim 5, wherein the second top electrode region comprises ruthenium, iridium, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
7. The IC die of claim 5, wherein the second top electrode region has a thickness between 5 nanometers and 50 nanometers.
8. The IC die of claim 1, wherein the top electrode region has the same material composition as the bottom electrode region.
9. The IC die of claim 8, wherein the top electrode region has a different crystal phase than the bottom electrode region.
10.
The IC die of any of claims 8-9, wherein the top electrode region has a different defect density than the bottom electrode region.
11. The IC die of claim 1, wherein the capacitor is in a metallization stack of the IC die.
12. An integrated circuit (IC) die, comprising:
a capacitor, wherein the capacitor includes a top electrode region, a bottom electrode region, and a dielectric region between and in contact with the top electrode region and the bottom electrode region, wherein the dielectric region includes a polar dielectric material, and the top electrode region has a different material structure than the bottom electrode region.
13. The IC die of claim 12, wherein the bottom electrode region is a first bottom electrode region, the capacitor further comprising a second bottom electrode region, the first bottom electrode region being between the second bottom electrode region and the dielectric region, and the second bottom electrode region has a different material composition than the first bottom electrode region.
14. The IC die of claim 13, wherein the capacitor further comprises a third bottom electrode region, the second bottom electrode region being between the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region has a different material composition than the second bottom electrode region.
15. The IC die of claim 14, wherein the capacitor further comprises a fourth bottom electrode region, the third bottom electrode region being between the second bottom electrode region and the fourth bottom electrode region, and the fourth bottom electrode region has a different material composition than the third bottom electrode region.
16.
The IC die of any of claims 12-15, wherein the capacitor is between a topmost metal layer and a second topmost metal layer of a metallization stack.
17. An integrated circuit (IC) die, comprising:
a capacitor, wherein the capacitor includes a top electrode region, a bottom electrode region, and a dielectric region between and in contact with the top electrode region and the bottom electrode region, wherein the dielectric region includes strontium, barium, bismuth, or lead, and the top electrode region has a different material structure than the bottom electrode region.
18. The IC die of claim 17, wherein the top electrode region has a different defect density than the bottom electrode region.
19. The IC die of claim 18, wherein the difference in defect density between the top electrode region and the bottom electrode region is between 1e16 defects per cubic centimeter and 1e20 defects per cubic centimeter.
20. The IC die of any of claims 17-19, wherein the capacitor is a decoupling capacitor.
Capacitors with Built-In Electric Fields
Background
Capacitors are used in many different electronic device designs. In some devices, for example, decoupling capacitors may be part of an integrated circuit (IC) die, package substrate, and/or circuit board.
Brief Description of the Drawings
Embodiments will be readily understood from the following detailed description taken in conjunction with the accompanying drawings. For ease of description, like reference numerals refer to like structural elements. In the figures of the accompanying drawings, various embodiments are shown by way of example and not by way of limitation.
FIGS. 1-4 are side cross-sectional views of example capacitors with built-in electric fields in accordance with various embodiments.
FIG. 5 is a top view of a wafer and die that may include capacitors in accordance with any of the embodiments disclosed herein.
FIG. 6 is a side cross-sectional view of an integrated circuit (IC) device that may include a capacitor in accordance with any of the embodiments disclosed herein.
FIG. 7 is a side cross-sectional view of an IC package that may include a capacitor in accordance with any of the embodiments disclosed herein.
FIG. 8 is a side cross-sectional view of an IC device assembly that may include a capacitor in accordance with any of the embodiments disclosed herein.
FIG. 9 is a block diagram of an example electrical device that may include a capacitor according to any of the embodiments disclosed herein.
Detailed Description
Capacitors including built-in electric fields, and related devices and assemblies, are disclosed herein.
In some embodiments, a capacitor may include a top electrode region, a bottom electrode region, and a dielectric region between and in contact with the top and bottom electrode regions, wherein the dielectric region includes a perovskite material, and the top electrode region has a different material structure than the bottom electrode region. The capacitors disclosed herein may achieve capacitance densities higher than those achievable by conventional capacitors by including a built-in electric field across a polar dielectric (e.g., a perovskite oxide) to shift the maximum value of the voltage-dependent capacitance density of the polar dielectric capacitor into a target voltage range. In some embodiments, for example, the capacitors disclosed herein can achieve capacitance densities that are substantially greater than those of existing capacitors in the absolute value voltage range between 0.5 volts and 1.9 volts. The capacitors disclosed herein can be fabricated under back-end processing conditions (e.g., at temperatures less than 400 degrees Celsius), and thus can be readily incorporated into metallization stacks of integrated circuit (IC) dies (e.g., as on-die metal-insulator-metal (MIM) capacitors). In some embodiments, on-die MIM capacitors according to any of the embodiments disclosed herein may be used as decoupling capacitors to stabilize the supply voltage of the die (e.g., by mitigating voltage drops during load switching); such on-die decoupling capacitors can be used in conjunction with on-package decoupling capacitors and/or on-board decoupling capacitors in IC assemblies, as discussed further below. In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like reference numerals refer to like parts throughout, and in which embodiments are shown by way of illustration.
It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description should not be taken in a limiting sense. Various operations may be described as multiple discrete acts or operations in a manner that is most helpful in understanding the subject matter disclosed herein. However, the order of description should not be construed to imply that the operations are necessarily order-dependent. In particular, the operations may be performed out of the order presented. The described operations may be performed in a different order than the described embodiments. In additional embodiments, various additional operations may be performed, and/or described operations may be omitted. For purposes of this disclosure, the phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For purposes of this disclosure, the phrases "A, B and/or C" and "A, B or C" mean (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The drawings are not necessarily to scale. Although many of the drawings show straight-line structures with flat walls and right-angled corners, this is for illustrative purposes only, and actual devices fabricated using these techniques will exhibit rounded corners, surface roughness, and other features. This specification uses the phrases "in one embodiment" or "in an embodiment," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, "package" and "IC package" are synonymous. When used to describe a range of sizes, the phrase "between X and Y" means a range that includes both X and Y.
The terms "top" and "bottom" are used herein for ease of description and should not be construed as requiring a particular orientation unless otherwise stated. FIG. 1 is a side cross-sectional view of an example capacitor 100 with a built-in electric field in accordance with various embodiments. The capacitor 100 may include a top electrode 102, a bottom electrode 106, and a dielectric region 104 between the top electrode 102 and the bottom electrode 106. Dielectric region 104 may include a polar dielectric material, such as a perovskite (e.g., a perovskite oxide). In some embodiments, dielectric region 104 may include strontium. For example, the dielectric region 104 may include strontium, titanium, and oxygen (e.g., in the form of strontium titanate); strontium, barium, titanium, and oxygen (e.g., in the form of barium strontium titanate); or strontium, lead, titanium, and oxygen (e.g., in the form of lead strontium titanate). In some embodiments, the dielectric region 104 may include barium. For example, the dielectric region 104 may include barium, titanium, and oxygen (e.g., in the form of barium titanate); or strontium, barium, titanium, and oxygen (e.g., in the form of barium strontium titanate). In some embodiments, the dielectric region 104 may include bismuth. For example, the dielectric region 104 may include bismuth, iron, and oxygen (e.g., in the form of bismuth ferrite); or lanthanum, bismuth, and oxygen (e.g., in the form of lanthanum bismuth oxide). In some embodiments, the dielectric region 104 may include lanthanum. For example, the dielectric region 104 may include lanthanum, bismuth, and oxygen (e.g., in the form of lanthanum bismuth oxide). In some embodiments, the dielectric region 104 may include lead. For example, the dielectric region 104 may include lead, titanium, and oxygen (e.g., in the form of lead titanate); or strontium, lead, titanium, and oxygen (e.g., in the form of lead strontium titanate).
The thickness of dielectric region 104 may take any suitable value. For example, in some embodiments, the thickness of the dielectric region 104 may be between 4 nanometers and 20 nanometers. In some embodiments, the capacitance density of the capacitors 100 disclosed herein may have a peak in the absolute value range between 0.5 volts and 1.9 volts (i.e., between 0.5 volts and 1.9 volts or between -0.5 volts and -1.9 volts). In some embodiments, the capacitance density of the capacitors 100 disclosed herein may have a peak in an absolute value range between 0.9 volts and 1.9 volts. In the capacitor 100, the top electrode 102 and/or the bottom electrode 106 may be selected so as to impart a built-in electric field to the capacitor 100. For example, the top electrode 102 and the bottom electrode 106 may have different material structures. As used herein, two materials are said to have different "material structures" if they differ in material composition, crystalline phase, defect density, and/or other structural properties in a manner that induces an electric field between the materials when the materials are separated by an intervening dielectric material. In some embodiments, as discussed further below, top electrode 102 and bottom electrode 106 may each include one or more regions that include different materials; thus, the top electrode 102 and the bottom electrode 106 may be said to have different material structures if at least some regions of the top electrode 102 have a different material structure than at least some regions of the bottom electrode 106.
For example, in some embodiments, the top electrode 102 and the bottom electrode 106 may be said to have different material structures when the material of the top electrode 102 closest to the dielectric region 104 has a different material structure than the material of the bottom electrode 106 closest to the dielectric region 104. As mentioned above, in some embodiments, the top electrode 102 and the bottom electrode 106 may have different defect densities that induce an electric field therebetween. For example, the difference between the defect density of the top electrode 102 and the defect density of the bottom electrode 106 may be between 1e16 and 1e20 per cubic centimeter. Such a difference in defect density does not arise unintentionally or accidentally during conventional fabrication processes, but is the result of the deliberate choice of fabrication conditions and materials to ensure an atypically large difference in defect density between the top electrode 102 and the bottom electrode 106. In such an embodiment, the top electrode 102 may take the form of any of the top electrodes 102 discussed below with reference to FIG. 2, and the bottom electrode 106 may take the form of any of the bottom electrodes 106 discussed below with reference to FIG. 3. As mentioned above, in some embodiments, the top electrode 102 and the bottom electrode 106 of the capacitor 100 may have the same material composition, but may have different crystalline phases that induce an electric field therebetween. FIG. 2 is a side cross-sectional view of an example of such an embodiment. In particular, the top electrode 102 is provided by material 108, and the bottom electrode 106 is provided by material 110, material 112, and material 114. Material 112 may be between materials 110 and 114, as shown, and the dielectric region 104 may be between and in contact with material 108 (of the top electrode 102) and material 110 (of the bottom electrode 106).
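The magnitude of the built-in field produced by such electrode asymmetry can be roughly gauged by dividing the effective zero-bias potential offset between the electrodes by the dielectric thickness. The sketch below is a hedged illustration only; the 0.5 V offset is a hypothetical example value, not a figure stated in this disclosure.

```python
# Illustrative sketch: the average zero-bias built-in field across a thin
# dielectric is roughly the effective potential offset between dissimilar
# electrodes divided by the dielectric thickness. The 0.5 V offset below is
# a hypothetical example value, not taken from this disclosure.
def built_in_field_mv_per_cm(delta_v, thickness_nm):
    """Average built-in field (MV/cm) for a potential offset delta_v (volts)
    dropped across a dielectric of thickness thickness_nm (nanometers)."""
    field_v_per_m = delta_v / (thickness_nm * 1e-9)
    return field_v_per_m / 1e8  # 1 MV/cm == 1e8 V/m

# A 0.5 V electrode asymmetry across a 10 nm dielectric:
print(built_in_field_mv_per_cm(0.5, 10), "MV/cm")  # 0.5 MV/cm
```

This simple ratio makes clear why thinner dielectric regions (toward the 4 nm end of the range above) see a proportionally stronger built-in field for the same electrode asymmetry.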
In some embodiments, material 108 and material 110 may have the same material composition but different crystalline phases. For example, material 108 may be a metal with a face-centered cubic (fcc) structure (e.g., ruthenium metal with an fcc structure), while material 110 may be the same metal but with a hexagonal close-packed (hcp) structure (e.g., ruthenium metal with an hcp structure), or vice versa. Material 108 and material 110 may include any suitable materials. In some embodiments, materials 108 and 110 may include ruthenium; iridium; copper; titanium and nitrogen (e.g., in the form of titanium nitride); titanium; gold; platinum; silver; cobalt; molybdenum; strontium, ruthenium, and oxygen (e.g., in the form of strontium ruthenium oxide); iridium and oxygen (e.g., in the form of iridium oxide); ruthenium and oxygen (e.g., in the form of ruthenium oxide); lanthanum, nickel, and oxygen (e.g., in the form of lanthanum nickel oxide); or tungsten. The thicknesses of materials 108 and 110 may take any suitable values. For example, in some embodiments, the thickness of material 108 may be between 5 nanometers and 50 nanometers, and the thickness of material 110 may be between 5 nanometers and 50 nanometers. The material 112 of the bottom electrode 106 of the capacitor 100 of FIG. 2 may have a different material structure (e.g., a different material composition) than the material 110. Material 112 may include ruthenium; iridium; strontium, ruthenium, and oxygen (e.g., in the form of strontium ruthenium oxide); iridium and oxygen (e.g., in the form of iridium oxide); ruthenium and oxygen (e.g., in the form of ruthenium oxide); tantalum; copper; titanium and nitrogen (e.g., in the form of titanium nitride); titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten. The thickness of material 112 may take any suitable value. For example, in some embodiments, the thickness of material 112 may be between 5 nanometers and 50 nanometers. The material 114 of the bottom electrode 106 of the capacitor 100 of FIG.
2 may have a different material structure (e.g., a different material composition) than the material 112. The material 114 may include ruthenium; iridium; tantalum; copper; titanium and nitrogen (e.g., in the form of titanium nitride); titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten. The thickness of material 114 may take any suitable value. For example, in some embodiments, the thickness of material 114 may be between 0.5 nanometers and 10 nanometers. As mentioned above, in some embodiments, the top electrode 102 and the bottom electrode 106 of the capacitor 100 may have different material compositions that induce an electric field therebetween. FIGS. 3 and 4 are side cross-sectional views of examples of such embodiments. In the embodiment of FIG. 3, the top electrode 102 may include material 116 and material 118, wherein material 118 is between and in contact with material 116 and the dielectric region 104. The material 116 may take the form of any of the materials 108 disclosed herein (e.g., as discussed above with reference to FIG. 2). The material 118 of the capacitor 100 of FIG. 3 may provide a dipole layer at the interface between the material 116 and the dielectric region 104, and thus may create a strong charge difference between the top electrode 102 and the bottom electrode 106 of the capacitor 100 of FIG. 3. In some embodiments, the material 118 may include germanium, lanthanum, hafnium, zirconium, yttrium, barium, bismuth, lead, calcium, magnesium, beryllium, or lithium. In some particular embodiments, the dielectric region 104 may include strontium, titanium, and oxygen (e.g., in the form of strontium titanate), and the material 118 may include lanthanum, hafnium, zirconium, yttrium, barium, bismuth, lead, calcium, magnesium, beryllium, or lithium. The thickness of material 118 may take any suitable value. For example, in some embodiments, the thickness of material 118 may be between 0.1 nanometers and 5 nanometers. The bottom electrode 106 of the capacitor 100 of FIG.
3 may include material 120, material 124, and material 122 between materials 120 and 124. Materials 120, 122, and 124 may take the form of any of the embodiments of materials 110, 112, and 114, respectively, discussed above with reference to FIG. 2. Additionally, in some embodiments, material 120 may include strontium, ruthenium, and oxygen (e.g., in the form of strontium ruthenium oxide); iridium and oxygen (e.g., in the form of iridium oxide); or ruthenium and oxygen (e.g., in the form of ruthenium oxide). In the embodiment of FIG. 4, the bottom electrode 106 may include material 118. In particular, in the capacitor 100 of FIG. 4, the top electrode 102 may include material 126. The material 126 may take the form of any of the materials 108 disclosed herein (e.g., as discussed above with reference to FIG. 2). The bottom electrode 106 of the capacitor 100 of FIG. 4 may include material 128 and material 118, wherein material 118 is between and in contact with material 128 and the dielectric region 104. The bottom electrode 106 of the capacitor 100 of FIG. 4 may also include material 130 and material 132, where material 130 is between materials 128 and 132. The material 118 of the capacitor 100 of FIG. 4 may provide a dipole layer at the interface between the material 128 and the dielectric region 104, and thus may create a strong charge difference between the top electrode 102 and the bottom electrode 106 of the capacitor 100 of FIG. 4. In some embodiments, the material 118 may include germanium, lanthanum, hafnium, zirconium, yttrium, barium, bismuth, lead, calcium, magnesium, beryllium, or lithium. In some particular embodiments, the dielectric region 104 may include strontium, titanium, and oxygen (e.g., in the form of strontium titanate), and the material 118 may include lanthanum, hafnium, zirconium, yttrium, barium, bismuth, lead, calcium, magnesium, beryllium, or lithium. The thickness of material 118 may take any suitable value.
For example, in some embodiments, the thickness of material 118 may be between 0.1 nanometers and 5 nanometers. Various ones of the features of the capacitors 100 disclosed herein may be combined in a single capacitor 100. For example, a capacitor 100 having a difference in defect density between the top electrode 102 and the bottom electrode 106 (e.g., as discussed above with reference to FIG. 1) may also include a top electrode 102 having a different crystalline phase than the bottom electrode 106 (e.g., as discussed above with reference to FIG. 2), and/or may also include a top electrode 102 having a different material composition than the bottom electrode 106 (e.g., as discussed above with reference to FIGS. 3 and 4). Similarly, a capacitor 100 having a top electrode 102 with a different crystalline phase than the bottom electrode 106 (e.g., as discussed above with reference to FIG. 2) may also include a top electrode 102 having a different material composition than the bottom electrode 106 (e.g., as discussed above with reference to FIGS. 3 and 4). The capacitors 100 disclosed herein may be included in any suitable electronic component. FIGS. 5-9 illustrate various examples of apparatuses that may include any of the capacitors 100 disclosed herein. FIG. 5 is a top view of a wafer 1500 and a die 1502 that may include one or more capacitors 100 in accordance with any of the embodiments disclosed herein.
The wafer 1500 may be composed of semiconductor material and may include one or more dies 1502 having IC structures formed on a surface of the wafer 1500. Each of the dies 1502 may be a repeating unit of a semiconductor product that includes any suitable IC. After fabrication of the semiconductor product is complete, the wafer 1500 may undergo a singulation process in which the dies 1502 are separated from one another to provide discrete "chips" of the semiconductor product. A die 1502 may include one or more capacitors 100 (e.g., as discussed below with reference to FIG. 6), one or more transistors (e.g., one or more of the transistors 1640 of FIG. 6, discussed below), and/or supporting circuitry to route electrical signals to the transistors, as well as any other IC components. In some embodiments, the wafer 1500 or the die 1502 may include a memory device (e.g., a random access memory (RAM) device, such as a static RAM (SRAM) device, a magnetic RAM (MRAM) device, a resistive RAM (RRAM) device, a conductive-bridging RAM (CBRAM) device, etc.), a logic device (e.g., an AND, OR, NAND, or NOR gate), or any other suitable circuit element. Multiple ones of these devices may be combined on a single die 1502. For example, a memory array formed by multiple memory devices may be formed on a same die 1502 as a processing device (e.g., the processing device 1802 of FIG. 9) or other logic that is configured to store information in the memory devices or execute instructions stored in the memory array. FIG. 6 is a side cross-sectional view of an IC device 1600 that may include one or more capacitors 100 in accordance with any of the embodiments disclosed herein. One or more of the IC devices 1600 may be included in one or more dies 1502 (FIG. 5). The IC device 1600 may be formed on a substrate 1602 (e.g., the wafer 1500 of FIG. 5) and may be included in a die (e.g., the die 1502 of FIG. 5).
The substrate 1602 may be a semiconductor substrate composed of semiconductor material systems including, for example, n-type or p-type material systems (or a combination of both). The substrate 1602 may include, for example, a crystalline substrate formed using a bulk silicon or a silicon-on-insulator (SOI) substructure. In some embodiments, the substrate 1602 may be formed using alternative materials, which may or may not be combined with silicon, including but not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, or gallium antimonide. Further materials classified as group II-VI, III-V, or IV may also be used to form the substrate 1602. Although a few examples of materials from which the substrate 1602 may be formed are described here, any material that may serve as a foundation for an IC device 1600 may be used. The substrate 1602 may be part of a singulated die (e.g., the die 1502 of FIG. 5) or a wafer (e.g., the wafer 1500 of FIG. 5). The IC device 1600 may include one or more device layers 1604 disposed on the substrate 1602. The device layer 1604 may include features of one or more transistors 1640 (e.g., metal oxide semiconductor field-effect transistors (MOSFETs)) formed on the substrate 1602. The device layer 1604 may include, for example, one or more source and/or drain (S/D) regions 1620, a gate 1622 to control current flow between the S/D regions 1620 in the transistors 1640, and one or more S/D contacts 1624 to route electrical signals to/from the S/D regions 1620. The transistors 1640 may include additional features not depicted for the sake of clarity, such as device isolation regions, gate contacts, and the like. The transistors 1640 are not limited to the type and configuration depicted in FIG. 6 and may include a wide variety of other types and configurations, such as planar transistors, non-planar transistors, or a combination of both.
Planar transistors may include bipolar junction transistors (BJTs), heterojunction bipolar transistors (HBTs), or high-electron-mobility transistors (HEMTs). Non-planar transistors may include FinFET transistors, such as double-gate transistors or tri-gate transistors, and wraparound or all-around gate transistors, such as nanoribbon and nanowire transistors. Each transistor 1640 may include a gate 1622 formed of at least two layers: a gate dielectric and a gate electrode. The gate dielectric may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide, silicon carbide, and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric to improve its quality when a high-k material is used. The gate electrode may be formed on the gate dielectric and may include at least one p-type work function metal or n-type work function metal, depending on whether the transistor 1640 is to be a p-type metal oxide semiconductor (PMOS) transistor or an n-type metal oxide semiconductor (NMOS) transistor. In some embodiments, the gate electrode may consist of a stack of two or more metal layers, where one or more of the metal layers are work function metal layers and at least one of the metal layers is a fill metal layer.
Additional metal layers, such as barrier layers, may be included for other purposes. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, conductive metal oxides (e.g., ruthenium oxide), and any of the metals discussed below with reference to an NMOS transistor (e.g., for work function tuning). For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, carbides of these metals (e.g., hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide), and any of the metals discussed above with reference to a PMOS transistor (e.g., for work function tuning). In some embodiments, when viewed as a cross-section of the transistor 1640 along the source-channel-drain direction, the gate electrode may consist of a U-shaped structure that includes a bottom portion substantially parallel to the top surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In other embodiments, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers. In some embodiments, a pair of sidewall spacers may be formed on opposing sides of the gate stack to bracket the gate stack. The sidewall spacers may be formed from materials such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride.
Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In some embodiments, multiple spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack. The S/D regions 1620 may be formed within the substrate 1602 adjacent to the gate 1622 of each transistor 1640. The S/D regions 1620 may be formed using, for example, an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the substrate 1602 to form the S/D regions 1620. An annealing process that activates the dopants and causes them to diffuse farther into the substrate 1602 may follow the ion-implantation process. In the latter process, the substrate 1602 may first be etched to form recesses at the locations of the S/D regions 1620. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the S/D regions 1620. In some implementations, the S/D regions 1620 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In some embodiments, the S/D regions 1620 may be formed using one or more alternative semiconductor materials, such as germanium or a group III-V material or alloy. In further embodiments, one or more layers of metal and/or metal alloys may be used to form the S/D regions 1620. Electrical signals, such as power and/or input/output (I/O) signals, may be routed to and/or from the devices (e.g., the transistors 1640) of the device layer 1604 through one or more interconnect layers disposed on the device layer 1604 (illustrated in FIG. 6 as interconnect layers 1606-1610).
For example, the conductive features of the device layer 1604 (e.g., the gate 1622 and the S/D contacts 1624) may be electrically coupled with the interconnect structures 1628 of the interconnect layers 1606-1610. The one or more interconnect layers 1606-1610 may form a metallization stack (also referred to as an "ILD stack") 1619 of the IC device 1600. In some embodiments, one or more capacitors 100 may be disposed in one or more of the interconnect layers 1606-1610, in accordance with any of the techniques disclosed herein. FIG. 6 illustrates a single capacitor 100 between metal lines in the interconnect layers 1608 and 1610 for illustration purposes, but any number and arrangement of capacitors 100 may be included in any one or more of the layers in the metallization stack 1619. A capacitor 100 included in the metallization stack 1619 may be referred to as a "back-end" capacitor 100. One or more capacitors 100 in the metallization stack 1619 may be coupled to any suitable ones of the devices in the device layer 1604, and/or to one or more of the conductive contacts 1636 (discussed below). The interconnect structures 1628 may be arranged within the interconnect layers 1606-1610 to route electrical signals according to a wide variety of designs (in particular, the arrangement is not limited to the particular configuration of interconnect structures 1628 depicted in FIG. 6). Although a particular number of interconnect layers 1606-1610 is depicted in FIG. 6, embodiments of the present disclosure include IC devices having more or fewer interconnect layers than depicted. In some embodiments, the interconnect structures 1628 may include lines 1628a and/or vias 1628b filled with an electrically conductive material such as a metal. The lines 1628a may be arranged to route electrical signals in a direction of a plane that is substantially parallel to the surface of the substrate 1602 upon which the device layer 1604 is formed. For example, the lines 1628a may route electrical signals in a direction in and out of the page from the perspective of FIG. 6.
The vias 1628b may be arranged to route electrical signals in a direction of a plane that is substantially perpendicular to the surface of the substrate 1602 upon which the device layer 1604 is formed. In some embodiments, the vias 1628b may electrically couple lines 1628a of different interconnect layers 1606-1610 together. The interconnect layers 1606-1610 may include a dielectric material 1626 disposed between the interconnect structures 1628, as shown in FIG. 6. In some embodiments, the dielectric material 1626 disposed between the interconnect structures 1628 in different ones of the interconnect layers 1606-1610 may have different compositions; in other embodiments, the composition of the dielectric material 1626 between different interconnect layers 1606-1610 may be the same. A first interconnect layer 1606 may be formed above the device layer 1604. In some embodiments, the first interconnect layer 1606 may include lines 1628a and/or vias 1628b, as shown. The lines 1628a of the first interconnect layer 1606 may be coupled with contacts (e.g., the S/D contacts 1624) of the device layer 1604. A second interconnect layer 1608 may be formed above the first interconnect layer 1606. In some embodiments, the second interconnect layer 1608 may include vias 1628b to couple the lines 1628a of the second interconnect layer 1608 with the lines 1628a of the first interconnect layer 1606. Although the lines 1628a and the vias 1628b are structurally delineated with a line within each interconnect layer (e.g., within the second interconnect layer 1608) for the sake of clarity, the lines 1628a and the vias 1628b may be structurally and/or materially contiguous in some embodiments (e.g., simultaneously filled during a dual-damascene process). A third interconnect layer 1610 (and additional interconnect layers, as desired) may be formed in succession on the second interconnect layer 1608 according to similar techniques and configurations described in connection with the second interconnect layer 1608 or the first interconnect layer 1606.
In some embodiments, the interconnect layers that are "higher up" in the metallization stack 1619 in the IC device 1600 (i.e., farther away from the device layer 1604) may be thicker. The IC device 1600 may include a solder resist material 1634 (e.g., polyimide or similar material) and one or more conductive contacts 1636 formed on the interconnect layers 1606-1610. In FIG. 6, the conductive contacts 1636 are illustrated as taking the form of bond pads. The conductive contacts 1636 may be electrically coupled with the interconnect structures 1628 and configured to route the electrical signals of the transistor(s) 1640 to other external devices. For example, solder bonds may be formed on the one or more conductive contacts 1636 to mechanically and/or electrically couple a chip including the IC device 1600 with another component (e.g., a circuit board). The IC device 1600 may include additional or alternate structures to route the electrical signals from the interconnect layers 1606-1610; for example, the conductive contacts 1636 may include other analogous features (e.g., posts) that route the electrical signals to external components. FIG. 7 is a side cross-sectional view of an example IC package 1650 that may include one or more capacitors 100 in accordance with any of the embodiments disclosed herein. In some embodiments, the IC package 1650 may be a system-in-package (SiP). The package substrate 1652 may be formed of a dielectric material (e.g., a ceramic, a buildup film, an epoxy film having filler particles therein, glass, an organic material, an inorganic material, a combination of organic and inorganic materials, embedded portions formed of different materials, etc.) and may have conductive pathways extending through the dielectric material between the face 1672 and the face 1674, or between different locations on the face 1672, and/or between different locations on the face 1674. These conductive pathways may take the form of any of the interconnect structures 1628 discussed above with reference to FIG. 6.
In some embodiments, the package substrate 1652 may include one or more decoupling capacitors (e.g., surface-mounted to the package substrate 1652, otherwise coupled to the package substrate 1652, or embedded in the package substrate 1652), in addition to the one or more decoupling capacitors 100 in the die 1656. The package substrate 1652 may include conductive contacts 1663 that are coupled to conductive pathways (not shown) through the package substrate 1652, allowing circuitry within the die 1656 and/or the interposer 1657 to electrically couple to various ones of the conductive contacts 1664 (or to other devices included in the package substrate 1652, not shown). The IC package 1650 may include an interposer 1657 coupled to the package substrate 1652 via conductive contacts 1661 of the interposer 1657, first-level interconnects 1665, and conductive contacts 1663 of the package substrate 1652. The first-level interconnects 1665 illustrated in FIG. 7 are solder bumps, but any suitable first-level interconnects 1665 may be used. In some embodiments, no interposer 1657 may be included in the IC package 1650; more generally, one or more dies 1656 may be coupled to the package substrate 1652 by any suitable structure (e.g., a silicon bridge, an organic bridge, one or more waveguides, one or more interposers, wire bonds, etc.). The IC package 1650 may include one or more dies 1656 coupled to the interposer 1657 via conductive contacts 1654 of the dies 1656, first-level interconnects 1658, and conductive contacts 1660 of the interposer 1657. The conductive contacts 1660 may be coupled to conductive pathways (not shown) through the interposer 1657, allowing circuitry within the dies 1656 to electrically couple to various ones of the conductive contacts 1661 (or to other devices included in the interposer 1657, not shown). The first-level interconnects 1658 illustrated in FIG. 7 are solder bumps, but any suitable first-level interconnects 1658 may be used.
As used herein, a "conductive contact" may refer to a portion of electrically conductive material (e.g., metal) serving as an interface between different components; a conductive contact may be recessed in, flush with, or extending away from a surface of a component, and may take any suitable form (e.g., a conductive pad or socket). In some embodiments, an underfill material 1666 may be disposed between the package substrate 1652 and the interposer 1657 around the first-level interconnects 1665, and a mold compound 1668 may be disposed around the dies 1656 and the interposer 1657 and in contact with the package substrate 1652. In some embodiments, the underfill material 1666 may be the same as the mold compound 1668. Example materials that may be used for the underfill material 1666 and the mold compound 1668 are epoxy mold materials, as suitable. Second-level interconnects 1670 may be coupled to the conductive contacts 1664. The second-level interconnects 1670 illustrated in FIG. 7 are solder balls (e.g., for a ball grid array arrangement), but any suitable second-level interconnects 1670 may be used (e.g., pins in a pin grid array arrangement or lands in a land grid array arrangement). The second-level interconnects 1670 may be used to couple the IC package 1650 to another component, such as a circuit board (e.g., a motherboard), an interposer, or another IC package, as known in the art and as discussed below with reference to FIG. 8. The dies 1656 may take the form of any of the embodiments of the die 1502 discussed herein (e.g., may include any of the embodiments of the IC device 1600). In embodiments in which the IC package 1650 includes multiple dies 1656, the IC package 1650 may be referred to as a multi-chip package (MCP). The dies 1656 may include circuitry to perform any desired functionality. For example, one or more of the dies 1656 may be logic dies (e.g., silicon-based dies), and one or more of the dies 1656 may be memory dies (e.g., high-bandwidth memory).
In some embodiments, the dies 1656 may include one or more capacitors 100 (e.g., as discussed above with reference to FIGS. 5 and 6). Although the IC package 1650 illustrated in FIG. 7 is a flip-chip package, other package architectures may be used. For example, the IC package 1650 may be a ball grid array (BGA) package, such as an embedded wafer-level ball grid array (eWLB) package. In another example, the IC package 1650 may be a wafer-level chip-scale package (WLCSP) or a panel fan-out (FO) package. Although two dies 1656 are illustrated in the IC package 1650 of FIG. 7, an IC package 1650 may include any desired number of dies 1656. An IC package 1650 may include additional passive components, such as surface-mounted resistors, capacitors, and inductors, disposed on the first face 1672 or the second face 1674 of the package substrate 1652, or on either face of the interposer 1657. More generally, an IC package 1650 may include any other active or passive components known in the art. FIG. 8 is a side cross-sectional view of an IC device assembly 1700 that may include one or more IC packages or other electronic components (e.g., dies) including one or more capacitors 100 in accordance with any of the embodiments disclosed herein. The IC device assembly 1700 includes a number of components disposed on a circuit board 1702 (which may be, e.g., a motherboard). The IC device assembly 1700 includes components disposed on a first face 1740 of the circuit board 1702 and an opposing second face 1742 of the circuit board 1702; generally, components may be disposed on one or both faces 1740 and 1742. Any of the IC packages discussed below with reference to the IC device assembly 1700 may take the form of any of the embodiments of the IC package 1650 discussed above with reference to FIG. 7 (e.g., may include one or more capacitors 100 in a die). In some embodiments, in addition to the one or more decoupling capacitors 100 in a die of the IC device assembly 1700 (e.g., as discussed above with reference to FIG.
6), and in some embodiments in addition to one or more decoupling capacitors included in a package substrate of the IC device assembly 1700 (e.g., as discussed above with reference to FIG. 7), the circuit board 1702 may include one or more decoupling capacitors (e.g., surface-mounted to the circuit board 1702, otherwise coupled to the circuit board 1702, or embedded in the circuit board 1702). In some embodiments, the circuit board 1702 may be a printed circuit board (PCB) including multiple metal layers separated from one another by layers of dielectric material and interconnected by electrically conductive vias. Any one or more of the metal layers may be formed in a desired circuit pattern to route electrical signals (optionally in conjunction with other metal layers) between the components coupled to the circuit board 1702. In other embodiments, the circuit board 1702 may be a non-PCB substrate. The IC device assembly 1700 illustrated in FIG. 8 includes a package-on-interposer structure 1736 coupled to the first face 1740 of the circuit board 1702 by coupling components 1716. The coupling components 1716 may electrically and mechanically couple the package-on-interposer structure 1736 to the circuit board 1702, and may include solder balls (as shown in FIG. 8), male and female portions of a socket, an adhesive, an underfill material, and/or any other suitable electrical and/or mechanical coupling structure. The package-on-interposer structure 1736 may include an IC package 1720 coupled to a package interposer 1704 by coupling components 1718. The coupling components 1718 may take any suitable form for the application, such as the forms discussed above with reference to the coupling components 1716. Although a single IC package 1720 is shown in FIG. 8, multiple IC packages may be coupled to the package interposer 1704; indeed, additional interposers may be coupled to the package interposer 1704. The package interposer 1704 may provide an intervening substrate used to bridge the circuit board 1702 and the IC package 1720.
The IC package 1720 may be or include, for example, a die (e.g., the die 1502 of FIG. 5), an IC device (e.g., the IC device 1600 of FIG. 6), or any other suitable component. Generally, the package interposer 1704 may spread connections to a wider pitch or reroute connections to different connections. For example, the package interposer 1704 may couple the IC package 1720 (e.g., a die) to a set of BGA conductive contacts of the coupling components 1716 for coupling to the circuit board 1702. In the embodiment illustrated in FIG. 8, the IC package 1720 and the circuit board 1702 are attached to opposing sides of the package interposer 1704; in other embodiments, the IC package 1720 and the circuit board 1702 may be attached to the same side of the package interposer 1704. In some embodiments, three or more components may be interconnected by way of the package interposer 1704.

In some embodiments, the package interposer 1704 may be formed as a PCB including multiple metal layers separated from one another by layers of dielectric material and interconnected by conductive vias. In some embodiments, the package interposer 1704 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, an epoxy resin with inorganic fillers, a ceramic material, or a polymer material such as polyimide. In some embodiments, the package interposer 1704 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The package interposer 1704 may include metal lines 1710 and vias 1708, including but not limited to through-silicon vias (TSVs) 1706. The package interposer 1704 may further include embedded devices 1714, including both passive and active devices. Such devices may include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, electrostatic discharge (ESD) devices, and memory devices. More complex devices such as radio frequency devices, power amplifiers, power management devices, antennas, arrays, sensors, and microelectromechanical systems (MEMS) devices may also be formed on the package interposer 1704. The package-on-interposer structure 1736 may take the form of any of the package-on-interposer structures known in the art.

The IC device assembly 1700 may include an IC package 1724 coupled to the first side 1740 of the circuit board 1702 by coupling components 1722. The coupling components 1722 may take the form of any of the embodiments discussed above with reference to the coupling components 1716, and the IC package 1724 may take the form of any of the embodiments discussed above with reference to the IC package 1720.

The IC device assembly 1700 illustrated in FIG. 8 includes a package-on-package structure 1734 coupled to the second side 1742 of the circuit board 1702 by coupling components 1728. The package-on-package structure 1734 may include an IC package 1726 and an IC package 1732 coupled together by coupling components 1730 such that the IC package 1726 is disposed between the circuit board 1702 and the IC package 1732. The coupling components 1728 and 1730 may take the form of any of the embodiments of the coupling components 1716 discussed above, and the IC packages 1726 and 1732 may take the form of any of the embodiments of the IC package 1720 discussed above. The package-on-package structure 1734 may be configured in accordance with any of the package-on-package structures known in the art.

FIG. 9 is a block diagram of an example electrical device 1800 that may include one or more capacitors 100, in accordance with any of the embodiments disclosed herein. For example, any suitable ones of the components of the electrical device 1800 may include one or more of the IC device assemblies 1700, IC packages 1650, IC devices 1600, or dies 1502 disclosed herein. A number of components are illustrated in FIG. 9 as included in the electrical device 1800, but any one or more of these components may be omitted or duplicated, as suitable for the application.
In some embodiments, some or all of the components included in the electrical device 1800 may be attached to one or more motherboards. In some embodiments, some or all of these components are fabricated onto a single system-on-chip (SoC) die.

Additionally, in various embodiments, the electrical device 1800 may not include one or more of the components illustrated in FIG. 9, but the electrical device 1800 may include interface circuitry for coupling to the one or more components. For example, the electrical device 1800 may not include a display device 1806, but may include display device interface circuitry (e.g., a connector and driver circuitry) to which a display device 1806 may be coupled. In another set of examples, the electrical device 1800 may not include an audio input device 1824 or an audio output device 1808, but may include audio input or output device interface circuitry (e.g., connectors and supporting circuitry) to which an audio input device 1824 or an audio output device 1808 may be coupled.

The electrical device 1800 may include a processing device 1802 (e.g., one or more processing devices). As used herein, the term "processing device" or "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processing device 1802 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), central processing units (CPUs), graphics processing units (GPUs), cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices.
The electrical device 1800 may include a memory 1804, which may itself include one or more memory devices such as volatile memory (e.g., dynamic random access memory (DRAM)), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 1804 may include memory that shares a die with the processing device 1802. This memory may be used as cache memory and may include embedded dynamic random access memory (eDRAM) or spin transfer torque magnetic random access memory (STT-MRAM).

In some embodiments, the electrical device 1800 may include a communication chip 1812 (e.g., one or more communication chips). For example, the communication chip 1812 may be configured for managing wireless communications for the transfer of data to and from the electrical device 1800. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a nonsolid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.

The communication chip 1812 may implement any of a number of wireless standards or protocols, including but not limited to Institute for Electrical and Electronics Engineers (IEEE) standards including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., IEEE 802.16-2005), the Long-Term Evolution (LTE) project, along with any amendments, updates, and/or revisions (e.g., the LTE-Advanced project, the Ultra Mobile Broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible Broadband Wireless Access (BWA) networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards.
The communication chip 1812 may operate in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 1812 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 1812 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), and derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication chip 1812 may operate in accordance with other wireless protocols in other embodiments. The electrical device 1800 may include an antenna 1822 to facilitate wireless communications and/or to receive other wireless communications (such as AM or FM radio transmissions).

In some embodiments, the communication chip 1812 may manage wired communications, such as electrical, optical, or any other suitable communication protocols (e.g., Ethernet). As noted above, the communication chip 1812 may include multiple communication chips. For instance, a first communication chip 1812 may be dedicated to shorter-range wireless communications such as Wi-Fi or Bluetooth, and a second communication chip 1812 may be dedicated to longer-range wireless communications such as global positioning system (GPS), EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, or others. In some embodiments, a first communication chip 1812 may be dedicated to wireless communications, and a second communication chip 1812 may be dedicated to wired communications.

The electrical device 1800 may include battery/power circuitry 1814.
The battery/power circuitry 1814 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the electrical device 1800 to an energy source separate from the electrical device 1800 (e.g., AC line power).

The electrical device 1800 may include a display device 1806 (or corresponding interface circuitry, as discussed above). The display device 1806 may include any visual indicators, such as a heads-up display, a computer monitor, a projector, a touchscreen display, a liquid crystal display (LCD), a light-emitting diode display, or a flat panel display.

The electrical device 1800 may include an audio output device 1808 (or corresponding interface circuitry, as discussed above). The audio output device 1808 may include any device that generates an audible indicator, such as speakers, headsets, or earbuds.

The electrical device 1800 may include an audio input device 1824 (or corresponding interface circuitry, as discussed above). The audio input device 1824 may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, or digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output).

The electrical device 1800 may include a GPS device 1818 (or corresponding interface circuitry, as discussed above). The GPS device 1818 may be in communication with a satellite-based system and may receive a location of the electrical device 1800, as known in the art.

The electrical device 1800 may include other output devices 1810 (or corresponding interface circuitry, as discussed above). Examples of the other output devices 1810 may include audio codecs, video codecs, printers, wired or wireless transmitters for providing information to other devices, or additional storage devices.

The electrical device 1800 may include other input devices 1820 (or corresponding interface circuitry, as discussed above).
Examples of the other input devices 1820 may include accelerometers, gyroscopes, compasses, image capture devices, keyboards, cursor control devices such as a mouse, a stylus, a touchpad, bar code readers, Quick Response (QR) code readers, any sensors, or radio frequency identification (RFID) readers.

The electrical device 1800 may have any desired form factor, such as a handheld or mobile electrical device (e.g., a cell phone, a smartphone, a mobile internet device, a music player, a tablet computer, a laptop computer, a netbook computer, an ultrabook computer, a personal digital assistant (PDA), an ultra-mobile personal computer, etc.), a desktop electrical device, a server device or other networked computing component, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a vehicle control unit, a digital camera, a digital video recorder, or a wearable electrical device. In some embodiments, the electrical device 1800 may be any other electronic device that processes data.

The following paragraphs provide various examples of the embodiments disclosed herein.

Example 1 is an integrated circuit (IC) die including a capacitor, wherein the capacitor includes a top electrode region; a bottom electrode region; and a dielectric region between and in contact with the top electrode region and the bottom electrode region; wherein the dielectric region includes a perovskite material, and the top electrode region has a different material structure than the bottom electrode region.
Example 2 includes the subject matter of Example 1, and further specifies that the top electrode region has a different material composition than the bottom electrode region.
Example 3 includes the subject matter of Example 2, and further specifies that the top electrode region includes germanium, lanthanum, hafnium, zirconium, yttrium, barium, lead, calcium, magnesium, beryllium, or lithium.
Example 4 includes the subject matter of Example 3, and further specifies that the top electrode region has a thickness between 0.1 nanometers and 5 nanometers.
Example 5 includes the subject matter of any of Examples 3-4, and further specifies that the top electrode region is a first top electrode region, the capacitor further comprising a second top electrode region, the first top electrode region being between the second top electrode region and the dielectric region, and the second top electrode region having a different material composition than the first top electrode region.
Example 6 includes the subject matter of Example 5, and further specifies that the second top electrode region includes ruthenium, iridium, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 7 includes the subject matter of any of Examples 5-6, and further specifies that the second top electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 8 includes the subject matter of any of Examples 3-7, and further specifies that the bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 9 includes the subject matter of any of Examples 3-8, and further specifies that the bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 10 includes the subject matter of any of Examples 3-9, and further specifies that the bottom electrode region is a first bottom electrode region, and the capacitor further includes a second bottom electrode region, wherein the first bottom electrode region is between the dielectric region and the second bottom electrode region, and the second bottom electrode region has a different material composition than the first bottom electrode region.
Example 11 includes the subject matter of Example 10, and further specifies that the second bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen,
iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 12 includes the subject matter of any of Examples 10-11, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 13 includes the subject matter of any of Examples 10-12, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being between the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region having a different material composition than the second bottom electrode region.
Example 14 includes the subject matter of Example 13, and further specifies that the third bottom electrode region includes tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, ruthenium, iridium, or tungsten.
Example 15 includes the subject matter of any of Examples 13-14, and further specifies that the third bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.
Example 16 includes the subject matter of Example 2, and further specifies that the bottom electrode region includes germanium, lanthanum, hafnium, zirconium, yttrium, barium, lead, calcium, magnesium, beryllium, or lithium.
Example 17 includes the subject matter of Example 16, and further specifies that the bottom electrode region has a thickness between 0.1 nanometers and 5 nanometers.
Example 18 includes the subject matter of any of Examples 16-17, and further specifies that the bottom electrode region is a first bottom electrode region, the capacitor further comprising a second bottom electrode region, the first bottom electrode region being between the second bottom electrode region and the dielectric region, and the second bottom electrode region having a different material composition than the first bottom electrode region.
Example 19 includes the subject matter of Example 18, and further specifies that the second bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 20 includes the subject matter of any of Examples 18-19, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 21 includes the subject matter of any of Examples 18-20, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being between the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region having a different material composition than the second bottom electrode region.
Example 22 includes the subject matter of Example 21, and further specifies that the third bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 23 includes the subject matter of any of Examples 21-22, and further specifies that the third bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 24 includes the subject matter of any of Examples 21-23, and further specifies that the capacitor further includes a fourth bottom electrode region, the third bottom electrode region being between the second bottom electrode region and the fourth bottom electrode region, and the fourth bottom electrode region having a different material composition than the third bottom electrode region.
Example 25 includes the subject matter of Example 24, and further specifies that the fourth bottom electrode region includes tantalum, copper, titanium and nitrogen,
titanium, gold, platinum, silver, cobalt, molybdenum, ruthenium, iridium, or tungsten.
Example 26 includes the subject matter of any of Examples 24-25, and further specifies that the fourth bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.
Example 27 includes the subject matter of Example 1, and further specifies that the top electrode region has the same material composition as the bottom electrode region.
Example 28 includes the subject matter of Example 27, and further specifies that the top electrode region has a different crystal phase than the bottom electrode region.
Example 29 includes the subject matter of any of Examples 27-28, and further specifies that the top electrode region has a crystal phase that is one of face-centered cubic and hexagonal close packing, and the bottom electrode region has the other of face-centered cubic and hexagonal close packing.
Example 30 includes the subject matter of any of Examples 27-29, and further specifies that the top electrode region has a different defect density than the bottom electrode region.
Example 31 includes the subject matter of Example 30, and further specifies that the difference in defect density between the top electrode region and the bottom electrode region is between 1e16 defects per cubic centimeter and 1e20 defects per cubic centimeter.
Example 32 includes the subject matter of any of Examples 30-31, and further specifies that the top electrode region includes ruthenium, iridium, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 33 includes the subject matter of any of Examples 30-32, and further specifies that the top electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 34 includes the subject matter of any of Examples 30-33, and further specifies that the bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 35 includes the subject matter of any of Examples 30-34, and further specifies that the bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 36 includes the subject matter of any of Examples 30-35, and further specifies that the bottom electrode region is a first bottom electrode region, and the capacitor further includes a second bottom electrode region, wherein the first bottom electrode region is between the dielectric region and the second bottom electrode region, and the second bottom electrode region has a different material composition than the first bottom electrode region.
Example 37 includes the subject matter of Example 36, and further specifies that the second bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 38 includes the subject matter of any of Examples 36-37, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 39 includes the subject matter of any of Examples 36-38, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being between the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region having a different material composition than the second bottom electrode region.
Example 40 includes the subject matter of Example 39, and further specifies that the third bottom electrode region includes tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, ruthenium, iridium, or tungsten.
Example 41 includes the subject matter of any of Examples
39-40, and further specifies that the third bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.
Example 42 includes the subject matter of any of Examples 1-41, and further specifies that the capacitor is in a metallization stack of an IC die.
Example 43 includes the subject matter of any of Examples 1-42, and further specifies that the capacitor is a decoupling capacitor.
Example 44 includes the subject matter of any of Examples 1-43, and further specifies that the perovskite material includes strontium, titanium, and oxygen; barium, titanium, and oxygen; strontium, barium, titanium, and oxygen; bismuth, iron, and oxygen; lanthanum, bismuth, and oxygen; lead, titanium, and oxygen; or strontium, lead, titanium, and oxygen.
Example 45 includes the subject matter of any of Examples 1-44, and further specifies that the dielectric region has a thickness between 4 nanometers and 20 nanometers.
Example 46 is an integrated circuit (IC) die including a capacitor, wherein the capacitor includes a top electrode region; a bottom electrode region; and a dielectric region between and in contact with the top electrode region and the bottom electrode region; wherein the dielectric region includes a polar dielectric material, and the top electrode region has a different material structure than the bottom electrode region.
Example 47 includes the subject matter of Example 46, and further specifies that the top electrode region has a different material composition than the bottom electrode region.
Example 48 includes the subject matter of Example 47, and further specifies that the top electrode region includes germanium, lanthanum, hafnium, zirconium, yttrium, barium, lead, calcium, magnesium, beryllium, or lithium.
Example 49 includes the subject matter of Example 48, and further specifies that the top electrode region has a thickness between 0.1 nanometers and 5 nanometers.
Example 50 includes the subject matter of any of Examples 48-49, and further specifies that the top electrode region is a first top electrode region, the capacitor further comprising a second top electrode region, the first top electrode region being between the second top electrode region and the dielectric region, and the second top electrode region having a different material composition than the first top electrode region.
Example 51 includes the subject matter of Example 50, and further specifies that the second top electrode region includes ruthenium, iridium, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 52 includes the subject matter of any of Examples 50-51, and further specifies that the second top electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 53 includes the subject matter of any of Examples 48-52, and further specifies that the bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 54 includes the subject matter of any of Examples 48-53, and further specifies that the bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 55 includes the subject matter of any of Examples 48-54, and further specifies that the bottom electrode region is a first bottom electrode region, and the capacitor further includes a second bottom electrode region, wherein the first bottom electrode region is between the dielectric region and the second bottom electrode region, and the second bottom electrode region has a different material composition than the first bottom electrode region.
Example 56 includes the subject matter of Example 55, and further specifies that the second bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium
and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 57 includes the subject matter of any of Examples 55-56, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 58 includes the subject matter of any of Examples 55-57, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being between the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region having a different material composition than the second bottom electrode region.
Example 59 includes the subject matter of Example 58, and further specifies that the third bottom electrode region includes tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, ruthenium, iridium, or tungsten.
Example 60 includes the subject matter of any of Examples 58-59, and further specifies that the third bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.
Example 61 includes the subject matter of Example 47, and further specifies that the bottom electrode region includes germanium, lanthanum, hafnium, zirconium, yttrium, barium, lead, calcium, magnesium, beryllium, or lithium.
Example 62 includes the subject matter of Example 61, and further specifies that the bottom electrode region has a thickness between 0.1 nanometers and 5 nanometers.
Example 63 includes the subject matter of any of Examples 61-62, and further specifies that the bottom electrode region is a first bottom electrode region, the capacitor further comprising a second bottom electrode region, the first bottom electrode region being between the second bottom electrode region and the dielectric region, and the second bottom electrode region having a different material composition than the first bottom electrode region.
Example 64 includes the subject matter of Example 63, and further specifies that the second bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 65 includes the subject matter of any of Examples 63-64, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 66 includes the subject matter of any of Examples 63-65, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being between the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region having a different material composition than the second bottom electrode region.
Example 67 includes the subject matter of Example 66, and further specifies that the third bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 68 includes the subject matter of any of Examples 66-67, and further specifies that the third bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 69 includes the subject matter of any of Examples 66-68, and further specifies that the capacitor further includes a fourth bottom electrode region, the third bottom electrode region being between the second bottom electrode region and the fourth bottom electrode region, and the fourth bottom electrode region having a different material composition than the third bottom electrode region.
Example 70 includes the subject matter of Example 69, and further specifies that the fourth bottom electrode region includes tantalum, copper, titanium and nitrogen,
titanium, gold, platinum, silver, cobalt, molybdenum, ruthenium, iridium, or tungsten.
Example 71 includes the subject matter of any of Examples 69-70, and further specifies that the fourth bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.
Example 72 includes the subject matter of Example 46, and further specifies that the top electrode region has the same material composition as the bottom electrode region.
Example 73 includes the subject matter of Example 72, and further specifies that the top electrode region has a different crystal phase than the bottom electrode region.
Example 74 includes the subject matter of any of Examples 72-73, and further specifies that the top electrode region has a crystal phase that is one of face-centered cubic and hexagonal close packing, and the bottom electrode region has the other of face-centered cubic and hexagonal close packing.
Example 75 includes the subject matter of any of Examples 72-74, and further specifies that the top electrode region has a different defect density than the bottom electrode region.
Example 76 includes the subject matter of Example 75, and further specifies that the difference in defect density between the top electrode region and the bottom electrode region is between 1e16 defects per cubic centimeter and 1e20 defects per cubic centimeter.
Example 77 includes the subject matter of any of Examples 75-76, and further specifies that the top electrode region includes ruthenium, iridium, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 78 includes the subject matter of any of Examples 75-77, and further specifies that the top electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 79 includes the subject matter of any of Examples 75-78, and further specifies that the bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 80 includes the subject matter of any of Examples 75-79, and further specifies that the bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 81 includes the subject matter of any of Examples 75-80, and further specifies that the bottom electrode region is a first bottom electrode region, and the capacitor further includes a second bottom electrode region, wherein the first bottom electrode region is between the dielectric region and the second bottom electrode region, and the second bottom electrode region has a different material composition than the first bottom electrode region.
Example 82 includes the subject matter of Example 81, and further specifies that the second bottom electrode region includes ruthenium, iridium, strontium and ruthenium and oxygen, iridium and oxygen, ruthenium and oxygen, tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, or tungsten.
Example 83 includes the subject matter of any of Examples 81-82, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.
Example 84 includes the subject matter of any of Examples 81-83, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being between the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region having a different material composition than the second bottom electrode region.
Example 85 includes the subject matter of Example 84, and further specifies that the third bottom electrode region includes tantalum, copper, titanium and nitrogen, titanium, gold, platinum, silver, cobalt, molybdenum, ruthenium, iridium, or tungsten.
Example 86 includes the subject matter of any of Examples
84-85, and further specifies that the third bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.Example 87 includes the subject matter of any of Examples 46-86, and further specifies that the capacitor is in a metallization stack of an IC die.Example 88 includes the subject matter of any of Examples 46-87, and further specifies that the capacitor is between a topmost metal layer and a second topmost metal layer of the metallization stack.Example 89 includes the subject matter of any of Examples 46-88, and further specifies that the capacitor is a decoupling capacitor.Example 90 includes the subject matter of any of Examples 46-89, and further specifies that the polar dielectric material includes strontium, barium, bismuth, or lead.Example 91 includes the subject matter of any of Examples 46-90, and further specifies that the polar dielectric material is a perovskite material.Example 92 includes the subject matter of any of Examples 46-91, and further specifies that the polar dielectric material includes strontium, titanium, and oxygen; barium, titanium, and oxygen; strontium, barium, titanium, and oxygen; bismuth, iron, and oxygen; Lanthanum, bismuth, and oxygen; lead, titanium, and oxygen; or strontium, lead, titanium, and oxygen.Example 93 includes the subject matter of any of Examples 46-92, and further specifies that the dielectric region has a thickness between 4 nanometers and 20 nanometers.Example 94 is an integrated circuit (IC) die comprising: a capacitor, wherein the capacitor includes a top electrode region, a bottom electrode region, and a dielectric region between the top electrode region and the bottom electrode between and in contact with the top and bottom electrode regions, wherein the dielectric region includes strontium, barium, bismuth, or lead, and the top electrode region has a different material structure than the bottom electrode region .Example 95 includes the subject matter of Example 94, and further specifies 
that the top electrode region has a different material composition than the bottom electrode region.

Example 96 includes the subject matter of Example 95, and further specifies that the top electrode region includes germanium, lanthanum, hafnium, zirconium, yttrium, barium, lead, calcium, magnesium, beryllium, or lithium.

Example 97 includes the subject matter of Example 96, and further specifies that the top electrode region has a thickness between 0.1 nanometers and 5 nanometers.

Example 98 includes the subject matter of any of Examples 96-97, and further specifies that the top electrode region is a first top electrode region, the capacitor further comprising a second top electrode region, the first top electrode region being between the second top electrode region and the dielectric region, and the second top electrode region has a different material composition than the first top electrode region.

Example 99 includes the subject matter of Example 98, and further specifies that the second top electrode region includes ruthenium; iridium; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 100 includes the subject matter of any of Examples 98-99, and further specifies that the second top electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 101 includes the subject matter of any of Examples 96-100, and further specifies that the bottom electrode region includes ruthenium; iridium; strontium, ruthenium, and oxygen; iridium and oxygen; ruthenium and oxygen; tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 102 includes the subject matter of any of Examples 96-101, and further specifies that the bottom electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 103 includes the subject matter of any of Examples 96-102, and further specifies that the bottom electrode region is a first bottom electrode region, and the capacitor further includes a second bottom electrode region, wherein the first bottom electrode region is between the dielectric region and the second bottom electrode region, and the second bottom electrode region has a different material composition than the first bottom electrode region.

Example 104 includes the subject matter of Example 103, and further specifies that the second bottom electrode region includes ruthenium; iridium; strontium, ruthenium, and oxygen; iridium and oxygen; ruthenium and oxygen; tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 105 includes the subject matter of any of Examples 103-104, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 106 includes the subject matter of any of Examples 103-105, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being in contact with the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region has a different material composition than the second bottom electrode region.

Example 107 includes the subject matter of Example 106, and further specifies that the third bottom electrode region includes tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; ruthenium; iridium; or tungsten.

Example 108 includes the subject matter of any of Examples 106-107, and further specifies that the third bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.

Example 109 includes the subject matter of Example 94, and further specifies that the bottom electrode region includes germanium, lanthanum, hafnium, zirconium, yttrium, barium, lead, calcium, magnesium, beryllium, or lithium.

Example 110 includes the subject matter of Example 109, and further specifies that the bottom electrode region has a thickness between 0.1 nanometers and 5 nanometers.

Example 111 includes the subject matter of any of Examples 109-110, and further specifies that the bottom electrode region is a first bottom electrode region, the capacitor further comprising a second bottom electrode region, the first bottom electrode region being between the second bottom electrode region and the dielectric region, and the second bottom electrode region has a different material composition than the first bottom electrode region.

Example 112 includes the subject matter of Example 111, and further specifies that the second bottom electrode region includes ruthenium; iridium; strontium, ruthenium, and oxygen; iridium and oxygen; ruthenium and oxygen; tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 113 includes the subject matter of any of Examples 111-112, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 114 includes the subject matter of any of Examples 111-113, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being in contact with the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region has a different material composition than the second bottom electrode region.

Example 115 includes the subject matter of Example 114, and further specifies that the third bottom electrode region includes ruthenium; iridium; strontium, ruthenium, and oxygen; iridium and oxygen; ruthenium and oxygen; tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 116 includes the subject matter of any of Examples 114-115, and further specifies that the third bottom electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 117 includes the subject matter of any of Examples 114-116, and further specifies that the capacitor further includes a fourth bottom electrode region, the third bottom electrode region being between and in contact with the second bottom electrode region and the fourth bottom electrode region, and the fourth bottom electrode region has a different material composition than the third bottom electrode region.

Example 118 includes the subject matter of Example 117, and further specifies that the fourth bottom electrode region includes tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; ruthenium; iridium; or tungsten.

Example 119 includes the subject matter of any of Examples 117-118, and further specifies that the fourth bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.

Example 120 includes the subject matter of Example 94, and further specifies that the top electrode region has the same material composition as the bottom electrode region.

Example 121 includes the subject matter of Example 120, and further specifies that the top electrode region has a different crystal phase than the bottom electrode region.

Example 122 includes the subject matter of any of Examples 120-121, and further specifies that the top electrode region has a crystal phase that is one of face-centered cubic and hexagonal close packing, and the bottom electrode region has a crystal phase that is the other of face-centered cubic and hexagonal close packing.

Example 123 includes the subject matter of any of Examples 120-122, and further specifies that the top electrode region has a different defect density than the bottom electrode region.

Example 124 includes the subject matter of Example 123, and further specifies that the difference in defect density between the top electrode region and the bottom electrode region is between 1e16 defects per cubic centimeter and 1e20 defects per cubic centimeter.

Example 125 includes the subject matter of any of
Examples 123-124, and further specifies that the top electrode region comprises ruthenium; iridium; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 126 includes the subject matter of any of Examples 123-125, and further specifies that the top electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 127 includes the subject matter of any of Examples 123-126, and further specifies that the bottom electrode region includes ruthenium; iridium; strontium, ruthenium, and oxygen; iridium and oxygen; ruthenium and oxygen; tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 128 includes the subject matter of any of Examples 123-127, and further specifies that the bottom electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 129 includes the subject matter of any of Examples 123-128, and further specifies that the bottom electrode region is a first bottom electrode region, and the capacitor further includes a second bottom electrode region, wherein the first bottom electrode region is between the dielectric region and the second bottom electrode region, and the second bottom electrode region has a different material composition than the first bottom electrode region.

Example 130 includes the subject matter of Example 129, and further specifies that the second bottom electrode region includes ruthenium; iridium; strontium, ruthenium, and oxygen; iridium and oxygen; ruthenium and oxygen; tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; or tungsten.

Example 131 includes the subject matter of any of Examples 129-130, and further specifies that the second bottom electrode region has a thickness between 5 nanometers and 50 nanometers.

Example 132 includes the subject matter of any of Examples 129-131, and further specifies that the capacitor further includes a third bottom electrode region, the second bottom electrode region being in contact with the first bottom electrode region and the third bottom electrode region, and the third bottom electrode region has a different material composition than the second bottom electrode region.

Example 133 includes the subject matter of Example 132, and further specifies that the third bottom electrode region includes tantalum; copper; titanium and nitrogen; titanium; gold; platinum; silver; cobalt; molybdenum; ruthenium; iridium; or tungsten.

Example 134 includes the subject matter of any of Examples 132-133, and further specifies that the third bottom electrode region has a thickness between 0.5 nanometers and 10 nanometers.

Example 135 includes the subject matter of any of Examples 94-134, and further specifies that the capacitor is in a metallization stack of an IC die.

Example 136 includes the subject matter of any of Examples 94-135, and further specifies that the capacitor is between a topmost metal layer and a second topmost metal layer of the metallization stack.

Example 137 includes the subject matter of any of Examples 94-136, and further specifies that the capacitor is a decoupling capacitor.

Example 138 includes the subject matter of any of Examples 94-137, and further specifies that the dielectric region includes a perovskite material.

Example 139 includes the subject matter of any of Examples 94-138, and further specifies that the dielectric region includes strontium, titanium, and oxygen; barium, titanium, and oxygen; strontium, barium, titanium, and oxygen; bismuth, iron, and oxygen; lanthanum, bismuth, and oxygen; lead, titanium, and oxygen; or strontium, lead, titanium, and oxygen.

Example 140 includes the subject matter of any of Examples 94-139, and further specifies that the dielectric region has a thickness between 4 nanometers and 20 nanometers.

Example 141 is an integrated circuit (IC) assembly comprising: an IC die, wherein the IC die is the IC die of any of Examples 1-140; and a support coupled to the IC die.

Example 142 includes the subject matter of Example 141, and further specifies that the support includes a package substrate.

Example 143 includes the subject matter of any of Examples 141-142, and further specifies that the support includes a circuit board.

Example 144 includes the subject matter of Example 143, and further specifies that the circuit board is a motherboard.

Example 145 includes the subject matter of any of Examples 141-144, and further specifies that the support includes a housing.

Example 146 includes the subject matter of any of Examples 141-145, and further specifies that the IC assembly is a handheld computing device.

Example 147 includes the subject matter of any of Examples 141-145, and further specifies that the IC assembly is a server computing device.

Example 148 includes the subject matter of any of Examples 141-145, and further specifies that the IC assembly is a laptop computing device.
A memory cell having the structure of a modified flash memory cell, but configured to operate in a low voltage domain (e.g., using voltages of <6V amplitude for program and/or erase operations), is provided. The disclosed memory cells may be formed with dielectric layers having reduced thickness(es) as compared with conventional flash memory cells, which allows for such low voltage operation. The disclosed memory cells may be compatible with advanced, high density, low energy data computational applications. The disclosed memory cells may replace or reduce the need for RAM (e.g., SRAM or DRAM) in a conventional device, e.g., a microcontroller or computer, and are thus referred to as "RAM Flash" memory cells. Data retention of RAM Flash memory cells may be increased (e.g., to days, months, or years) by (a) applying a static holding voltage at selected nodes of the cell, and/or (b) periodically refreshing data stored in RAM Flash.
CLAIMS

1. A system, comprising:
a memory cell configured for low-voltage operation, the memory cell comprising:
a floating gate formed over a channel region;
a first dielectric layer formed between the floating gate and the channel region;
a control gate formed over or adjacent the floating gate; and
a second dielectric layer formed between the floating gate and the control gate;
wherein a thickness of at least one of the first dielectric layer and the second dielectric layer is selected to allow low voltage program operations and erase operations on the memory cell.

2. The system of Claim 1, wherein the memory cell comprises a single transistor (1T) flash memory cell structure.

3. The system of Claim 1, wherein the memory cell comprises a split-gate flash memory cell structure.

4. The system of Claim 1, wherein the memory cell comprises a modified version of a SUPERFLASH memory cell.

5. The system of any of Claims 1-4, wherein the thickness of the first dielectric layer formed between the floating gate and the channel region is less than 80 Å.

6. The system of any of Claims 1-4, wherein the thickness of the second dielectric layer formed between the floating gate and the control gate is less than 100 Å.

7. The system of any of Claims 1-6, further comprising control electronics configured to perform program operations and erase operations on the memory cell by applying a voltage (<6V) to respective nodes of the memory cell.

8. The system of any of Claims 1-7, further comprising control electronics configured to apply a holding voltage to at least one node of the memory cell to increase data retention of the memory cell.

9. The system of Claim 8, further comprising control electronics configured to select and/or dynamically adjust the holding voltage as a function of at least one measured performance characteristic of the memory cell.

10.
The system of any of Claims 1-9, further comprising control electronics configured to periodically refresh a storage state of the memory cell.

11. The system of Claim 10, wherein the control electronics are configured to periodically refresh the storage state of the memory cell at an interval of at least one day.

12. The system of any of Claims 1-11, wherein the memory cell is configured to replace SRAM or DRAM in the system.

13. The system of any of Claims 1-12, wherein the thickness of the at least one of the first dielectric layer and the second dielectric layer is selected to allow program operations and erase operations using voltages having an amplitude of <6V.

14. The system of any of Claims 1-13, wherein:
the memory cell further includes a word line and a source line; and
the system further comprises control electronics coupled to the memory cell and configured to perform at least one of program operations or erase operations on the memory cell by applying word line voltages at the word line, source line voltages at the source line, and control gate voltages at the control gate, wherein the applied word line voltages, source line voltages, and control gate voltages have amplitudes of <6V.

15. The system of Claim 14, wherein the control electronics are configured to perform at least one of program operations or erase operations on the memory cell by applying word line voltages, source line voltages, and control gate voltages with amplitudes in the range of 1.5V-6V.

16. The system of Claim 14, wherein the control electronics are configured to perform at least one of program operations or erase operations on the memory cell by applying word line voltages, source line voltages, and control gate voltages with amplitudes in the range of 2V-5V.

17.
The system of any of Claims 1-16, wherein:
the memory cell further includes a source region and a drain region; and
the system further comprises control electronics coupled to the memory cell and configured to perform at least one of program operations or erase operations on the memory cell by applying source voltages at the source region, drain voltages at the drain region, and control gate voltages at the control gate, wherein the applied source voltages, drain voltages, and control gate voltages have amplitudes of <6V.

18. The system of Claim 17, wherein the control electronics are configured to perform at least one of program operations or erase operations on the memory cell by applying source voltages, drain voltages, and control gate voltages with amplitudes in the range of 3V-6V.

19. A method of operating a memory cell configured for low-voltage operation and including a floating gate formed over a channel region, a first dielectric layer formed between the floating gate and the channel region and having a thickness of less than 80 Å, a control gate formed over or adjacent the floating gate, and a second dielectric layer formed between the floating gate and the control gate and having a thickness of less than 100 Å, the method comprising:
performing a program operation or an erase operation on the memory cell by applying a set of voltages to the memory cell, including:
a source line voltage having an amplitude of <6V; and
a control gate voltage having an amplitude of <6V.

20. The method of Claim 19, comprising performing an erase operation on the memory cell by applying a set of voltages to the memory cell, including:
a source line voltage in the range of 3-6V; and
a control gate voltage in the range of -3V to -6V;
wherein the drain is allowed to float.

21.
The method of Claim 20, comprising performing a program operation on the memory cell by applying a set of voltages to the memory cell, including:
a control gate voltage in the range of 3-6V; and
a drain voltage in the range of 3-6V.

22. The method of Claim 20, comprising performing an erase operation on the memory cell by applying a set of voltages to the memory cell, including:
a word line voltage in the range of 1.5-6.0V; and
a control gate voltage in the range of -1.5V to -6.0V.

23. The method of Claim 20, comprising performing a program operation on the memory cell by applying a set of voltages to the memory cell, including:
a source line voltage in the range of 1.5-6.0V; and
a control gate voltage in the range of 1.5-6.0V.
FLASH MEMORY CELL ADAPTED FOR LOW VOLTAGE AND/OR NON-VOLATILE PERFORMANCE

RELATED APPLICATION

This application claims priority to commonly owned United States Provisional Patent Application No. 62/858,088, filed June 6, 2019, the entire contents of which are hereby incorporated by reference for all purposes.

TECHNICAL FIELD

The present disclosure relates to integrated circuit memory devices, and more particularly to modified flash memory adapted for low-voltage and/or non-volatile performance.

BACKGROUND

Microcontroller and other computer systems may include any one or more types of memory to meet various objectives, such as a target data processing speed, data retention, or cost, depending on the intended user or application of the respective computer system. Different types of memory vary in a number of different characteristics, such as speed (e.g., as measured by the time required for a central processing unit (CPU) to access stored data), data storage size, data retention (e.g., volatile or non-volatile), endurance (e.g., a number of program/erase cycles after which the memory may become degraded or unreliable), power consumption, physical size, and cost, for example.

Volatile memory refers to memory that stores data only as long as it remains connected to a power supply. Examples of volatile memory include various types of random access memory (RAM), such as static RAM (SRAM) and dynamic RAM (DRAM). Non-volatile memory refers to memory that can store data even when disconnected from a power supply. Example types of non-volatile memory include hard drives, flash memory, and EEPROM. Figure 1 shows an example taxonomy of various types of conventional volatile and non-volatile memory.

Certain advanced process technology systems have strict design and performance requirements regarding the memory included in such systems. For example, advanced process technology systems typically require a very tight metal pitch and can only tolerate low voltages at minimum design rules.
As used herein, "metal pitch" refers to the distance between two adjacent metal lines, including the width of the two metal lines. In addition, some systems use aluminum metallization, typically requiring very low energy, highly parallel design and fast data transfers.

Modern advanced systems often incorporate flash, SRAM, and/or DRAM memory devices. Conventional flash memory cells are typically small (e.g., 1 transistor (1T) or 1.5 transistors (1.5T)) and provide high data retention (e.g., >10 years), but require high voltages (typically >10V) and have slow data access time. For example, NOR flash memory (e.g., Microchip's SuperFlash™ ESF1, ESF3, and ESF4 cells) that is often used for high retention applications is small (e.g., 1.5T) but typically requires >7V for program/erase operations and has slow access time as compared to DRAM or SRAM memory. In advanced systems, high voltage memory devices typically increase the required die size and may be incompatible with logic rules for minimum metal pitch.

In contrast, volatile memory cells such as DRAM and SRAM, for example, are typically large and power hungry. For example, SRAM is typically fast and designed to operate at low voltage, but is physically large (typically 6 transistors (6T)) and has high power consumption.

Table 1 shows various performance and physical size characteristics for example types of memory devices, including SRAM, DRAM, and flash. As shown, volatile memory such as SRAM and DRAM are much faster, consume less energy per bit, and have a longer endurance than typical non-volatile flash memory.

Table 1.
Comparison of selected memory types

For some systems, there is a need for memory devices (memory cells) that operate in a low voltage domain and are compatible with advanced, high density, low energy data computational applications.

SUMMARY

Embodiments of the present invention provide memory cells having a structure generally based on flash memory cell design, but (unlike conventional flash memory) configured to operate in a low voltage domain compatible with advanced, high density, low energy data computational applications. In some embodiments or applications, these inventive memory cells may replace at least a portion of the RAM (e.g., SRAM or DRAM) included in a conventional device (e.g., a microcontroller or other computer device). Thus, the memory cells according to embodiments of the present invention are referred to herein as "RAM Flash memory cells" or "RAM Flash cells."

As used herein, "low voltage" operation refers to memory cell operations (e.g., program or erase operations) in which the voltages applied to the cell have an absolute value, or amplitude, of <6V. For example, in some embodiments RAM Flash cells are configured for program and erase operations using a source line voltage (Vsl), a word line voltage (Vwl), and a control gate voltage (Vcg) each having a voltage amplitude of <6V. As discussed below, in some embodiments RAM Flash cells are configured for program and erase operations using a source line voltage (Vsl), a word line voltage (Vwl), and a control gate voltage (Vcg) each having a voltage amplitude in the range of 3-6V, or each having a voltage amplitude in the range of 1.5-6V, or each having a voltage amplitude in the range of 2-5V, or each having a voltage amplitude in the range of 2-4V, or each at an amplitude of 3V or about 3V.
All voltages listed here are in relation to ground, or other common reference potential.

As noted in the Background section above, SRAM cells are typically fast, but physically large (typically 6 transistors (6T)) with high power consumption. DRAM cells on the other hand typically use a 1T-1C (one transistor, one capacitor) architecture and are typically not compatible with a standard logic/microcontroller process flow. In addition, with DRAM cells, a continuous data refresh cycle must be performed, e.g., every 64 ms, to maintain data stored in the cells. In contrast, as discussed above, flash memory cells are typically small, e.g., 1T or 1.5T.

Some embodiments of the present invention provide RAM Flash cells formed as modified flash memory cells configured to operate in a space typically assigned to SRAM or DRAM. For example, RAM Flash cells may be configured for low voltage (<6V) program/erase operations to operate effectively with high density advanced logic flows at minimum or small metal pitch rules.

In some embodiments, RAM Flash cells may provide a flexible, controllable retention-hold approach to prolong data retention (as compared with traditional flash memory cells), for example to a time frame of week(s) or month(s).
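The low-voltage operating window described above can be stated compactly in code. The following sketch is illustrative only and not part of the patent disclosure: the node names (Vsl, Vcg, Vd) follow the description's notation, but the specific bias values are hypothetical examples chosen from within the recited ranges.

```python
# Illustrative sketch (not from the patent): model low-voltage bias sets
# and check that every applied node voltage stays within the <6V amplitude
# that defines "low voltage" operation in this disclosure.

LOW_VOLTAGE_LIMIT = 6.0  # volts, amplitude (absolute value)

# Hypothetical example bias sets, drawn from the recited 1.5-6V ranges.
erase_bias = {"Vsl": 4.5, "Vcg": -4.5, "Vd": None}  # None = node floats
program_bias = {"Vcg": 4.0, "Vd": 4.0}

def is_low_voltage(bias: dict) -> bool:
    """True if every applied (non-floating) node voltage has amplitude < 6V."""
    return all(abs(v) < LOW_VOLTAGE_LIMIT for v in bias.values() if v is not None)

print(is_low_voltage(erase_bias))    # True
print(is_low_voltage(program_bias))  # True
```

Floating nodes (such as the drain during the claimed erase operation) are represented as `None` and excluded from the amplitude check.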
In some embodiments, data stored in RAM Flash cells may be restored on a periodic basis (e.g., every one or more days) from external NAND flash, HDD, or other data source.

In some embodiments, RAM Flash cells may be integrated into the same die as a microcontroller and/or CPU, which may provide (a) an advantage in bus latency (data transmission delay) over an external DRAM, and/or (b) an advantage of a much lower data refresh frequency (e.g., a refresh rate of every N days or months) as compared with conventional DRAM cells (e.g., a refresh rate of every 64 ms).

In some embodiments, the structure of a RAM Flash cell is a modified version of a conventional flash memory cell, in which at least one dielectric layer (e.g., oxide layer) has a reduced thickness as compared with the conventional flash memory cell. The reduced dielectric layer thickness may allow for low voltage program and/or erase operations, which may allow the use of an advanced low-k dielectric metal pitch with minimized or reduced leakage and reliability concerns.

In some embodiments, RAM Flash cells may have the basic structure of any known flash memory cell, e.g., a 1T flash memory cell or a split gate flash memory cell (e.g., a 1.5T SuperFlash™ memory cell by Microchip Inc.), but with a modified structure having at least one dielectric layer with a reduced thickness. For example, in a 1T RAM Flash cell according to some embodiments, one or both of (a) the dielectric layer separating the floating gate from the underlying channel and (b) the dielectric layer between the floating gate and overlying control gate may have a reduced thickness as compared with conventional 1T flash memory cells.
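The refresh-rate advantage mentioned above is easy to quantify. A back-of-the-envelope sketch follows; the 64 ms figure is the typical DRAM refresh interval cited in the description, while the once-per-day RAM Flash refresh is an assumed example interval, not a value the patent mandates.

```python
# Compare refresh counts per day for conventional DRAM (~64 ms refresh
# interval, per the description) versus a RAM Flash cell refreshed once
# per day (an assumed example interval).

MS_PER_DAY = 24 * 3600 * 1000  # milliseconds in one day

dram_refresh_interval_ms = 64      # typical DRAM refresh interval
ram_flash_refreshes_per_day = 1    # assumed daily refresh/restore

dram_refreshes_per_day = MS_PER_DAY // dram_refresh_interval_ms
reduction_factor = dram_refreshes_per_day // ram_flash_refreshes_per_day

print(dram_refreshes_per_day)  # 1350000
print(reduction_factor)        # 1350000
```

Even under this conservative assumption, the refresh burden drops by roughly six orders of magnitude, which is the basis for the energy advantage asserted above.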
As another example, in a 1.5T split-gate RAM Flash cell according to some embodiments (e.g., a 1.5T RAM Flash cell formed as a modified SuperFlash™ memory cell), one or more of (a) the floating gate dielectric (e.g., oxide) separating the floating gate from the underlying channel, (b) the FG-control gate inter-poly dielectric (e.g., inter-poly oxide) layer between the floating gate (sidewall) and adjacent control gate, and (c) the FG-wordline inter-poly dielectric (e.g., inter-poly oxide) layer between the floating gate (sidewall) and adjacent wordline may have a reduced thickness as compared with corresponding layers of conventional split-gate flash memory cells.

BRIEF DESCRIPTION OF THE DRAWINGS

Example aspects of the present disclosure are described below in conjunction with the figures, in which:

Figure 1 shows an example taxonomy of various types of conventional volatile and non-volatile memory;

Figure 2 illustrates an example electronic device including RAM Flash cells according to certain embodiments of the present invention;

Figure 3 illustrates an example 1T RAM Flash cell according to certain example embodiments;

Figure 4 illustrates an example known split-gate flash memory cell, namely a SuperFlash™ (ESF1+) memory cell;

Figure 5 illustrates an example split-gate flash memory cell configured as a RAM Flash cell, according to one example embodiment of the invention;

Figure 6 is a graph illustrating an example technique for determining a holding voltage to apply to the example split-gate RAM Flash cell shown in Figure 5, according to one example embodiment of the invention;

Figure 7 illustrates an example of a conventional controller (e.g., microcontroller); and

Figure 8 illustrates an example controller including RAM Flash cells, according to one example embodiment of the invention.

It should be understood that the reference number for any illustrated element that appears in multiple different figures has the same meaning across the multiple figures, and the
mention or discussion herein of any illustrated element in the context of any particular figure also applies to each other figure, if any, in which that same illustrated element is shown.

DETAILED DESCRIPTION

Embodiments of the present invention provide RAM Flash cells having a structure based on a modified version of a conventional flash memory cell, but (unlike conventional flash memory) configured to operate in a low voltage (<6V) domain. In some embodiments, RAM Flash cells are formed with thinner dielectric regions (e.g., oxide layers) as compared with conventional flash cells. For example, RAM Flash cells may be formed with thinner floating gate dielectric regions (e.g., floating gate oxide layers) and/or thinner inter-poly dielectric regions (e.g., inter-poly oxide layers) as compared with conventional flash memory cells, which may reduce the required voltages for program and erase operations as compared with conventional flash cells. As a result of being configured for low voltage operation, RAM Flash cells according to the present invention may be compatible with advanced, high density, low energy data computational applications, and compatible with advanced logic flows at minimum or small metal pitch rules. In some embodiments, RAM Flash cells may be configured for improved data retention characteristics. For example, in some embodiments, data retention of RAM Flash cells can be increased (e.g., to a time frame of days, months, or years) by (a) applying a static holding voltage at selected nodes of the cell, and/or (b) refreshing/restoring data stored in RAM Flash cells on a periodic basis, e.g., from external memory (e.g., external flash memory or external DRAM). Some embodiments provide an electronic device (e.g., computer or microcontroller) including RAM Flash cells.
RAM Flash cells may replace or reduce at least a portion of the memory typically included in a conventional electronic device (e.g., conventional flash memory, SRAM, and/or DRAM), to thereby reduce the size and/or cost of the electronic device, and/or increase the performance (e.g., increased operational speed and/or battery life) of the electronic device. Figure 2 illustrates an example electronic device 10 including RAM Flash cells according to certain embodiments of the present invention. Electronic device 10 may include a processor (e.g., a microprocessor) 12, at least one RAM Flash array 14, a power supply 16, RAM Flash control electronics 20, and/or any other hardware, software, firmware, or other circuitry for providing any functionality of electronic device 10. Electronic device 10 may be a computer system (e.g., a server, desktop computer, laptop, tablet, smartphone, or any other type of computer system), a microcontroller, or any other type of electronic device that utilizes data storage. Power supply 16 may comprise at least one battery, mains power, or any other power source provided in, or external to, electronic device 10. Each RAM Flash array 14 may include any number and type(s) of RAM Flash cells disclosed herein or otherwise consistent with the disclosed principles, including but not limited to the example 1T RAM Flash cells 50 discussed below with reference to Figure 3 and/or the example 1.5T split-gate RAM Flash cells 200 discussed below with reference to Figure 5. RAM Flash control electronics 20 may include any hardware, software, firmware, or other circuitry for controlling the operation of RAM Flash array(s) 14, including controlling voltages applied to the relevant contacts of RAM Flash cells within RAM Flash array(s) 14 to perform program, erase, and read operations on such RAM Flash cells.
In some embodiments, RAM Flash control electronics 20 may include RAM Flash control logic 22, e.g., embodied as software or firmware, programmed to perform any of the functionality disclosed herein, including, for example: (a) controlling program, erase, and read operations, (b) determining and/or dynamically adjusting a holding voltage (Vh) to apply to RAM Flash cells to increase data retention (e.g., as discussed below), (c) performing and controlling data restoration or refresh operations, to further increase data retention of RAM Flash cells, as discussed below, and/or (d) any other functions of or related to RAM Flash array(s) 14. RAM Flash control electronics 20 may cooperate with processor 12, or in some embodiments, may include processor 12. Figure 3 illustrates an example 1T RAM Flash cell 50 according to certain example embodiments. 1T RAM Flash cell 50 may include a floating gate 52 and a control gate 54 formed over a substrate 60, which may include a source region 62 and a drain region 64 separated by a channel region 66. 1T RAM Flash cell 50 may also include a source contact 70 in contact with the source region 62 and a drain contact 72 in contact with the drain region 64. The floating gate 52 may be separated from the substrate 60, in particular the channel region 66, by a floating gate dielectric region 80, sometimes referred to as a tunneling layer or region. Further, the control gate 54 may be separated from the floating gate 52 by an inter-poly dielectric region 82, sometimes referred to as an inter-poly dielectric (IPD) layer or region. Each of the floating gate dielectric region 80 and the inter-poly dielectric region 82 may consist of a single layer or a multi-layer region (e.g., in a stacked layer arrangement).
Each of the floating gate dielectric region 80 and the inter-poly dielectric region 82, or each layer within a multi-layer floating gate dielectric region 80 or a multi-layer inter-poly dielectric region 82, may comprise any suitable material(s), for example, one or more oxides, e.g., thermally grown or deposited silicon dioxide, and/or one or more nitrides, e.g., silicon oxy-nitride or silicon nitride. In some embodiments, one or both of the floating gate dielectric region 80 and the inter-poly dielectric region 82 may have a reduced thickness as compared with corresponding layers of conventional 1T flash memory cells. For example, the floating gate dielectric region 80 may have a vertical thickness TFGD, defined with substrate layer 60 as a horizontal base, of less than 60Å, e.g., in the range of 25-50Å. As another example, the inter-poly dielectric region 82 may have a vertical thickness TIGD, defined with substrate layer 60 as a horizontal base, of less than 60Å, e.g., in the range of 25-50Å. In some embodiments, the floating gate dielectric region 80 may have a vertical thickness TFGD of less than 60Å, e.g., in the range of 25-50Å, and the inter-poly dielectric region 82 may have a vertical thickness TIGD of less than 60Å, e.g., in the range of 25-50Å. As noted above, each dielectric region 80 and 82 (e.g., oxide layers) may be thermally grown or deposited on the structure, depending on the particular embodiment.
In some embodiments, the thickness of each dielectric region 80 and 82 may be controlled by selecting or adjusting parameters related to the growth or deposition of the respective region 80, 82, for example, time, temperature, and/or gas flow parameters for each respective dielectric (e.g., oxide) growth or deposition process. In example embodiments, 1T RAM Flash cell 50 may be programmed and erased by applying defined voltages as follows: for a program operation (by hot electron injection), RAM Flash control electronics 20 may apply, e.g., 3-6V to the control gate 54, with the drain contact 72 at 3-6V and the source contact 70 at 0V for a defined time, to thereby create a cell current Ir0 that corresponds with a programmed state ("off" state) of the cell 50. For an erase operation (by Fowler-Nordheim tunneling), RAM Flash control electronics 20 may apply a negative voltage, e.g., -3 to -6V, to the control gate 54, with the source contact 70 at 3-6V and the drain contact 72 allowed to float, to thereby create a cell current Ir1 that corresponds with an erased state ("on" state) of the RAM Flash cell 50. In addition, RAM Flash control electronics 20 may read the programmed/erased status of the RAM Flash cell 50 by applying a defined read voltage, e.g., 1.8V, to the control gate 54, a defined bitline voltage, e.g., 1.8V, to the drain contact 72, and holding the source contact 70 at 0V. Accordingly, Table 2 shows example voltages that may be applied to the various contacts of 1T RAM Flash cell 50 shown in Figure 3, e.g., by RAM Flash control electronics 20 shown in Figure 2, to perform program, erase, and read functions, according to example embodiments of the present invention.

Table 2. Example bias conditions for operation of 1T RAM Flash cell 50 (Fig. 3).

The example voltages of 3-6V for program and erase operations of the 1T RAM Flash cell 50 compare favorably with required voltages in the 10-15V range for a conventional 1T flash memory cell.
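The program, erase, and read bias conditions described above can be captured in a small lookup structure, as in the following minimal Python sketch. The voltage values are example figures from the text (midpoints of the stated 3-6V ranges); the function name and the `None`-for-floating convention are illustrative assumptions, not part of the source.

```python
# Illustrative bias table for the 1T RAM Flash cell 50 (values in volts).
# A terminal value of None indicates the terminal is left floating.
BIAS_1T = {
    "program": {"cg": 4.5,  "drain": 4.5,  "source": 0.0},   # hot electron injection (3-6V range)
    "erase":   {"cg": -4.5, "drain": None, "source": 4.5},   # Fowler-Nordheim; drain floats
    "read":    {"cg": 1.8,  "drain": 1.8,  "source": 0.0},   # read at 1.8V, source grounded
}

def bias_for(operation):
    """Return the (control gate, drain, source) bias tuple for an operation."""
    b = BIAS_1T[operation]
    return b["cg"], b["drain"], b["source"]

print(bias_for("erase"))  # (-4.5, None, 4.5)
```

In a real controller these entries would correspond to voltages driven by RAM Flash control electronics 20 onto the cell contacts; the dictionary here only illustrates the shape of Table 2.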
Thus, 1T RAM Flash cell 50 according to the present invention may substantially reduce the required operational voltages, as compared with conventional flash memory cells. Figure 4 illustrates a side cross-sectional view of a known 1.5T split-gate flash memory cell 100. The example split-gate flash memory cell 100 may be a SuperFlash™ memory cell (e.g., a SuperFlash™ ESF1+ cell) available from Microchip Technology Inc., Chandler, Arizona. Flash memory cell 100 includes a pair of floating gates 102A and 102B formed over a substrate 104, word line terminals 106A and 106B extending over floating gates 102A and 102B, respectively, and a control gate 110 extending over both floating gates 102A and 102B. An oxide region 108A, 108B is respectively formed over each floating gate 102A, 102B. Word line terminals 106A and 106B may couple, for example, to an odd row word line and an even row word line, respectively. A doped source region or junction 124 may be formed in substrate 104 below the control gate 110 and extending partially below each floating gate 102A and 102B, and a pair of doped bit line regions or junctions 124A and 124B may be formed in substrate 104 respectively adjacent word line terminals 106A and 106B. Split-gate flash memory cell 100 may also include electrically conductive contact regions in contact with word line terminals 106A and 106B, control gate 110, source region 124, and bit line regions 124A and 124B, for applying voltages to the various cell components to provide various memory cell functions, e.g., program, erase, and read functions. As shown, these contacts may include word line contacts 130A and 130B coupled to word line voltages VWL Odd and VWL Even, respectively, a control gate contact 132 coupled to control gate voltage VCG, a source contact 134, and respective bit line contacts 136A and 136B.
The source contact 134 may be located into or out of the page relative to the illustrated cross-section, e.g., at a location of a break in the control gate 110. Each floating gate 102A, 102B is spaced apart from the underlying channel region by a floating gate oxide layer 140. In addition, each floating gate 102A, 102B is spaced apart from the shared control gate 110 by an inter-poly oxide region 142A, and each floating gate 102A, 102B is spaced apart from a respective wordline terminal 106A, 106B by an inter-poly oxide region 142B. Floating gate oxide layer 140 and inter-poly oxide regions 142A and 142B are formed with thicknesses that allow for the conventional operation of the split-gate flash memory cell 100 (e.g., SuperFlash™ memory cell) as a non-volatile, high-voltage cell. For example, each floating gate oxide layer 140 may have a thickness of about 100Å, while each inter-poly oxide region 142A and 142B may have a thickness of about 130Å. Split-gate flash memory cell 100 may be programmed and erased by applying defined voltages to one or more of the following: a selected word line contact 130A or 130B (coupled respectively to word line voltages VWL Odd and VWL Even), the control gate contact 132 (coupled to control gate voltage VCG), the source contact 134 (coupled to source line voltage VSL), and/or a selected bit line contact 136A or 136B (coupled to bit line voltage VBL), for a defined time to provide either (a) a cell current Ir0 that corresponds with a programmed state ("off" state) of the cell or (b) a cell current Ir1 that corresponds with an erased state ("on" state) of the cell.
In addition, the programmed/erased status of the cell may be read by applying defined voltages to a selected word line contact 130A or 130B (VWL) and the adjacent bit line contact 136A or 136B (VBL). Table 3 shows example voltages that may be applied to the various contacts of the split-gate flash memory cell 100 shown in Figure 4 to perform program, erase, and read functions, according to a conventional cell operation. As shown, a read function is performed via the word line 106A or 106B and associated bit line 124A or 124B, by applying a defined VWL and VBL to a selected word line contact 130A or 130B and associated bit line contact 136A or 136B, with no voltage applied to the source contact 134 (VSL=0) or control gate contact 132 (VCG=0).

Table 3. Example bias conditions for operation of conventional split-gate flash memory cell.

Figure 5 illustrates an example 1.5T split-gate RAM Flash cell 200 (e.g., a modified SuperFlash™ memory cell) according to example embodiments of the present invention. Split-gate RAM Flash memory cell 200 includes a pair of floating gates 202A and 202B formed over a substrate 204, word line terminals 206A and 206B extending over floating gates 202A and 202B, respectively, and a control gate 210 extending over both floating gates 202A and 202B. An oxide region 208A, 208B is respectively formed over each floating gate 202A, 202B. Word line terminals 206A and 206B may couple, for example, to an odd row word line and an even row word line.
A doped source region or junction 224 may be formed in substrate 204 below the control gate 210 and extending partially below each floating gate 202A and 202B, and a pair of doped bit line regions or junctions 224A and 224B may be formed in substrate 204 respectively adjacent word line terminals 206A and 206B. Split-gate RAM Flash cell 200 may also include electrically conductive contact regions in contact with word line terminals 206A and 206B, control gate 210, source region 224, and bit line regions 224A and 224B, for applying voltages to the various cell components to provide various memory cell functions, e.g., program, erase, and read functions. As shown, these contacts may include word line contacts 230A and 230B coupled to word line voltages VWL Odd and VWL Even, respectively, a control gate contact 232 coupled to control gate voltage VCG, a source contact 234, and respective bit line contacts 236A and 236B. The source contact 234 may be located into or out of the page relative to the illustrated cross-section, e.g., at a location of a break in the control gate 210. Each floating gate 202A, 202B is spaced apart from the underlying channel region by a floating gate dielectric region (e.g., floating gate oxide layer) 240. In addition, each floating gate 202A, 202B is spaced apart from the shared control gate 210 by an inter-poly dielectric region 242A, and spaced apart from a respective wordline terminal 206A, 206B by an inter-poly dielectric region 242B. In some embodiments, floating gate dielectric region 240 and inter-poly dielectric regions 242A and/or 242B are formed with respective thicknesses that allow split-gate RAM Flash cell 200 to be operated (e.g., including program, erase, and read functions) in a low voltage (<6V) domain compatible with advanced, high density, low energy data computational applications.
In some embodiments, one, some, or all of floating gate dielectric region 240 and inter-poly dielectric regions 242A and 242B have a reduced thickness as compared with respective dielectric layers/regions of conventional split-gate flash memory cells (e.g., oxide layers/regions 140, 142A, and 142B of the conventional split-gate flash memory cell 100 shown in Figure 4). For example, the respective floating gate dielectric region 240 between each floating gate 202A, 202B and the underlying channel region may have a thickness of less than 80Å, or less than 60Å, e.g., in the range of 40-60Å. As another example, the respective inter-poly dielectric region 242A between each floating gate 202A, 202B and the shared control gate 210 may have a thickness of less than 100Å, or less than 50Å, e.g., in the range of 25-50Å. As another example, the respective inter-poly dielectric region 242B between each floating gate 202A, 202B and its adjacent wordline terminal 206A, 206B may have a thickness of less than 100Å, or less than 50Å, e.g., in the range of 25-50Å. In some embodiments, (a) each floating gate dielectric region 240 has a thickness of less than 80Å, or less than 60Å, e.g., in the range of 40-60Å, and (b) each inter-poly dielectric region 242A and 242B has a thickness of less than 100Å, or less than 50Å, e.g., in the range of 25-50Å. Each dielectric region (e.g., oxide layer or region), including floating gate dielectric region 240, inter-poly dielectric region 242A, and/or inter-poly dielectric region 242B, may be thermally grown or deposited on the structure, depending on the particular embodiment.
In some embodiments, the thickness of each dielectric region 240, 242A, and 242B may be controlled by selecting or adjusting parameters related to the growth or deposition of the respective region 240, 242A, 242B, for example, time, temperature, and/or gas flow parameters for each respective dielectric (e.g., oxide) growth or deposition process. Split-gate RAM Flash cell 200 may be programmed and erased, e.g., by RAM Flash control electronics 20 shown in Figure 2, by applying defined voltages for a defined time to one or more of the following: a selected word line contact 230A or 230B (coupled respectively to word line voltages VWL Odd and VWL Even), the control gate contact 232 (coupled to control gate voltage VCG), the source contact 234 (coupled to source line voltage VSL), and/or a selected bit line contact 236A or 236B (coupled to bit line voltage VBL), to thereby provide either (a) a cell current Ir0 corresponding with a programmed state ("off" state) of the cell or (b) a cell current Ir1 corresponding with an erased state ("on" state) of the cell. In addition, the programmed/erased status of the cell may be read by applying defined voltages to a selected word line contact 230A or 230B (VWL) and adjacent bit line contact 236A or 236B (VBL). Due to the reduced thickness of dielectric regions 240, 242A, and/or 242B (as compared with conventional flash memory cells), program and erase functions on the RAM Flash cell 200 may be performed using lower voltages than conventional flash memory cells, such as split-gate flash memory cell 100.
For example, in some implementations of RAM Flash cell 200: (a) programming is performed by source-side hot electron injection; thus, a reduced thickness of the floating gate dielectric region (e.g., floating gate oxide layer) 240 creates a higher field; and (b) erase is performed through Fowler-Nordheim tunneling between an upper tip of floating gate 202A, 202B and the adjacent wordline 206A, 206B; thus, a reduced thickness of the respective inter-poly oxide region 242B creates a higher field. Further, low voltage (<6V) program or erase operations may allow for an advanced low-k dielectric metal pitch to be used with reduced leakage and reliability concerns. For a respective RAM Flash cell, the thickness of dielectric regions 240, 242A, and 242B, the voltages applied during erase and program operations, and the data retention of the cell are all interrelated. For example, decreasing the thickness of dielectric regions 240, 242A, and 242B may allow for lower-voltage program and erase functions, but may reduce the data retention of the cell, and vice versa. As another example, for a cell with particular dielectric region thicknesses, reducing the operational voltages for program and erase functions generally reduces the data retention of the cell, and vice versa. Thus, for any particular RAM Flash cell or array of RAM Flash cells, the various factors discussed above (e.g., dielectric region thicknesses, program and erase voltages, and data retention) may be selected or tuned to provide the desired functionality of the cell(s), e.g., depending on the particular application, device, or product in which the cell(s) are provided. Tables 4A-4D below show example voltages that may be applied to the various contacts of the example RAM Flash cell 200 shown in Figure 5, e.g., by RAM Flash control electronics 20 shown in Figure 2, for performing program, erase, and read functions, according to four example embodiments.
The example embodiments of Tables 4A-4D may correspond with cells having selected thicknesses of dielectric regions 240, 242A, and 242B, for desired data retention characteristics of the respective cells. As shown, a read function is performed via the word line 206A or 206B and associated bit line 224A or 224B, by applying a defined VWL and VBL to a selected word line contact 230A or 230B and associated bit line contact 236A or 236B, with no voltage applied to the source contact 234 (VSL=0) or control gate contact 232 (VCG=0).

Table 4A. Example bias conditions for operation of split-gate RAM Flash cell 200 (Fig. 5), example embodiment A.

Table 4B. Example bias conditions for operation of split-gate RAM Flash cell 200 (Fig. 5), example embodiment B.

Table 4C. Example bias conditions for operation of split-gate RAM Flash cell 200 (Fig. 5), example embodiment C.

Table 4D. Example bias conditions for operation of split-gate RAM Flash cell 200 (Fig. 5), example embodiment D.

In some embodiments, RAM Flash control electronics 20 may select the voltages or range of voltages to apply for effective program and erase operations in the cell 200 based at least on the thickness of floating gate dielectric region 240 and inter-poly dielectric regions 242A and/or 242B, e.g., wherein the applied voltages can be decreased for decreased dielectric thicknesses.
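The relationship just described, in which the control electronics may decrease the applied voltages for decreased dielectric thicknesses, can be illustrated with a toy model. The linear scaling, the reference pair (10V at 100Å, echoing the conventional-cell figures quoted earlier), and the 3V floor are all assumptions for illustration only; a real controller would use characterized bias tables such as Tables 4A-4D.

```python
def scaled_program_voltage(t_ox, v_ref=10.0, t_ref=100.0, v_floor=3.0):
    """Toy model: scale a reference program voltage linearly with dielectric
    thickness (t_ox in angstroms), clamped to a minimum usable voltage.
    The linear relationship is a simplification, not the source's method."""
    return max(v_floor, v_ref * t_ox / t_ref)

print(scaled_program_voltage(50))   # 5.0 -- within the low-voltage (<6V) domain
print(scaled_program_voltage(100))  # 10.0 -- conventional-cell territory
```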
As shown by comparison of Tables 4A-4D (example operation of the split-gate RAM Flash cell 200) with Table 3 (example operation of conventional split-gate flash memory cell 100), the example split-gate RAM Flash cell 200 shown in Figure 5 allows for significantly lower program and erase voltages than the conventional split-gate flash memory cell 100. In addition to the inventive structure of RAM Flash cells that allows for low voltage operation, another aspect of the invention provides methods of operating RAM Flash cells (e.g., example 1T RAM Flash cell 50 or example split-gate RAM Flash cell 200 described above) to improve retention characteristics in a customizable and/or dynamically controllable or tunable manner based on critical paths for retention loss. As discussed below, in some embodiments, data retention of RAM Flash cells can be increased (e.g., to a time frame of days, months, or years) by (a) applying a static holding voltage at selected nodes of the cell, and/or (b) refreshing/restoring data stored in RAM Flash cells on a periodic basis, e.g., from external memory (e.g., external flash memory or external DRAM). Table 5 shows one example method for increasing the storage retention of the example split-gate RAM Flash cell 200 shown in Figure 5, according to one example embodiment. As shown in Table 5, suitable control electronics, e.g., RAM Flash control electronics 20 shown in Figure 2, may apply a non-zero static holding voltage Vh to the word line terminals 206A, 206B and control gate (CG) 210 to lower the field across the dielectric regions surrounding the floating gate (FG) 202A, 202B, and thereby improve retention. In some embodiments, a low static current is drawn under such a hold condition, which may create a low field across the floating gate oxide to the adjacent poly region (e.g., poly2 region). Table 5 shows example bias conditions for applying a static holding voltage Vh to RAM Flash cell 200.

Table 5.
Example bias conditions for improved retention for split-gate RAM Flash cell (Fig. 5).

In some embodiments, RAM Flash control electronics 20 may be configured to select and/or dynamically adjust (or "tune") the holding voltage Vh, for example, based on particular cell operation and/or performance characteristics determined or monitored over time. Figure 6 is a graph illustrating one example method for selecting a holding voltage Vh based on the performance of an example RAM Flash cell, according to example embodiments. In the example shown in Figure 6, the voltage threshold of the cell in the erased state is +x V, while the voltage threshold of the cell in the programmed state is -y V, which voltages may be measured by any suitable electronics (e.g., provided on the same microcontroller or computer as the RAM Flash cell) and averaged over a number of cycles, for example. In some embodiments, a holding voltage Vh to be applied to the RAM Flash cell may be determined as a mathematical function of the erased state voltage threshold (+x V) and programmed state voltage threshold (-y V). For example, the holding voltage Vh may be selected as the midpoint between the erased state voltage threshold (+x V) and programmed state voltage threshold (-y V), which may be expressed as |x+y|/2 as shown in Figure 6, and which may be referred to as the "erased state/programmed state midpoint voltage." In some embodiments, the retention charge loss of a RAM Flash cell or group of RAM Flash cells in a given state (erased state and/or programmed state) can be characterized at the time of manufacture, either during a product testing or product characterization process, and the weaker state (program or erase) can be compensated for by applying an appropriate holding voltage Vh.
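The holding-voltage selection described above can be sketched as follows. The |x+y|/2 expression and the sign convention (positive Vh when erased-state charge loss dominates, negative when programmed-state loss dominates, as discussed in the surrounding text) come from the source; the function name, parameter names, and numeric values are illustrative assumptions.

```python
def holding_voltage(erase_vt, program_vt, dominant_loss="erase"):
    """Compute a holding voltage Vh as the erased-state/programmed-state
    value |x+y|/2, where the erased-state threshold is +x V and the
    programmed-state threshold is -y V, then sign the result according to
    which state's retention charge loss dominates."""
    x = erase_vt            # +x V, erased-state threshold
    y = -program_vt         # y is the magnitude of the -y V programmed threshold
    vh = abs(x + y) / 2.0
    return vh if dominant_loss == "erase" else -vh

print(holding_voltage(1.2, -0.8))                           # 1.0
print(holding_voltage(1.2, -0.8, dominant_loss="program"))  # -1.0
```

Because the thresholds may drift after many program/erase cycles, a controller could re-measure them periodically and recompute Vh with the same function.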
For example, if it is determined for a particular RAM Flash cell that the erase state retention charge loss is dominant (as compared with the program state retention charge loss), RAM Flash control electronics 20 may apply a positive Vh (e.g., using a value determined as described above) on all nodes surrounding the floating gate. Conversely, RAM Flash control electronics 20 may apply a negative Vh to enhance retention in the programmed state, e.g., for a RAM Flash cell in which the programmed state retention charge loss is dominant. In addition, for RAM Flash cells in which the erase state retention charge loss is dominant, RAM Flash control electronics 20 may also apply a positive voltage to the source line (e.g., in addition to the holding voltage Vh applied to the word line (WL) and control gate (CG)) to further improve data retention in the cells. The positive voltage applied to the source line may be the same as, less than, or greater than the holding voltage Vh, depending on the particular implementation. In some embodiments, the erased state voltage and/or programmed state voltage of a RAM Flash cell may change over time (e.g., after N program/erase cycles), and thus RAM Flash control electronics 20 may recalculate and dynamically adjust the holding voltage Vh accordingly, e.g., by recalculating and dynamically adjusting the erased state/programmed state midpoint voltage at a defined recurring frequency. In some embodiments, RAM Flash cells may experience significantly less variability than certain conventional memory cells, e.g., filament-based resistive RAM, due to the larger variability inherent in a filament formation process as compared with the dielectric storage-based RAM Flash cells.
Thus, in some embodiments, tight control of manufacturing and operating variables may enable RAM Flash control electronics 20 to apply a common holding voltage Vh for an array of RAM Flash cells, rather than applying a different holding voltage Vh to different cells within an array. In addition, the holding mode may allow a lower current draw compared with resistive memories, thus permitting low overall power draw in a dense application.In some embodiments, RAM Flash control electronics 20 may be configured to further increase data storage retention in RAM Flash cells by implementing a controlled data restoration/refresh protocol. The expected retention for a RAM Flash cell as disclosed herein may be days to months, depending on the particular RAM Flash cell structure, applied voltages, and the process used for forming the relevant dielectric layers (e.g., the specific processes used to form the floating gate dielectric region(s) and/or inter-poly dielectric region(s)). In some embodiments, RAM Flash control electronics 20 may be programmed to restore data stored in RAM Flash cells on a periodic basis (e.g., after every N hours or days) from external data storage (e.g., from DRAM, NAND, or HDD external to the relevant RAM Flash array) to further increase the data retention of the RAM Flash cells, e.g., to a retention period of years. 
RAM Flash control electronics 20 may be programmed to perform a data refresh on a RAM Flash cell or group of RAM Flash cells, for example, by first reading the contents of the RAM Flash cell(s) to other memory (e.g., SRAM, DRAM, or other flash memory) and then re-erasing or re-programming each RAM Flash cell. In some embodiments, RAM Flash configured for extended data retention (e.g., by applying a holding voltage Vh and/or by implementing a data refresh protocol, e.g., as discussed above) can be used in applications requiring extended data retention and a low data refresh rate, e.g., as compared with conventional DRAM (which typically requires a refresh every 64 ms) or other conventional memory cells. The reduced refresh rate may extend battery life for the respective electronic device 10 (e.g., computer or microcontroller). Further, as discussed above, in some embodiments, RAM Flash data retention can be extended by providing a holding voltage Vh at selected nodes, e.g., at the word line (WL) and control gate (CG) electrodes in a split-gate RAM Flash cell.
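The read-then-reprogram refresh sequence described above (read cell contents to other memory, then re-erase/re-program each cell) might look like the following sketch. The callback-based interface (`read_cell`, `write_cell`) and the in-memory scratch dictionary are hypothetical stand-ins for RAM Flash control electronics 20 and the staging memory, not an API from the source.

```python
def refresh(addresses, read_cell, write_cell):
    """Refresh a group of RAM Flash cells: first read every cell's contents
    into scratch storage, then re-erase/re-program each cell with the same
    data, mirroring the periodic restore protocol described in the text."""
    scratch = {addr: read_cell(addr) for addr in addresses}  # stage to other memory
    for addr, value in scratch.items():
        write_cell(addr, value)                              # re-program the cell
    return scratch

# Toy usage with a dictionary standing in for the RAM Flash array:
array = {0: 1, 1: 0, 2: 1}
copied = refresh(list(array.keys()), array.__getitem__, array.__setitem__)
print(copied == {0: 1, 1: 0, 2: 1})  # True
```

A real implementation would schedule this routine at the low refresh frequency discussed in the text (e.g., every N hours or days) rather than on every access.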
In some embodiments, a value of the holding voltage Vh can be determined and/or dynamically tuned as a function of retention loss characteristics of each respective RAM Flash cell or RAM Flash array, such as whether the RAM Flash cell(s) exhibit greater retention charge loss in the erase state or the programmed state.

The concepts disclosed herein can be applied to any suitable types of flash memory cells, by forming RAM Flash cells having a modified structure of various types of conventional flash memory cells (e.g., by reducing the thickness of floating gate dielectric region(s) and/or inter-poly dielectric region(s) of such flash memory cells), including single-transistor (1T) flash memory cells and multiple-transistor flash memory cells (e.g., 1.5T split-gate flash cells), including for example a range of NOR flash memory cells such as SuperFlash™ ESF1, ESF2, ESF3, or ESF4 cells covering a wide range of process geometries.

As mentioned above, RAM Flash cells as disclosed herein may be configured for low voltage (<6 V) program/erase operations. Thus, RAM Flash cells may be compatible with high density advanced logic flows at minimum or small metal pitch rules. Figures 7 and 8 illustrate a comparison between an example of a conventional microcontroller (e.g., a controller for an Internet of Things (IoT) application) using conventional memory devices (Figure 7), and a corresponding controller incorporating RAM Flash memory according to the present invention (Figure 8), which may reduce or replace SRAM, DRAM, and/or conventional flash memory needed in the conventional controller.

The example conventional controller 300 shown in Figure 7 includes a chip 302 including a CPU 304, SRAM 306, conventional flash memory 308, logic/analog devices 310, and/or other various electronics. The chip 302 is powered by a battery 312, and interfaces with external SRAM 316, DRAM 318, external sensor(s), and communication protocols (e.g., WiFi, Ethernet, etc.).
The flash memory 308 is a high-retention memory (typically >10 years), requires high voltage (e.g., >10 V) for program/erase operations, and executes such program/erase operations at low speed. As shown in Figure 7, for advanced node applications, the CPU 304, SRAM 306, and logic/analog devices 310 on chip 302 may be low voltage devices and thus compatible with small metal pitch structure, while the flash memory 308 requires a much larger metal pitch due to the high voltage (>10 V) operational requirements.

In contrast, the example controller 400 shown in Figure 8, according to one example embodiment of the present invention, includes a chip 402 including a CPU 404, SRAM 406, RAM Flash memory 420, conventional flash memory (optional) 408, logic/analog devices 410, and/or other various electronics. The chip 402 may be powered by a battery 412, and may interface with external sensor(s) and communication protocols (e.g., WiFi, Ethernet, etc.) as with the conventional chip 302, and may (optionally) interface with external SRAM 416 and/or DRAM 418. As shown, the inclusion of the RAM Flash 420 on chip 402 may reduce or eliminate the need for one or more other types of memory in the controller 400, as compared with a conventional controller, e.g., controller 300 discussed above. For example, the RAM Flash 420 may (a) allow the on-chip SRAM 406 to be reduced, (b) allow the on-chip conventional flash memory 408 to be reduced or eliminated, and/or (c) allow the external SRAM 416 and/or DRAM 418 to be reduced or eliminated. The RAM Flash 420 may utilize chip footprint space typically used by SRAM or DRAM in a conventional controller, may use less power and space (because flash memory is smaller than SRAM, for example), and may operate at increased speed, because the operational speed is defined by the RAM Flash access time rather than by fetching data from external DRAM as in the conventional controller 300.
Also, because the RAM Flash 420 is configured for low-voltage operation, it may be produced with a small metal pitch, e.g., the pitch used by the chip CPU 404, SRAM 406, and logic/analog devices 410, thus reducing the required footprint on the chip 402.
An apparatus includes an array of bit cells (202, 204, 206, 208) that include a first row of bit cells and a second row of bit cells. The apparatus also includes a first global read word line (240) configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus further includes a second global read word line (244) configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus also includes a global write word line (242) configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The first global read word line, the second global read word line, and the global write word line are located in a common metal layer (M4).
WHAT IS CLAIMED IS:

1. An apparatus comprising:
an array of bit cells comprising a first row of bit cells and a second row of bit cells;
a first global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells; and
a second global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells;
wherein the first global read word line and the second global read word line are located in a common metal layer.

2. The apparatus of claim 1, further comprising a global write word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells, wherein the global write word line is located in the common metal layer.

3. The apparatus of claim 2, further comprising row select logic that is configured to:
receive a selection signal;
couple the first global read word line, the second global read word line, and the global write word line to the first row of bit cells if the selection signal has a first logical value; and
couple the first global read word line, the second global read word line, and the global write word line to the second row of bit cells if the selection signal has a second logical value.

4. The apparatus of claim 1, wherein the common metal layer is a fourth metal layer.

5. The apparatus of claim 1, wherein the array of bit cells is manufactured using a semiconductor manufacturing process of less than 14 nanometers (nm).

6. The apparatus of claim 5, wherein the semiconductor manufacturing process is a 10 nm process.

7. The apparatus of claim 6, wherein a pitch of the first global read word line is approximately 80 nm, wherein a pitch of the second global read word line is approximately 80 nm, and wherein a pitch of the global write word line is approximately 80 nm.

8. The apparatus of claim 5, wherein the semiconductor manufacturing process is a 7 nm process.

9.
The apparatus of claim 2, further comprising:
a first local read word line coupled to the first row of bit cells, the first local read word line formed in a second metal layer;
a second local read word line coupled to the first row of bit cells, the second local read word line formed in the second metal layer; and
a first local write word line coupled to the first row of bit cells, the first local write word line formed in a third metal layer.

10. The apparatus of claim 9, further comprising:
a third local read word line coupled to the second row of bit cells, the third local read word line formed in the second metal layer;
a fourth local read word line coupled to the second row of bit cells, the fourth local read word line formed in the second metal layer; and
a second local write word line coupled to the second row of bit cells, the second local write word line formed in the third metal layer.

11. The apparatus of claim 1, wherein the first row of bit cells includes a three-port static random access memory (SRAM) bit cell.

12. A method comprising:
receiving, at row select logic, a selection signal;
coupling a first global read word line and a second global read word line to a first row of bit cells if the selection signal has a first logical value; and
coupling the first global read word line and the second global read word line to a second row of bit cells if the selection signal has a second logical value;
wherein the first global read word line and the second global read word line are located in a common metal layer.

13. The method of claim 12, further comprising:
coupling a global write word line to the first row of bit cells if the selection signal has the first logical value; and
coupling the global write word line to the second row of bit cells if the selection signal has the second logical value;
wherein the global write word line is located in the common metal layer.

14. The method of claim 12, wherein the common metal layer is a fourth metal layer.

15.
The method of claim 12, wherein the first row of bit cells and the second row of bit cells are manufactured using a semiconductor manufacturing process of less than 14 nanometers (nm).

16. The method of claim 14, wherein the semiconductor manufacturing process is a 7 nm process or a 10 nm process.

17. A non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to:
initiate coupling a first global read word line and a second global read word line to a first row of bit cells if a received selection signal has a first logical value; and
initiate coupling the first global read word line and the second global read word line to a second row of bit cells if the received selection signal has a second logical value;
wherein the first global read word line and the second global read word line are located in a common metal layer.

18. The non-transitory computer-readable medium of claim 17, further comprising instructions that, when executed by the processor, cause the processor to:
initiate coupling a global write word line to the first row of bit cells if the received selection signal has the first logical value; and
initiate coupling the global write word line to the second row of bit cells if the received selection signal has the second logical value;
wherein the global write word line is located in the common metal layer.

19. The non-transitory computer-readable medium of claim 17, wherein the common metal layer is a fourth metal layer.

20. The non-transitory computer-readable medium of claim 17, wherein the first row of bit cells and the second row of bit cells are manufactured using a manufacturing process of less than 14 nanometers (nm).

21. The non-transitory computer-readable medium of claim 17, wherein the first row of bit cells includes a three-port static random access memory (SRAM) bit cell.

22.
The non-transitory computer-readable medium of claim 21, wherein the three-port SRAM bit cell includes a first read port, a second read port, and a write port.

23. The non-transitory computer-readable medium of claim 22, wherein a first local read word line couples the first global read word line to the first read port, wherein a second local read word line couples the second global read word line to the second read port, and wherein a local write word line couples the global write word line to the write port.

24. The non-transitory computer-readable medium of claim 23, wherein the first local read word line and the second local read word line are located in a second metal layer, and wherein the local write word line is located in a third metal layer.

25. An apparatus comprising:
first means for performing a read operation configured to be selectively coupled to a first row of bit cells and to a second row of bit cells; and
second means for performing the read operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells;
wherein the first means for performing the read operation and the second means for performing the read operation are located in a common metal layer.

26. The apparatus of claim 25, further comprising means for performing a write operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells, wherein the means for performing the write operation is located in the common metal layer.

27. The apparatus of claim 25, wherein the common metal layer is a fourth metal layer.

28. The apparatus of claim 26, wherein the first row of bit cells includes a three-port static random access memory (SRAM) bit cell.

29. The non-transitory computer-readable medium of claim 21, wherein the three-port SRAM bit cell includes a first read port, a second read port, and a write port.

30.
The non-transitory computer-readable medium of claim 22, wherein a first local read word line couples the first means for performing the read operation to the first read port, wherein a second local read word line couples the second means for performing the read operation to the second read port, and wherein a local write word line couples the means for performing the write operation to the write port.
SHARED GLOBAL READ AND WRITE WORD LINES

I. Claim of Priority

[0001] The present application claims priority from commonly owned U.S. Non-Provisional Patent Application No. 14/546,980, filed November 18, 2014, the contents of which are expressly incorporated by reference in their entirety.

II. Field

[0002] The present disclosure is generally related to read and write word lines for bit cells.

III. Description of Related Art

[0003] Advances in technology have resulted in smaller and more powerful computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless telephones, such as mobile and smart phones, tablets, and laptop computers, which are small, lightweight, and easily carried by users. These devices can communicate voice and data packets over wireless networks. Further, many such devices incorporate additional functionalities such as a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such devices can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet. As such, these devices can include significant computing capabilities.

[0004] Electronic devices, such as wireless telephones, may include memories that include a memory array made of one or more memory cells. One type of memory cell that may be used for the memory (e.g., a memory cache) is a 3-port bit cell. A 3-port bit cell may include two read ports and one write port and may be used in static random access memory (SRAM) devices. In 14 nanometer (nm) complementary metal oxide semiconductor (CMOS) technology, a 3-port SRAM bit cell may be manufactured by a two-mask litho-etch-litho-etch (LELE) process using fin field effect transistors (FinFETs) and overlaying of two metal layers, referred to as M1 and M2 layers. The top metal layer, M2, may be patterned in a non-linear fashion and may include "jogs" (e.g., turns).
For manufacturing processes less than 14 nm (e.g., 10 nm or 7 nm), self-aligned double patterning (SADP) may be preferable to LELE for forming M1 and M2, due to decreased cost and improved process control (e.g., more precise line width and line spacing control) provided by SADP as compared to LELE. However, SADP may not support non-linear patterns that include jogs.

IV. Summary

[0005] The present disclosure provides a design that includes an array of bit cells that share common global word lines in a single metal layer. For example, the array of bit cells may include a first bit cell and a second bit cell. The first bit cell may be in a first row of the array of bit cells, and the second bit cell may be in a second row of the array of bit cells. The first row may include two local read word lines and a local write word line. The second row may also include two local read word lines and a local write word line. The local read word lines may be in a second metal layer (M2), and the local write word lines may be in a third metal layer (M3). In a particular example, each bit cell (e.g., each row) may have a width of approximately 132 nm (e.g., approximately twice the contacted poly pitch (CPP) or twice the distance between contacted poly (gate) lines of the bit cell).

[0006] A first global read word line, a second global read word line, and a global write word line may be in a common metal layer (e.g., a fourth metal layer (M4)). The pitch of each global word line may be approximately 80 nm. The global word lines may be placed in M4 across the width of the first bit cell and the width of the second bit cell (e.g., a combined width of approximately 264 nm). Row select logic may be coupled to the global word lines to control whether the global word lines are coupled to the first bit cell (e.g., the first row) or to the second bit cell (e.g., the second row).
Thus, all of the global word lines may be located in a single metal layer (M4), as opposed to one global word line per metal layer, which may improve routing between different components within the bit cells. For example, a sixth metal layer (M6) and an eighth metal layer (M8) may be relatively open to routing because each global word line is in M4. Additionally, the global word lines may have a relatively large pitch (e.g., 80 nm), which may decrease read/write latency due to decreased word line resistive-capacitive (RC) impedance.

[0007] In a particular aspect, an apparatus includes an array of bit cells that include a first row of bit cells and a second row of bit cells. The apparatus also includes a first global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus further includes a second global read word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus also includes a global write word line configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The first global read word line, the second global read word line, and the global write word line are located in a common metal layer.

[0008] In another particular aspect, a method includes receiving a selection signal at row select logic. The method also includes coupling a first global read word line, a second global read word line, and a global write word line to a first row of bit cells if the selection signal has a first logical value. The method also includes coupling the first global read word line, the second global read word line, and the global write word line to a second row of bit cells if the selection signal has a second logical value.
The first global read word line, the second global read word line, and the global write word line are located in a common metal layer.

[0009] In another particular aspect, a non-transitory computer-readable medium includes instructions that, when executed by a processor, cause the processor to initiate coupling a first global read word line, a second global read word line, and a global write word line to a first row of bit cells if a received selection signal has a first logical value. The instructions are also executable to cause the processor to initiate coupling the first global read word line, the second global read word line, and the global write word line to a second row of bit cells if the received selection signal has a second logical value. The first global read word line, the second global read word line, and the global write word line are located in a common metal layer.

[0010] In another particular aspect, an apparatus includes first means for performing a read operation configured to be selectively coupled to a first row of bit cells and to a second row of bit cells. The apparatus also includes second means for performing the read operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The apparatus further includes means for performing a write operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. The first means for performing the read operation, the second means for performing the read operation, and the means for performing the write operation are located in a common metal layer.

[0011] One particular advantage provided by at least one of the disclosed embodiments is improved routing between different components within bit cells. For example, upper metal layers (M6 and M8) may be relatively open to routing because global word lines (e.g., two read global word lines and one write global word line) are placed in a single metal layer (M4).
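The selection behavior described in these aspects can be illustrated with a small behavioral model. This is a sketch only: the function name and the returned mapping are assumptions introduced for illustration, not part of the disclosed design.

```python
def couple_global_word_lines(selection_signal):
    """Model of the row select logic described above: the shared global
    word lines in the common metal layer (M4) are steered to the first
    row of bit cells for one logical value of the selection signal and
    to the second row for the other value. Key/value names here are
    illustrative assumptions."""
    target = "row1" if selection_signal else "row0"
    # All three global lines couple to the same selected row's local lines.
    return {
        "global_read_wl_1": f"{target}_local_read_wl_1",
        "global_read_wl_2": f"{target}_local_read_wl_2",
        "global_write_wl": f"{target}_local_write_wl",
    }
```

The point the model captures is that a single selection signal steers all three global word lines together, which is what allows them to share one metal layer across two bit cell rows.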
Additionally, because the global word lines are placed across the width of two bit cells (as opposed to one), the global word lines may have a relatively large width, which may decrease read/write latency due to decreased word line RC impedance. Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.

V. Brief Description of the Drawings

[0012] FIG. 1A and FIG. 1B are circuit diagrams of an illustrative embodiment of a 3-port bit cell;

[0013] FIG. 2 is a layout diagram of a 3-port SRAM array having shared global read and write word lines;

[0014] FIG. 3 is an illustrative embodiment of row select logic for a 3-port SRAM array having shared global read and write word lines;

[0015] FIG. 4 is a flowchart of a particular illustrative embodiment of a method of operating a 3-port SRAM array having shared global read and write word lines;

[0016] FIG. 5 is a block diagram of an electronic device including a 3-port SRAM array having shared global read and write word lines; and

[0017] FIG. 6 is a data flow diagram of a particular illustrative embodiment of a manufacturing process to manufacture electronic devices that include a 3-port SRAM array having shared global read and write word lines.

VI. Detailed Description

[0018] Scaling down from 14 nm technology may present challenges. For example, for technology nodes 14 nm and larger, the width of a 3-port bit cell may be restricted to being less than or equal to twice the contacted poly pitch (CPP, the distance between contacted poly (gate) lines). For 14 nm, CPP may be approximately 80-90 nm. As used herein, cell "width" may be perpendicular to a poly direction and along a fin direction. For technology nodes smaller than 14 nm, CPP is reduced, which results in decreased bit cell width (e.g., a bit cell width of approximately 132 nm).
When the bit cell width is reduced (i.e., narrowed), write and read word lines in the bit cell may also be narrowed, resulting in increased read/write latency due to increased word line resistor-capacitor (RC) impedance.

[0019] In conventional bit cells, global word lines may be located in a fourth metal layer (M4), a sixth metal layer (M6), and an eighth metal layer (M8). For example, each global word line may have a width of approximately 80 nm, which may result in a single global word line per metal layer. To illustrate, a first global read word line may be located in M4, a second global read word line may be located in M6, and a global write word line may be located in M8. Placing a global word line in M4, M6, and M8 may reduce routing capabilities within the bit cell. For example, routing between different components and layers within the bit cell using M4, M6, and M8 may be degraded because the layers include relatively large global word lines.

[0020] To circumvent this problem, the present disclosure provides global word lines (e.g., the first global read word line, the second global read word line, and the global write word line) in a common metal layer (e.g., M4). The pitch of each global word line may be approximately 80 nm, and the global word lines may be placed in the common metal layer across the width of two bit cells (e.g., 132 nm × 2 = 264 nm). Row select logic may be coupled to the global word lines to control whether the global word lines are coupled to a first bit cell (e.g., a first row) or to a second bit cell (e.g., a second row).

[0021] Particular embodiments of the present disclosure are described below with reference to the drawings. In the description and the drawings, common features are designated by common reference numbers for clarity of the embodiments as depicted and described.

[0022] Referring to FIG. 1A and 1B, circuit diagrams of a first illustrative embodiment of a bit cell 100 are shown.
The bit cell 100 includes a storage latch 110. The storage latch 110 may include a pair of cross-coupled inverters 112, 114. Each of the inverters 112, 114 may include a p-type metal oxide semiconductor (PMOS) transistor and an n-type metal oxide semiconductor (NMOS) transistor, as shown in FIG. 1B.

[0023] The storage latch 110 may be connected (e.g., coupled) to a first write transistor 121 and to a second write transistor 122. The write transistors 121, 122 may be NMOS transistors, as shown. The first write transistor 121 may be connected to a first write bit line (WBL1) 135 and to a write word line (WWL) 137, and the second write transistor 122 may be connected to a second write bit line (WBL2) 136 and to the write word line (WWL) 137. The first write transistor 121 and the second write transistor 122 may be complementary write transistors of a write port of the bit cell 100. The write port may be used to write a logic zero (e.g., low) value into the storage latch 110 when the write word line 137 and one of the write bit lines 135 or 136 is asserted. The write port may be used to write a logic one (e.g., high) value into the storage latch 110 when the write word line 137 and the other of the write bit lines 135 or 136 is asserted.

[0024] The storage latch 110 may also be connected to a first read drive transistor 123 and to a second read drive transistor 124. The first read drive transistor 123 may be connected to a first read transistor 125 and the second read drive transistor 124 may be connected to a second read transistor 126. The read drive transistors 123, 124 and the read transistors 125, 126 may be NMOS transistors, as shown. The first read transistor 125 may be connected to a first read bit line (RBL1) 131 and to a first read word line (RWL1) 133. The second read transistor 126 may be connected to a second read bit line (RBL2) 132 and to a second read word line (RWL2) 134.
The transistors 123 and 125 may correspond to a first read port of the bit cell 100, and the transistors 124 and 126 may correspond to a second read port of the bit cell 100. The read word lines 133 and/or 134 may be asserted during a read operation, and the read ports may be complementary read ports. For example, when a data value at the first read port is logic zero, a data value at the second read port is logic one, and vice versa. In the example of FIG. 1B, the first read port (on the left) is shown as reading a logic zero value ("0") and the second read port (on the right) is shown as reading a logic one ("1") value.

[0025] The bit cell 100 may thus include two read ports and one write port, and may alternatively be referred to as a "3-port" bit cell. Because the bit cell 100 includes ten transistors, the bit cell 100 may also be referred to as a "10T" bit cell. In a particular embodiment, the bit cell 100 is included in a static random access memory (SRAM) device and provides high-speed parallel memory access. As an illustrative non-limiting example, an SRAM device that includes the bit cell 100 may be used in an L1 and/or L2 cache of a processor. The SRAM device may include one or more arrays of bit cells arranged in a grid-like fashion, including multiple rows of bit cells and multiple columns of bit cells.

[0026] As further described herein, the bit cell 100 has a height (H) and a width (W). In accordance with the described techniques, the width (W) may be approximately twice a contacted poly pitch (CPP) associated with the bit cell 100, where CPP corresponds to a distance between contacted poly (gate) lines. CPP may alternately be referred to as gate pitch. For example, CPP is the distance from an edge of a poly line to a corresponding edge of an adjacent poly line (e.g., top-edge to top-edge or bottom-edge to bottom-edge). CPP may therefore also be considered as being equal to a sum of one poly width and one poly spacing.
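The port behavior described for the bit cell 100 can be summarized in a short behavioral model. This is a sketch, not the circuit: which write bit line encodes which logic value, and which read port carries the true (rather than complementary) value, are assumptions made for illustration.

```python
class ThreePortBitCell:
    """Behavioral sketch of the 10T/3-port cell of FIGS. 1A and 1B: one
    write port (WWL with complementary WBL1/WBL2) and two complementary
    read ports (RWL1 and RWL2). Signal-to-value mapping is assumed."""

    def __init__(self):
        self.q = 0  # state held by the cross-coupled inverter latch

    def write(self, wwl, wbl1=0, wbl2=0):
        # A value is written only while the write word line is asserted;
        # exactly one of the complementary write bit lines selects it.
        if wwl and wbl1 != wbl2:
            self.q = 1 if wbl2 else 0

    def read(self, rwl1=0, rwl2=0):
        # Asserted read ports return complementary data values; a
        # de-asserted port's bit line is modeled as None.
        port1 = self.q if rwl1 else None
        port2 = (1 - self.q) if rwl2 else None
        return port1, port2

cell = ThreePortBitCell()
cell.write(wwl=1, wbl2=1)  # write a logic one through the write port
```

Reading with both read word lines asserted then yields complementary values on the two read bit lines, matching the "0"/"1" pair shown in FIG. 1B.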
In a 10 nm semiconductor manufacturing process (e.g., a process that has a smallest available line distance/feature size of 10 nm), CPP may be approximately equal to 60-66 nm. For comparative purposes, CPP for a 14 nm process (e.g., a process that has a smallest available line distance/feature size of 14 nm) may be approximately 80-90 nm.

[0027] To maintain a bit cell width at 2*CPP (e.g., 132 nm) or less for sub-14 nm processes (e.g., 10 nm processes or 7 nm processes) and to improve routing between different components of the bit cell, the techniques of the present disclosure (as further described with reference to FIG. 2) describe multiple bit cell rows (e.g., a first bit cell row and a second bit cell row) that share common global word lines in a single metal layer. For example, a first global read word line, a second global read word line, and a global write word line may be located in a fourth metal layer (M4). The pitch of each global word line may be approximately 80 nm. Because the width of two bit cell rows is approximately 264 nm (e.g., 2*132 nm), the three global word lines may be patterned using a width that is less than the width of two bit cells. For example, the total width occupied by the three global word lines (e.g., 3*80 nm = 240 nm) is less than the width of the two bit cell rows.

[0028] As further described with respect to FIG. 2, selection logic may selectively couple the global word lines to the first bit cell row or to the second bit cell row. Thus, all of the global word lines may be located in a single metal layer (M4), as opposed to one global word line per metal layer, which may improve routing between different components within the bit cells. For example, a sixth metal layer (M6) and an eighth metal layer (M8) may be relatively open to routing because each global word line is in the fourth metal layer (M4).
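The width arithmetic in the passage above can be checked directly, using values taken from the ranges quoted (66 nm is assumed as the representative 10 nm CPP):

```python
# Worked check of the widths discussed above for a 10 nm process.
cpp_nm = 66                       # contacted poly pitch (~60-66 nm quoted)
cell_width_nm = 2 * cpp_nm        # bit cell width = 2*CPP = 132 nm
two_rows_nm = 2 * cell_width_nm   # span of two bit cell rows = 264 nm

global_wl_pitch_nm = 80
three_global_wls_nm = 3 * global_wl_pitch_nm  # 3*80 nm = 240 nm

# The three shared global word lines fit within the two-row span,
# which is why they can share a single metal layer (M4).
fits = three_global_wls_nm < two_rows_nm
```
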
Additionally, the global word lines may have a relatively large pitch (e.g., 80 nm), which may decrease read/write latency due to decreased word line resistive-capacitive (RC) impedance.

[0029] Referring to FIG. 2, a layout diagram 200 of a 3-port SRAM array having shared global read and write word lines is shown. The layout diagram 200 includes a first bit cell 202, a second bit cell 204, a third bit cell 206, and a fourth bit cell 208. Each bit cell 202-208 may have the circuit layout shown in FIGS. 1A and 1B. The first bit cell 202 and the third bit cell 206 may be included in a first array of the 3-port SRAM array, and the second bit cell 204 and the fourth bit cell 208 may be included in a second array of the 3-port SRAM array. The first array (e.g., the first and third bit cells 202, 206) may have a width that is equal to twice the CPP of one of the bit cells 202-208, and the second array (e.g., the second and fourth bit cells 204, 208) may also have a width that is equal to twice the CPP of one of the bit cells 202-208. For example, in a 10 nm semiconductor manufacturing process, the first array and the second array may each have a width of approximately 132 nm. Thus, the combined width of the first array and the second array may be approximately equal to 264 nm.

[0030] When manufactured, the bit cells 202-208 may include various components/layers, such as fins (FinFETs including source/drain regions), transistor gates (alternatively referred to as poly lines), middle-of-line contacts (e.g., local interconnects) for transistor source/drain regions (MD), middle-of-line contacts (e.g., local interconnects) for gates/poly lines (MP), a first metal layer (M1), vias connecting MD and MP to M1 (Via0), a second metal layer (M2), vias connecting M1 to M2 (Via1), a third metal layer (M3), and vias connecting M2 to M3 (Via2).

[0031] FIG.
2 illustrates the second metal layer (M2) and the third metal layer (M3). The second metal layer (M2) may be coupled to the bit cells 202-208, and the third metal layer (M3) may be patterned above the second metal layer (M2). A first local read word line 220 may be included in the second metal layer (M2). For the bit cells 202, 206 in the first array, the first local read word line 220 may correspond to the first read word line (RWL1) 133 of FIGS. 1A and 1B. For example, the first local read word line 220 may be coupled to a gate of a transistor in the first bit cell 202 (that corresponds to the transistor 125 of FIGS. 1A and 1B) and may be coupled to a gate of a transistor in the third bit cell 206 (that corresponds to the transistor 125).

[0032] A first local write word line 222 may be included in the third metal layer (M3). For the bit cells 202, 206 in the first array, the first local write word line 222 may correspond to the write word line (WWL) 137 of FIGS. 1A and 1B. For example, the first local write word line 222 may be coupled to gates of transistors in the first bit cell 202 (that correspond to the transistors 121, 122 of FIGS. 1A and 1B) and may be coupled to gates of transistors in the third bit cell 206 (that correspond to the transistors 121, 122).

[0033] A second local read word line 224 may also be included in the second metal layer (M2). For the bit cells 202, 206 in the first array, the second local read word line 224 may correspond to the second read word line (RWL2) 134 of FIGS. 1A and 1B. For example, the second local read word line 224 may be coupled to a gate of a transistor in the first bit cell 202 (that corresponds to the transistor 126 of FIGS. 1A and 1B) and may be coupled to a gate of a transistor in the third bit cell 206 (that corresponds to the transistor 126).

[0034] A third local read word line 230 may also be included in the second metal layer (M2).
For the bit cells 204, 208 in the second array, the third local read word line 230 may correspond to the first read word line (RWL1) 133 of FIGS. 1A and 1B. For example, the third local read word line 230 may be coupled to a gate of a transistor in the second bit cell 204 (that corresponds to the transistor 125 of FIGS. 1A and 1B) and may be coupled to a gate of a transistor in the fourth bit cell 208 (that corresponds to the transistor 125).

[0035] A second local write word line 232 may also be included in the third metal layer (M3). For the bit cells 204, 208 in the second array, the second local write word line 232 may correspond to the write word line (WWL) 137 of FIGS. 1A and 1B. For example, the second local write word line 232 may be coupled to gates of transistors in the second bit cell 204 (that correspond to the transistors 121, 122 of FIGS. 1A and 1B) and may be coupled to gates of transistors in the fourth bit cell 208 (that correspond to the transistors 121, 122).

[0036] A fourth local read word line 234 may also be included in the second metal layer (M2). For the bit cells 204, 208 in the second array, the fourth local read word line 234 may correspond to the second read word line (RWL2) 134 of FIGS. 1A and 1B. For example, the fourth local read word line 234 may be coupled to a gate of a transistor in the second bit cell 204 (that corresponds to the transistor 126 of FIGS. 1A and 1B) and may be coupled to a gate of a transistor in the fourth bit cell 208 (that corresponds to the transistor 126).

[0037] In a standard bit cell that includes a poly-gate having a length oriented in the horizontal direction, a first metal layer may have a length oriented in a vertical direction, a second metal layer may have a length oriented in a horizontal direction (as illustrated in the embodiment of FIG. 2), and a third metal layer may have a length oriented in a vertical direction. However, because the length of the third metal layer (M3) of FIG.
2 is oriented in the horizontal direction, the third metal layer (M3) is a "wrong direction layer." Thus, the pitch of the third metal layer (M3) may be approximately equal to 126 nm. Because the first metal layer (M1) (not shown) and the second metal layer (M2) of FIG. 2 are "right direction layers" (e.g., layers having lengths that are oriented in a similar direction as corresponding layers in a standard bit cell), the first metal layer (M1) and the second metal layer (M2) have a relatively low pitch (e.g., approximately equal to 42 nm).

[0038] When migrating from a 14 nm process to a 10 nm process, SADP may be preferable for patterning metal layers of the bit cells 202-208. Because SADP may be ill-suited for jogs/turns, the metal layers of the bit cells 202-208 may correspond to linear-only patterns. When using linear-only patterns at 10 nm, three independently accessible word lines (2 read word lines and 1 write word line) may be patterned in the second and third metal layers (M2, M3) for each bit cell 202-208.

[0039] As described above, the second metal layer (M2) is a "right direction layer" and has a relatively low pitch. Thus, the two read word lines (RWL1, RWL2) 133, 134 may be patterned in the second metal layer (M2) without expanding the width of the bit cells 202-208. For example, each read word line (RWL1, RWL2) 133, 134 may have a width of approximately 23 nm (satisfying the pitch requirement of the second metal layer (M2)) and may accommodate the width of the bit cells 202-208 (e.g., 2*CPP or 132 nm).

[0040] As described above, the third metal layer (M3) is a "wrong direction layer" and has a relatively high pitch. Thus, a single write word line (WWL) 137 may be patterned in the third metal layer (M3) for each bit cell 202-208 without expanding the width of the bit cells 202-208.
Because a single write word line (WWL) 137 is patterned in the third metal layer (M3) (as opposed to the two read word lines (RWL1, RWL2) 133, 134, which would increase the width of the bit cells 202-208), the write word line (WWL) 137 may have a relatively large width. For example, the write word line (WWL) 137 may have a width of approximately 66 nm (satisfying the pitch requirement of the third metal layer (M3)) and may accommodate the width of the bit cells 202-208. The relatively large width of the write word line (WWL) 137 may reduce write latency for the bit cells 202-208. For example, an increased width of the write word line (WWL) 137 may reduce the RC impedance of the write word line (WWL) 137, resulting in reduced latency.

[0041] FIG. 2 also illustrates a fourth metal layer (M4). A first global read word line 240, a global write word line 242, and a second global read word line 244 may be included in the fourth metal layer (M4). The fourth metal layer (M4) may be a "right direction layer" (e.g., oriented in a similar manner as a corresponding layer in a standard bit cell) and may have a relatively low pitch requirement. For example, in a 10 nm manufacturing process, the pitch requirement for the fourth metal layer (M4) may be approximately 80 nm. Thus, the pitch of each global word line 240-244 may be approximately 80 nm. Because the combined width of the first array and the second array is approximately 264 nm (e.g., 2*132 nm), the three global word lines 240-244 may be patterned using a width that is less than the combined width of the first array and the second array. For example, the total width occupied by the three global word lines 240-244 (e.g., 3*80 nm = 240 nm) is less than the combined width of the first and second arrays.

[0042] Row select logic 250 may be configured to control whether the global word lines 240-244 are coupled to the first array or to the second array.
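The width arithmetic above can be checked with a short sketch. This is an illustrative calculation using the approximate figures quoted for a 10 nm process (a CPP of 66 nm and an M4 pitch of 80 nm); actual values vary by process:

```python
# Illustrative check of the shared-global-word-line width budget (values in nm).
# The figures are the approximate numbers quoted above for a 10 nm process.
cpp = 66                                 # contacted poly pitch (upper end of 60-66 nm)
bit_cell_row_width = 2 * cpp             # each bit cell row is 2*CPP wide -> 132 nm
two_row_width = 2 * bit_cell_row_width   # first array + second array -> 264 nm

m4_pitch = 80                            # pitch of each global word line in M4
global_word_line_width = 3 * m4_pitch    # RWL1 + WWL + RWL2 -> 240 nm

# The three shared global word lines fit within the two-row footprint,
# which is why all three can be placed in the single metal layer (M4).
assert global_word_line_width < two_row_width   # 240 nm < 264 nm
```

Note that a single 132 nm row could not accommodate all three 80 nm-pitch lines (240 nm > 132 nm), which illustrates why sharing the global word lines across two rows is what makes the single-layer placement possible.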
For example, based on a logical value (e.g., a voltage level) of a selection signal, the row select logic 250 may couple one of the global word lines 240-244 to a corresponding local word line 220-224 in the first array or to a corresponding local word line 230-234 in the second array. The operations of the row select logic 250 are described in greater detail with respect to FIG. 3.

[0043] The layout diagram 200 of FIG. 2 may provide improved routing between different components within bit cells 202-208. For example, compared to bit cell architectures that have one global word line in a fourth metal layer (M4), one global word line in a sixth metal layer (M6), and one global word line in an eighth metal layer (M8), the layout diagram 200 includes three global word lines 240-244 in the fourth metal layer (M4). Thus, the upper metal layers (e.g., the sixth metal layer (M6) and the eighth metal layer (M8)) may be relatively open to routing because the global word lines 240-244 are placed in a single metal layer (e.g., the fourth metal layer (M4)). Additionally, because the global word lines 240-244 are placed across the width of two arrays (as opposed to a typical bit cell architecture where global word lines are placed across the width of a single array), the global word lines 240-244 may have a relatively large width, which may decrease read/write latency due to decreased word line RC impedance.

[0044] Referring to FIG. 3, a particular illustrative embodiment of the row select logic 250 of FIG. 2 is shown. The row select logic 250 includes a first logical NAND gate 302, a second logical NAND gate 304, a third logical NAND gate 306, a first logical AND gate 312, a second logical AND gate 314, and a third logical AND gate 316.

[0045] The row select logic 250 may be configured to control whether the global word lines 240-244 are coupled to the first array of bit cells (e.g., the first and third bit cells 202, 206 of FIG.
2) or to the second array of bit cells (e.g., the second and fourth bit cells 204, 208 of FIG. 2). To illustrate, a selection signal 320 may be provided to a first input of each of the logical NAND gates 302-306 and to a second input of each of the logical AND gates 312-316. The first global read word line 240 may be coupled to a second input of the first logical NAND gate 302 and to a first input of the first logical AND gate 312. The global write word line 242 may be coupled to a second input of the second logical NAND gate 304 and to a first input of the second logical AND gate 314. The second global read word line 244 may be coupled to a second input of the third logical NAND gate 306 and to a first input of the third logical AND gate 316.

[0046] If the first global read word line 240 has a logical high voltage level and the selection signal 320 has a logical low voltage level, the first logical NAND gate 302 provides a logical high voltage level to the first local read word line 220 (e.g., to "couple" the first global read word line 240 to the first local read word line 220) and the first logical AND gate 312 provides a logical low voltage level to the third local read word line 230 (e.g., to "decouple" the first global read word line 240 from the third local read word line 230).
If the first global read word line 240 has a logical high voltage level and the selection signal 320 has a logical high voltage level, the first logical NAND gate 302 provides a logical low voltage level to the first local read word line 220 (e.g., to "decouple" the first global read word line 240 from the first local read word line 220) and the first logical AND gate 312 provides a logical high voltage level to the third local read word line 230 (e.g., to "couple" the first global read word line 240 to the third local read word line 230).

[0047] If the global write word line 242 has a logical high voltage level and the selection signal 320 has a logical low voltage level, the second logical NAND gate 304 provides a logical high voltage level to the first local write word line 222 (e.g., to "couple" the global write word line 242 to the first local write word line 222) and the second logical AND gate 314 provides a logical low voltage level to the second local write word line 232 (e.g., to "decouple" the global write word line 242 from the second local write word line 232).
If the global write word line 242 has a logical high voltage level and the selection signal 320 has a logical high voltage level, the second logical NAND gate 304 provides a logical low voltage level to the first local write word line 222 (e.g., to "decouple" the global write word line 242 from the first local write word line 222) and the second logical AND gate 314 provides a logical high voltage level to the second local write word line 232 (e.g., to "couple" the global write word line 242 to the second local write word line 232).

[0048] If the second global read word line 244 has a logical high voltage level and the selection signal 320 has a logical low voltage level, the third logical NAND gate 306 provides a logical high voltage level to the second local read word line 224 (e.g., to "couple" the second global read word line 244 to the second local read word line 224) and the third logical AND gate 316 provides a logical low voltage level to the fourth local read word line 234 (e.g., to "decouple" the second global read word line 244 from the fourth local read word line 234). If the second global read word line 244 has a logical high voltage level and the selection signal 320 has a logical high voltage level, the third logical NAND gate 306 provides a logical low voltage level to the second local read word line 224 (e.g., to "decouple" the second global read word line 244 from the second local read word line 224) and the third logical AND gate 316 provides a logical high voltage level to the fourth local read word line 234 (e.g., to "couple" the second global read word line 244 to the fourth local read word line 234).

[0049] The row select logic 250 may enable the global word lines 240-244 to be selectively coupled to the respective local word lines 220-224, 230-234.
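The gating described in paragraphs [0046]-[0048] can be modeled with simple Boolean logic: for each global word line, a NAND gate drives the first array's local word line and an AND gate drives the second array's local word line, with the selection signal 320 steering between them. The sketch below is an illustrative model only (voltage levels are modeled as booleans, and the function name is hypothetical); it exercises the asserted-global-word-line cases the text describes:

```python
def route_global_word_line(global_line: bool, select: bool) -> tuple[bool, bool]:
    """Model one NAND/AND pair of the row select logic 250.

    Returns (first_array_local, second_array_local): the NAND gate output
    drives the first array's local word line, and the AND gate output
    drives the second array's local word line.
    """
    first_array_local = not (global_line and select)   # logical NAND
    second_array_local = global_line and select        # logical AND
    return first_array_local, second_array_local

# Global word line asserted, selection signal low: first array is coupled.
assert route_global_word_line(True, False) == (True, False)
# Global word line asserted, selection signal high: second array is coupled.
assert route_global_word_line(True, True) == (False, True)
```

Note that with the global word line deasserted, a NAND gate's output is high regardless of the selection signal; the text only describes behavior when a global word line is asserted, so only those cases are exercised above.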
The row select logic 250 may enable the global word lines 240-244 to be placed in the fourth metal layer (M4) as opposed to three distinct metal layers (e.g., the fourth metal layer (M4), the sixth metal layer (M6), and the eighth metal layer (M8)). Thus, the upper metal layers (e.g., the sixth metal layer (M6) and the eighth metal layer (M8)) may be relatively open to routing because the global word lines 240-244 are placed in a single metal layer (e.g., the fourth metal layer (M4)). The row select logic 250 may also enable the global word lines 240-244 to have a relatively large width, which may decrease read/write latency due to decreased word line RC impedance.

[0050] Referring to FIG. 4, a flowchart of a particular illustrative embodiment of a method 400 of operating a 3-port SRAM array having shared global read and write word lines is shown. The method may be performed using the row select logic 250 of FIGS. 2 and 3.

[0051] The method 400 includes receiving a selection signal, at 402. For example, referring to FIG. 3, the row select logic 250 may receive the selection signal 320. The selection signal 320 may be provided to a first input of each of the logical NAND gates 302-306 and to a second input of each of the logical AND gates 312-316.

[0052] A first global read word line, a second global read word line, and a global write word line may be coupled to a first row of bit cells if the selection signal has a first logical value, at 404. For example, referring to FIGS.
2 and 3, if the first global read word line 240 has a logical high voltage level and the selection signal 320 has a logical low voltage level, the first logical NAND gate 302 provides a logical high voltage level to the first local read word line 220 (e.g., to "couple" the first global read word line 240 to the first local read word line 220) and the first logical AND gate 312 provides a logical low voltage level to the third local read word line 230 (e.g., to "decouple" the first global read word line 240 from the third local read word line 230). The first local read word line 220 is coupled to the first row of bit cells (e.g., the first array of bit cells in FIG. 2).

[0053] As another example, if the global write word line 242 has a logical high voltage level and the selection signal 320 has a logical low voltage level, the second logical NAND gate 304 provides a logical high voltage level to the first local write word line 222 (e.g., to "couple" the global write word line 242 to the first local write word line 222) and the second logical AND gate 314 provides a logical low voltage level to the second local write word line 232 (e.g., to "decouple" the global write word line 242 from the second local write word line 232). The first local write word line 222 is coupled to the first row of bit cells (e.g., the first array of bit cells in FIG. 2). As another example, if the second global read word line 244 has a logical high voltage level and the selection signal 320 has a logical low voltage level, the third logical NAND gate 306 provides a logical high voltage level to the second local read word line 224 (e.g., to "couple" the second global read word line 244 to the second local read word line 224) and the third logical AND gate 316 provides a logical low voltage level to the fourth local read word line 234 (e.g., to "decouple" the second global read word line 244 from the fourth local read word line 234).
The second local read word line 224 is coupled to the first row of bit cells (e.g., the first array of bit cells in FIG. 2).

[0054] The first global read word line, the second global read word line, and the global write word line may be coupled to a second row of bit cells if the selection signal has a second logical value, at 406. For example, referring to FIGS. 2 and 3, if the first global read word line 240 has a logical high voltage level and the selection signal 320 has a logical high voltage level, the first logical NAND gate 302 provides a logical low voltage level to the first local read word line 220 (e.g., to "decouple" the first global read word line 240 from the first local read word line 220) and the first logical AND gate 312 provides a logical high voltage level to the third local read word line 230 (e.g., to "couple" the first global read word line 240 to the third local read word line 230). The third local read word line 230 is coupled to the second row of bit cells (e.g., the second array of bit cells in FIG. 2).

[0055] As another example, if the global write word line 242 has a logical high voltage level and the selection signal 320 has a logical high voltage level, the second logical NAND gate 304 provides a logical low voltage level to the first local write word line 222 (e.g., to "decouple" the global write word line 242 from the first local write word line 222) and the second logical AND gate 314 provides a logical high voltage level to the second local write word line 232 (e.g., to "couple" the global write word line 242 to the second local write word line 232). The second local write word line 232 is coupled to the second row of bit cells (e.g., the second array of bit cells in FIG. 2).
As another example, if the second global read word line 244 has a logical high voltage level and the selection signal 320 has a logical high voltage level, the third logical NAND gate 306 provides a logical low voltage level to the second local read word line 224 (e.g., to "decouple" the second global read word line 244 from the second local read word line 224) and the third logical AND gate 316 provides a logical high voltage level to the fourth local read word line 234 (e.g., to "couple" the second global read word line 244 to the fourth local read word line 234). The fourth local read word line 234 is coupled to the second row of bit cells (e.g., the second array of bit cells in FIG. 2).

[0056] The first global read word line 240, the global write word line 242, and the second global read word line 244 are located in a common metal layer (e.g., the fourth metal layer (M4) of FIG. 2). Thus, the method 400 of FIG. 4 may provide a technique for coupling the global word lines 240-244 to the respective local word lines 220-224, 230-234 so that the global word lines 240-244 may be placed in the common metal layer.

[0057] Referring to FIG. 5, a block diagram of a particular illustrative embodiment of an electronic device is depicted and generally designated 500. The electronic device 500 includes a processor 510, such as a digital signal processor (DSP) or a central processing unit (CPU), coupled to a memory 532.

[0058] The processor 510 may be coupled to an SRAM device 564 that includes an array of bit cells with shared global word lines. For example, the SRAM device 564 may include the bit cells 202-208 of FIG. 2 and may include the metal layer configuration as described with respect to FIG. 2. In a particular embodiment, the SRAM device 564 may also include the row select logic 250 of FIGS. 2-3. In another particular embodiment, functions of the row select logic 250 may be implemented by the processor 510. It should be noted that although FIG.
5 illustrates use of the SRAM device 564 coupled to the processor 510, this is not to be considered limiting. SRAM devices in accordance with the present disclosure, such as the SRAM device 564, may be included in any type of memory of any type of electronic device.

[0059] FIG. 5 shows a display controller 526 that is coupled to the processor 510 and to a display 528. A coder/decoder (CODEC) 534 can also be coupled to the processor 510. A speaker 536 and a microphone 538 can be coupled to the CODEC 534. FIG. 5 also indicates that a wireless controller 540 can be coupled to the processor 510 and to an antenna 542. In a particular embodiment, the processor 510, the display controller 526, the memory 532, the CODEC 534, and the wireless controller 540 are included in a system-in-package or system-on-chip device (e.g., mobile station modem (MSM)) 522. In a particular embodiment, an input device 530 and a power supply 544 are coupled to the system-on-chip device 522. Moreover, in a particular embodiment, as illustrated in FIG. 5, the display 528, the input device 530, the speaker 536, the microphone 538, the antenna 542, and the power supply 544 are external to the system-on-chip device 522. However, each of the display 528, the input device 530, the speaker 536, the microphone 538, the antenna 542, and the power supply 544 can be coupled to a component of the system-on-chip device 522, such as an interface or a controller.

[0060] Although the SRAM device 564 is depicted in the wireless device 500 of FIG. 5, in other embodiments, the SRAM device 564 may be included in other devices.
As non-limiting examples, the SRAM device 564 may be included in a set top box, an entertainment unit, a navigation device, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a video player, a digital video player, a digital video disc (DVD) player, a portable digital video player, or any other device.

[0061] In conjunction with the described embodiments, an apparatus includes first means for performing a read operation configured to be selectively coupled to a first row of bit cells and to a second row of bit cells. For example, the first means for performing the read operation may include the first global read word line 240 of FIGS. 2-3, the SRAM device 564 of FIG. 5, one or more other devices configured to perform the read operation, or any combination thereof.

[0062] The apparatus also includes second means for performing the read operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. For example, the second means for performing the read operation may include the second global read word line 244 of FIGS. 2-3, the SRAM device 564 of FIG. 5, one or more other devices configured to perform the read operation, or any combination thereof.

[0063] The apparatus also includes means for performing a write operation configured to be selectively coupled to the first row of bit cells and to the second row of bit cells. For example, the means for performing the write operation may include the global write word line 242 of FIGS. 2-3, the SRAM device 564 of FIG. 5, one or more other devices configured to perform the write operation, or any combination thereof. The first means for performing the read operation, the second means for performing the read operation, and the means for performing the write operation may be located in a common metal layer (e.g., the fourth metal layer (M4) of FIG.
2).

[0064] The foregoing disclosed devices and functionalities may be designed and configured into computer files (e.g., RTL, GDSII, GERBER, etc.) stored on computer readable media. Some or all such files may be provided to fabrication handlers who fabricate devices based on such files. Resulting products include semiconductor wafers that are then cut into semiconductor die and packaged into a semiconductor chip. The chips may be employed in electronic devices. FIG. 6 depicts a particular illustrative embodiment of an electronic device manufacturing process 600. For example, the manufacturing process 600 may be used to manufacture electronic devices that include an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3.

[0065] Physical device information 602 is received at the manufacturing process 600, such as at a research computer 606. The physical device information 602 may include design information representing at least one physical property of an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3. For example, the physical device information 602 may include physical parameters, material characteristics, and structure information that is entered via a user interface 604 coupled to the research computer 606. The research computer 606 includes a processor 608, such as one or more processing cores, coupled to a computer-readable medium (e.g., a non-transitory computer-readable medium), such as a memory 610. The memory 610 may store computer-readable instructions that are executable to cause the processor 608 to transform the physical device information 602 to comply with a file format and to generate a library file 612.

[0066] In a particular embodiment, the library file 612 includes at least one data file including the transformed design information.
For example, the library file 612 may include a library of bit cells, including an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3, that is provided for use with an electronic design automation (EDA) tool 620.

[0067] The library file 612 may be used in conjunction with the EDA tool 620 at a design computer 614 including a processor 616, such as one or more processing cores, coupled to a memory 618. The EDA tool 620 may be stored as processor executable instructions at the memory 618 to enable a user of the design computer 614 to design a circuit including an array of bit cells, according to the shared global word line techniques described with respect to FIGS. 2-3, from the library file 612. For example, a user of the design computer 614 may enter circuit design information 622 via a user interface 624 coupled to the design computer 614. The circuit design information 622 may include design information representing at least one physical property of an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3. To illustrate, the circuit design information 622 may include identification of particular circuits and relationships to other elements in a circuit design, positioning information, feature size information, interconnection information, or other information representing a physical property of an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3.

[0068] The design computer 614 may be configured to transform the design information, including the circuit design information 622, to comply with a file format. To illustrate, the file format may include a database binary file format representing planar geometric shapes, text labels, and other information about a circuit layout in a hierarchical format, such as a Graphic Data System (GDSII) file format.
The design computer 614 may be configured to generate a data file including the transformed design information, such as a GDSII file 626 that includes information describing an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3, in addition to other circuits or information. To illustrate, the data file may include information corresponding to a system-on-chip (SOC) that includes an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3, and that also includes additional electronic circuits and components within the SOC.

[0069] The GDSII file 626 may be received at a fabrication process 628 to manufacture an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3, according to transformed information in the GDSII file 626. For example, a device manufacture process may include providing the GDSII file 626 to a mask manufacturer 630 to create one or more masks, such as masks to be used with photolithography processing, illustrated as a representative mask 632. The mask 632 may be used during the fabrication process to generate one or more wafers 633, which may be tested and separated into dies, such as a representative die 636. The die 636 includes a circuit including a device that includes an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3.

[0070] For example, the fabrication process 628 may include a processor 634 and a memory 635 to initiate and/or control the fabrication process 628. The memory 635 may include executable instructions such as computer-readable instructions or processor-readable instructions.
The executable instructions may include one or more instructions that are executable by a computer such as the processor 634.

[0071] The fabrication process 628 may be implemented by a fabrication system that is fully automated or partially automated. For example, the fabrication process 628 may be automated according to a schedule. The fabrication system may include fabrication equipment (e.g., processing tools) to perform one or more operations to form a semiconductor device. For example, the fabrication equipment may be configured to deposit one or more materials using chemical vapor deposition (CVD) and/or physical vapor deposition (PVD), pattern materials using a single-mask or multi-mask litho-etch process (e.g., two-mask LELE), pattern materials using a litho-freeze-litho-etch (LFLE) process, pattern materials using a self-aligned double patterning (SADP) process, epitaxially grow one or more materials, conformally deposit one or more materials, apply a hardmask, apply an etching mask, perform etching, perform planarization, form a dummy gate stack, form a gate stack, perform a standard clean 1 type, etc. In a particular embodiment, the fabrication process 628 corresponds to a semiconductor manufacturing process associated with a technology node smaller than 14 nm (e.g., 10 nm, 7 nm, etc.). The specific process or combination of processes used to manufacture a device (e.g., including an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3) may be based on design constraints and available materials/equipment. Thus, in particular embodiments, different processes may be used than described herein during manufacture of the device.

[0072] The fabrication system (e.g., an automated system that performs the fabrication process 628) may have a distributed architecture (e.g., a hierarchy).
For example, the fabrication system may include one or more processors, such as the processor 634, one or more memories, such as the memory 635, and/or controllers that are distributed according to the distributed architecture. The distributed architecture may include a high-level processor that controls or initiates operations of one or more low-level systems. For example, a high-level portion of the fabrication process 628 may include one or more processors, such as the processor 634, and the low-level systems may each include or may be controlled by one or more corresponding controllers. A particular controller of a particular low-level system may receive one or more instructions (e.g., commands) from a particular high-level system, may issue sub-commands to subordinate modules or process tools, and may communicate status data back to the particular high-level system. Each of the one or more low-level systems may be associated with one or more corresponding pieces of fabrication equipment (e.g., processing tools). In a particular embodiment, the fabrication system may include multiple processors that are distributed in the fabrication system. For example, a controller of a low-level system component may include a processor, such as the processor 634.

[0073] Alternatively, the processor 634 may be a part of a high-level system, subsystem, or component of the fabrication system. In another embodiment, the processor 634 includes distributed processing at various levels and components of a fabrication system.

[0074] The executable instructions included in the memory 635 may enable the processor 634 to form (or initiate formation of) an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3. The die 636 may be provided to a packaging process 638 where the die 636 is incorporated into a representative package 640.
For example, the package 640 may include the single die 636 or multiple dies, such as a system-in-package (SiP) arrangement. The package 640 may be configured to conform to one or more standards or specifications, such as Joint Electron Device Engineering Council (JEDEC) standards.[0075] Information regarding the package 640 may be distributed to various product designers, such as via a component library stored at a computer 646. The computer 646 may include a processor 648, such as one or more processing cores, coupled to a memory 650. A printed circuit board (PCB) tool may be stored as processor executable instructions at the memory 650 to process PCB design information 642 received from a user of the computer 646 via a user interface 644. The PCB design information 642 may include physical positioning information of a packaged semiconductor device on a circuit board, the packaged semiconductor device corresponding to the package 640 including an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3.[0076] The computer 646 may be configured to transform the PCB design information 642 to generate a data file, such as a GERBER file 652 with data that includes physical positioning information of a packaged semiconductor device on a circuit board, as well as layout of electrical connections such as traces and vias, where the packaged semiconductor device corresponds to the package 640 including an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3. In other embodiments, the data file generated by the transformed PCB design information may have a format other than a GERBER format.[0077] The GERBER file 652 may be received at a board assembly process 654 and used to create PCBs, such as a representative PCB 656, manufactured in accordance with the design information stored within the GERBER file 652.
For example, the GERBER file 652 may be uploaded to one or more machines to perform various steps of a PCB production process. The PCB 656 may be populated with electronic components including the package 640 to form a representative printed circuit assembly (PCA) 658.[0078] The PCA 658 may be received at a product manufacture process 660 and integrated into one or more electronic devices, such as a first representative electronic device 662 and a second representative electronic device 664. For example, the first representative electronic device 662, the second representative electronic device 664, or both, may include or correspond to the electronic device 500 of FIG. 5, or a component thereof, such as the SRAM device 564. As an illustrative, non-limiting example, the first representative electronic device 662, the second representative electronic device 664, or both, may include a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a satellite phone, a computer, a tablet, a portable computer, a processor (or other electronic device) within a vehicle, or a desktop computer. Alternatively or additionally, the first representative electronic device 662, the second representative electronic device 664, or both, may include a set top box, an entertainment unit, a navigation device, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a video player, a digital video player, a digital video disc (DVD) player, a portable digital video player, any other device that stores or retrieves data or computer instructions, or a combination thereof, into which an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3, is integrated.
As another illustrative, non-limiting example, one or more of the electronic devices 662 and 664 may include remote units, such as mobile phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, global positioning system (GPS) enabled devices, navigation devices, fixed location data units such as meter reading equipment, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Although FIG. 6 illustrates remote units according to teachings of the disclosure, the disclosure is not limited to these illustrated units. Embodiments of the disclosure may be suitably employed in any device which includes active integrated circuitry including memory and on-chip circuitry.[0079] A device that includes an array of bit cells according to the shared global word line techniques described with respect to FIGS. 2-3, may be fabricated, processed, and incorporated into an electronic device, as described in the illustrative process 600. One or more aspects of the embodiments disclosed with respect to FIGS. 1A-6 may be included at various processing stages, such as within the library file 612, the GDSII file 626 (e.g., a file having a GDSII format), and the GERBER file 652 (e.g., a file having a GERBER format), as well as stored at the memory 610 of the research computer 606, the memory 618 of the design computer 614, the memory 650 of the computer 646, the memory of one or more other computers or processors (not shown) used at the various stages, such as at the board assembly process 654, and also incorporated into one or more other physical embodiments such as the mask 632, the die 636, the package 640, the PCA 658, other products such as prototype circuits or devices (not shown), or any combination thereof.
Although various representative stages of production from a physical device design to a final product are depicted, in other embodiments fewer stages may be used or additional stages may be included. Similarly, the process 600 may be performed by a single entity or by one or more entities performing various stages of the process 600.[0080] Although one or more of FIGS. 1A-6 may illustrate systems, apparatuses, and/or methods according to the teachings of the disclosure, the disclosure is not limited to these illustrated systems, apparatuses, and/or methods. Embodiments of the disclosure may be suitably employed in any device that includes integrated circuitry including memory, a processor, and on-chip circuitry. One or more functions or components of any of FIGS. 1A-6 as illustrated or described herein may be combined with one or more other portions of another of FIGS. 1A-6. Accordingly, no single embodiment described herein should be construed as limiting and embodiments of the disclosure may be suitably combined without departing from the teachings of the disclosure.[0081] Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software executed by a processor, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or processor executable instructions depends upon the particular application and design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0082] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of non-transient storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.[0083] The previous description of the disclosed embodiments is provided to enable a person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.
A method for forming a rough ruthenium-containing layer on the surface of a substrate assembly includes providing a ruthenium-containing precursor into the reaction chamber. A rough ruthenium layer may be deposited on the surface of the substrate assembly at a rate of about 100 A/minute to about 500 A/minute using the ruthenium-containing precursor. Further, a rough ruthenium oxide layer may be formed by providing a ruthenium-containing precursor and an oxygen-containing precursor into the reaction chamber to deposit the rough ruthenium oxide layer on the surface of the substrate assembly at a rate of about 100 A/minute to about 1200 A/minute. An anneal of the layers may be performed to further increase the roughness. In addition, conductive structures including a rough ruthenium layer or a rough ruthenium oxide layer are provided. Such layers may be used in conjunction with non-rough ruthenium and/or non-rough ruthenium oxide layers to form conductive structures. For example, such structures may be part of a capacitor structure, e.g., bottom electrode of a capacitor.
What is claimed is: 1. A method for forming a rough conductive layer in the fabrication of integrated circuits, the method comprising: providing a substrate assembly in a reaction chamber, the substrate assembly including a surface; maintaining the substrate assembly surface at a temperature in a range of about 100 C to about 400 C; maintaining the pressure of the reaction chamber in a range of about 0.4 torr to about 10 torr; and providing a carrier gas at a flow rate of about 100 sccm to about 500 sccm through a ruthenium-containing precursor maintained at a temperature of about 15 C to about 100 C into the reaction chamber to deposit a rough ruthenium layer on the surface of the substrate assembly.2. The method of claim 1, wherein the method further includes providing a diluent gas at a flow rate of about 100 sccm to about 500 sccm.3. The method of claim 1, wherein maintaining the substrate assembly surface at a temperature includes maintaining the substrate assembly surface at a temperature in the range of about 150 C to about 250 C. 4. The method of claim 1, wherein the rough ruthenium layer is deposited at a rate of about 100 A/minute to about 500 A/minute. 5. The method of claim 4, wherein the rough ruthenium layer is deposited at a rate of about 200 A/minute to about 300 A/minute. 6. The method of claim 4, wherein the RMS roughness of the rough ruthenium layer is in a range of about 50 A to about 600 A. 7. The method of claim 4, wherein a nominal center cross-section area of grains at a surface of the rough ruthenium layer is in a range of about 100 A to about 800 A. 8. The method of claim 1, wherein the method further includes annealing the rough ruthenium layer at a temperature in a range of about 300 C to about 900 C for a time period in a range of about 30 seconds to about 30 minutes.9.
The method of claim 8, wherein annealing the rough ruthenium layer further includes annealing the rough ruthenium layer at a pressure in a range of about 0.1 millitorr to about 5 atmospheres in a gas atmosphere subjected to a glow discharge created by applying an electromagnetic field across the gas mixture.10. The method of claim 9, wherein the gas atmosphere is selected from one of oxygen, ozone, nitrogen, argon or a combination thereof, and further wherein the glow discharge is created by applying a radio frequency electromagnetic field of 13.56 megahertz at a power density of 0 to about 5 kW/cm2 across the gas atmosphere.11. A method for forming a rough conductive layer in the fabrication of integrated circuits, the method comprising: providing a substrate assembly in a reaction chamber, the substrate assembly including a surface; providing a ruthenium-containing precursor into the reaction chamber; depositing a rough ruthenium layer on the surface of the substrate assembly at a rate of about 100 A/minute to about 500 A/minute. 12. The method of claim 11, wherein the rough ruthenium layer is deposited at a rate of about 200 A/minute to about 300 A/minute. 13. The method of claim 11, wherein providing a ruthenium-containing precursor into the reaction chamber includes providing a carrier gas at a flow rate of about 100 sccm to about 500 sccm through the ruthenium-containing precursor maintained at a temperature of about 15 C to about 100 C and into the reaction chamber to deposit the rough ruthenium layer on the surface of the substrate assembly.14. The method of claim 11, wherein the method further includes maintaining the substrate assembly surface at a temperature in a range of about 100 C to about 400 C. 15. The method of claim 11, wherein the method further includes maintaining the pressure of the reaction chamber in a range of about 0.4 torr to about 10 torr.16.
The method of claim 11, wherein the method further includes annealing the rough ruthenium layer at a temperature in a range of about 300 C to about 900 C for a time period in a range of about 30 seconds to about 30 minutes.17. The method of claim 16 wherein annealing the rough ruthenium layer further includes annealing the rough ruthenium layer at a pressure in a range of about 0.1 millitorr to about 5 atmospheres in a gas atmosphere subjected to a glow discharge created by applying an electromagnetic field across the gas mixture.18. The method of claim 11, wherein providing the substrate assembly surface includes providing non-rough ruthenium, the rough layer of ruthenium formed on the non-rough ruthenium.19. The method of claim 11, wherein providing the substrate assembly surface includes providing non-rough ruthenium oxide, the rough layer of ruthenium formed on the non-rough ruthenium oxide.20. A method for forming a rough conductive layer in the fabrication of integrated circuits, the method comprising: providing a substrate assembly in a reaction chamber, the substrate assembly including a surface; providing a ruthenium-containing precursor into the reaction chamber; providing an oxygen-containing precursor into the reaction chamber; depositing a rough ruthenium oxide layer on the surface of the substrate assembly at a rate of about 100 A/minute to about 1200 A/minute, wherein the RMS roughness of the rough ruthenium oxide layer is in a range of about 50 A to about 600 A or a nominal center cross-section area of grains at a surface of the rough ruthenium oxide layer is in a range of about 100 A to about 800 A. 21. The method of claim 20, wherein the rough ruthenium oxide layer is deposited at a rate of about 300 A/minute to about 600 A/minute. 22.
The method of any of claims 20-21, wherein providing a ruthenium-containing precursor into the reaction chamber includes providing a carrier gas at a flow rate of about 100 sccm to about 500 sccm through the ruthenium-containing precursor maintained at a temperature of about 15 C to about 100 C and into the reaction chamber, and further wherein providing the oxygen-containing precursor into the reaction chamber includes providing an oxygen-containing precursor into the reaction chamber at a flow rate of about 100 sccm to about 2000 sccm.23. The method of any of claims 20-22, wherein the method further includes maintaining the substrate assembly surface at a temperature in a range of about 100 C to about 400 C. 24. The method of any of claims 20-23, wherein the method further includes annealing the rough ruthenium oxide layer at a temperature in a range of about 300 C to about 900 C for a time period in a range of about 30 seconds to about 30 minutes in a gas atmosphere subjected to a glow discharge created by applying an electromagnetic field across the gas.25. The method of claim 24, wherein the method further includes maintaining the pressure of the reaction chamber in a range of about 0.4 torr to about 100 torr. 26. The method of any of claims 20-25, wherein providing the substrate assembly surface includes providing non-rough ruthenium, the rough layer of ruthenium oxide is formed on the non-rough ruthenium.27. A conductive structure comprising at least a rough ruthenium layer, wherein a surface of the rough ruthenium layer has a surface area greater than about 1.2 times a surface area of a completely smooth surface having a substantially identical shape as the surface of the rough ruthenium layer.28.
The conductive structure of claim 27, wherein the surface of the rough ruthenium layer has a surface area greater than about 1.5 times the surface area of the completely smooth surface having the substantially identical shape as the surface of the rough ruthenium layer.29. The conductive structure of any of claims 27-28, wherein an RMS roughness of the surface of the rough ruthenium layer is in a range of about 50 A to about 600 A.30. The conductive structure of any of claims 27-29, wherein a nominal center cross-section area of grains at the surface of the rough ruthenium layer is in a range of about 100 A to about 800 A. 31. The conductive structure of any of claims 27-30, further comprising non-rough ruthenium having a surface region upon which the layer of rough ruthenium is formed.32. The conductive structure of any of claims 27-30, further comprising non-rough ruthenium oxide having a surface region upon which the layer of rough ruthenium is formed.33. A conductive structure comprising at least a rough ruthenium oxide layer, wherein a surface of the rough ruthenium oxide layer has a surface area greater than about 1.2 times a surface area of a completely smooth surface having a substantially identical shape as the surface of the rough ruthenium oxide layer.34. The conductive structure of claim 33, wherein the surface of the rough ruthenium oxide layer has a surface area greater than about 1.2 times the surface area of the completely smooth surface having the substantially identical shape as the surface of the rough ruthenium oxide layer.35. The conductive structure of any of claims 33-34, wherein the RMS roughness of the surface of the rough ruthenium oxide layer is in a range of about 50 A to about 600 A. 36. The conductive structure of any of claims 33-35, wherein a nominal cross-section grain size of grains at the surface of the rough ruthenium oxide layer is in a range of about 100 A to about 800 A. 37.
The conductive structure of any of claims 33-36, further comprising non-rough ruthenium-containing material having a surface region upon which the layer of rough ruthenium oxide is formed. 38. A method of forming a conductive structure comprising: forming non-rough ruthenium-containing material at a first deposition rate; and forming rough ruthenium-containing material on the non-rough ruthenium-containing material at a second deposition rate, wherein the second deposition rate is greater than the first deposition rate.39. The method of claim 38, wherein the rough ruthenium-containing material is formed of ruthenium and the non-rough ruthenium-containing material is formed of ruthenium. 40. The method of claim 38, wherein the rough ruthenium-containing material is formed of ruthenium oxide and the non-rough ruthenium-containing material is formed of ruthenium.41. The method of claim 38, wherein the rough ruthenium-containing material is formed of ruthenium and the non-rough ruthenium-containing material is formed of ruthenium oxide.42. The method of claim 38, wherein the rough ruthenium-containing material is formed of ruthenium oxide and the non-rough ruthenium-containing material is formed of ruthenium oxide.43. A method for use in forming a capacitor, the method comprising: providing a substrate assembly in a reaction chamber, the substrate assembly including at least one surface; and forming an electrode on the at least one surface of the substrate assembly, wherein forming the electrode comprises: providing a ruthenium-containing precursor into the reaction chamber, and depositing a rough ruthenium layer on the surface of the substrate assembly from the ruthenium precursor at a rate of about 100 A/minute to about 500 A/minute. 44. The method of claim 43, wherein the substrate assembly includes an opening defined therein, wherein the opening is defined by a bottom surface of the substrate assembly and at least one side wall extending therefrom.45.
The method of claim 43, wherein providing a ruthenium-containing precursor into the reaction chamber includes providing a carrier gas at a flow rate of about 100 sccm to about 500 sccm through a ruthenium-containing precursor maintained at a temperature of about 15 C to about 100 C into the reaction chamber to deposit the rough ruthenium layer on the surface of the substrate assembly. 46. The method of claim 45, wherein the method further includes maintaining the substrate assembly surface at a temperature in a range of about 100 C to about 400 C and maintaining the pressure of the reaction chamber in a range of about 0.4 torr to about 10 torr.47. The method of claim 45, wherein the method further includes annealing the rough ruthenium layer at a temperature in a range of about 300 C to about 900 C for a time period in a range of about 30 seconds to about 30 minutes.48. The method of claim 47, wherein annealing the rough ruthenium layer further includes annealing the rough ruthenium layer at a pressure in a range of about 0.1 millitorr to about 5 atmospheres in a gas atmosphere subjected to a glow discharge created by applying an electromagnetic field across the gas mixture.49. The method of claim 43, wherein providing the substrate assembly surface includes providing non-rough ruthenium, the rough layer of ruthenium formed on the non-rough ruthenium.50. The method of claim 43, wherein providing the substrate assembly surface includes providing non-rough ruthenium oxide, the rough layer of ruthenium formed on the non-rough ruthenium oxide.51. 
A method for use in forming a capacitor, the method comprising: providing a substrate assembly in a reaction chamber, the substrate assembly including at least one surface; and forming an electrode on the at least one surface of the substrate assembly, the forming of the electrode comprising: providing a ruthenium-containing precursor into the reaction chamber, providing an oxygen-containing precursor into the reaction chamber, and depositing a rough ruthenium oxide layer on the surface of the substrate assembly at a rate of about 100 A/minute to about 1200 A/minute, wherein the RMS roughness of the rough ruthenium oxide layer is in a range of about 50 A to about 600 A or a nominal center cross-section area of grains at a surface of the rough ruthenium oxide layer is in a range of about 100 A to about 800 A. 52. The method of claim 51, wherein the substrate assembly includes an opening defined therein, wherein the opening is defined by a bottom surface of the substrate assembly and at least one side wall extending therefrom.53. The method of any of claims 51-52, wherein providing a ruthenium-containing precursor into the reaction chamber includes providing a carrier gas at a flow rate of about 100 sccm to about 500 sccm through the ruthenium-containing precursor maintained at a temperature of about 15 C to about 100 C into the reaction chamber, and further wherein providing the oxygen-containing precursor into the reaction chamber includes providing an oxygen-containing precursor into the reaction chamber at a flow rate of about 100 sccm to about 2000 sccm. 54. The method of any of claims 51-53, wherein the method further includes maintaining the substrate assembly surface at a temperature in a range of about 100 C to about 400 C and maintaining the pressure of the reaction chamber in a range of about 0.4 torr to about 100 torr.55.
The method of any of claims 51-54, wherein the method further includes annealing the rough ruthenium oxide layer at a temperature in a range of about 300 C to about 900 C for a time period in a range of about 30 seconds to about 30 minutes.56. The method of any of claims 51-55, wherein annealing the rough ruthenium oxide layer further includes annealing the rough ruthenium layer at a pressure in a range of about 0.1 millitorr to about 5 atmospheres in a gas atmosphere subjected to a glow discharge created by applying an electromagnetic field across the gas mixture. 57. The method of any of claims 51-56, wherein providing the substrate assembly surface includes providing non-rough ruthenium, the rough layer of ruthenium formed on the non-rough ruthenium.58. A capacitor structure comprising: a first electrode formed of at least a rough ruthenium layer, wherein a surface of the rough ruthenium layer has a surface area greater than about 1.2 times a surface area of a completely smooth surface having a substantially identical shape as the surface of the rough ruthenium layer; a dielectric layer formed on at least a portion of the first electrode; and a second conductive layer formed on the dielectric layer.59. The capacitor structure of claim 58, wherein the surface of the rough ruthenium layer has a surface area greater than about 1.5 times the surface area of the completely smooth surface having the substantially identical shape as the surface of the rough ruthenium layer. 60. The capacitor structure of claim 58, wherein the first electrode further comprises non-rough ruthenium upon which the layer of rough ruthenium is formed.61. The capacitor structure of claim 58, wherein the first electrode further comprises non-rough ruthenium oxide upon which the layer of rough ruthenium is formed.62.
A capacitor structure comprising: a first electrode formed of at least a rough ruthenium oxide layer, wherein a surface of the rough ruthenium oxide layer has a surface area greater than about 1.2 times a surface area of a completely smooth surface having a substantially identical shape as the surface of the rough ruthenium oxide layer; a dielectric layer formed on at least a portion of the first electrode; and a second conductive layer formed on the dielectric layer. 63. The capacitor structure of claim 62, wherein the surface of the rough ruthenium layer has a surface area greater than about 1.5 times the surface area of the completely smooth surface having the substantially identical shape as the surface of the rough ruthenium layer.64. The capacitor structure of claim 62, wherein the first electrode further comprises non-rough ruthenium upon which the layer of rough ruthenium oxide is formed.65. The capacitor structure of claim 62, wherein the first electrode further comprises non-rough ruthenium oxide upon which the layer of rough ruthenium oxide is formed.
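The surface-area criterion recited in claims 27, 33, 58, and 62 (a rough layer whose surface area exceeds about 1.2 times, or in the dependent claims about 1.5 times, that of a completely smooth surface of substantially identical shape) reduces to a simple ratio check, sketched below. This is an illustrative sketch only; the function names, the example areas, and the threshold comparison are assumptions for exposition and not part of the claimed structures.

```python
# Hypothetical sketch of the surface-area enhancement criterion from the
# claims: a rough layer qualifies when its actual surface area exceeds a
# threshold multiple (about 1.2x, or 1.5x) of the area of a completely
# smooth surface of identical shape. All names are illustrative only.

def enhancement_factor(rough_area, smooth_area):
    """Ratio of the rough layer's surface area to the smooth reference area."""
    if smooth_area <= 0:
        raise ValueError("smooth reference area must be positive")
    return rough_area / smooth_area

def exceeds_threshold(rough_area, smooth_area, threshold=1.2):
    # True when the rough surface provides more than `threshold` times
    # the area of its smooth counterpart.
    return enhancement_factor(rough_area, smooth_area) > threshold

# A rough electrode presenting 1.8 units of actual area over a smooth
# reference area of 1.0 unit satisfies both the 1.2x and 1.5x thresholds.
factor = enhancement_factor(1.8, 1.0)
```

The point of the ratio is that capacitance scales with electrode area, so an enhancement factor above 1.2 directly increases storage capacity without enlarging the capacitor's occupation area.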
METHODS FOR FORMING ROUGH RUTHENIUM-CONTAINING LAYERS AND STRUCTURES/METHODS USING SAME

Field of the Invention

The present invention relates to semiconductor devices and the fabrication thereof. More particularly, the present invention pertains to rough conductive layers of ruthenium and/or ruthenium oxide.

Background of the Invention

In the fabrication of integrated circuits, various conductive layers are used. For example, during the formation of semiconductor devices, such as dynamic random access memories (DRAMs), conductive materials are used in the formation of storage cell capacitors and also may be used in interconnection structures, e.g., conductive layers of contact holes, vias, etc. As memory devices become more dense, it is necessary to decrease the size of circuit components forming such devices. One way to retain storage capacity of storage cell capacitors of the memory devices and at the same time decrease the memory device size is to increase the dielectric constant of the dielectric layer of the storage cell capacitor. Therefore, high dielectric constant materials are used in such applications interposed between two electrodes. One or more layers of various conductive materials may be used as the electrode material. Further, to increase the capacitance for a storage cell capacitor of a memory device without increasing the occupation area of the storage cell capacitor, various techniques have been used to increase the surface area of the lower electrode of the capacitor. For example, hemispherical grains (HSG) have been used to enhance such surface area of the lower electrode of a capacitor of a memory device. In one illustrative HSG technique, an HSG silicon surface is used as an underlayer for a metal layer to form a lower electrode having an increased surface area. For example, such a coextensive conductive layer formed over the hemispherical grain silicon surface may be formed of titanium nitride.
However, in many cases, the use of HSG to enhance surface area of the lower electrode is problematic. For example, when an HSG silicon surface is used as an underlayer for a metal in a container capacitor (e.g., a container capacitor such as described in U.S. Patent No. 5,270,241 to Dennison, et al., entitled "Optimized Container Stack Capacitor DRAM Cell Utilizing Sacrificial Oxide Deposition and Chemical Mechanical Polishing," issued December 14, 1993) there is a possibility of forming silicon dioxide between the HSG silicon surface and the metal layer of which the electrode is formed when the dielectric layer is being formed due to the diffusion of oxygen through the metal layer. Further, there is the possibility of silicon dioxide formation between the metal layer and the dielectric being formed due to the diffusion of silicon through the metal layer. Such silicon dioxide formation is likely due to the oxygen anneal required for formation of high dielectric constant materials, e.g., Ta2O5 or BaSrTiO3, over the lower electrode. However, reliable electrode connections are necessary. Formation of silicon dioxide as described above decreases the reliability of the electrode connection. Further, such silicon dioxide formation may result in a decreased series capacitance, thus degrading the storage capacity of the cell capacitor. To prevent the diffusion of oxygen to the HSG silicon surface, or the diffusion of silicon through the metal layer, a diffusion barrier, such as titanium nitride, may be used over the HSG silicon surface to form the lower electrode. The use of a diffusion barrier over the HSG silicon surface, however, also has problems associated therewith. For example, the container size of a container capacitor is relatively small. With use of multiple layers, such as an HSG silicon surface, a diffusion barrier, and then a lower metal electrode layer, a container having an undesirably large size may be required.
Further, formation of a diffusion barrier layer over an increased surface area of an HSG silicon surface, and thereafter a lower metal electrode layer thereon, will decrease the effectiveness of the HSG layer to increase the surface area of the lower electrode. In other words, the surface area of the HSG silicon surface is decreased by application of the diffusion barrier layer, and then further decreased by the application of the lower electrode layer. In such a manner, the effectiveness of increasing the lower electrode surface area with use of HSG is diminished. Further, grain size of an HSG silicon surface is somewhat limited. For example, such grain size is typically less than 200 A in nominal diameter. As such, the increase in surface area provided through use of HSG is limited accordingly. Generally, various metals and metallic compounds, for example, metals such as ruthenium and platinum, and conductive metal oxides, such as ruthenium oxide, have been proposed as the electrodes for at least one of the layers of an electrode stack for use with high dielectric constant materials. Ruthenium oxide and ruthenium electrodes have been employed as electrode materials because of the ability to easily etch such materials. For example, the article entitled "(Ba, Sr)TiO3 Films Prepared by Liquid Source Chemical Vapor Deposition on Ru Electrodes," by Kawahara et al., Jpn. J. Appl. Phys., Vol. 35 (1996), Part 1, No. 9B (September 1996), pp. 4880-4885, describes the use of ruthenium and ruthenium oxide for forming electrodes in conjunction with high dielectric constant materials. As described therein, surface roughening of such materials is believed to contribute to degradation of the structures being formed. Further, as described therein, ruthenium and ruthenium oxide materials were deposited by physical vapor deposition (PVD) processing, e.g., reactive RF sputtering processes.
Many storage cell capacitors are fabricated with electrode layers formed of a conductive material within a small, high aspect ratio opening. Typically, sputtering does not provide a sufficiently conformal layer adequate for formation of an electrode layer within such a small, high aspect ratio opening.

Summary of the Invention

There is a need in the art to increase the surface area of a lower electrode structure without increasing the occupation area of the capacitor structure. Further, it is desirable that such an increase in surface area not have one or more of the problems described above associated with the use of HSG. To overcome the problems described above, and others that will be readily apparent from the description below, rough conductive layers of ruthenium and/or ruthenium oxide are formed according to the present invention. For example, a rough conductive layer including ruthenium can be used as the lower or bottom electrode of a capacitor structure, increasing the surface area of the lower electrode without increasing the occupation area and without the need for HSG silicon formation. As HSG silicon is not used, there is less danger of silicon dioxide formation. Further, the use of a rough conductive layer of ruthenium and/or ruthenium oxide may reduce processing costs by eliminating the need for HSG silicon formation and possibly the formation of a diffusion barrier. A method for forming a rough conductive layer (e.g., a layer having an RMS surface roughness in a range of about 50 Å to about 600 Å) in the fabrication of integrated circuits according to the present invention includes providing a substrate assembly in a reaction chamber, the substrate assembly including a surface. The substrate assembly surface is maintained at a temperature in a range of about 100°C to about 400°C, and the pressure of the reaction chamber is maintained in a range of about 0.4 torr to about 10 torr.
A carrier gas at a flow rate of about 100 sccm to about 500 sccm is provided through a ruthenium-containing precursor maintained at a temperature of about 15°C to about 100°C and into the reaction chamber to deposit a rough ruthenium layer on the surface of the substrate assembly. In various embodiments of the method, the method may include providing a diluent gas at a flow rate of about 100 sccm to about 500 sccm into the reaction chamber; the substrate assembly surface may be maintained at a temperature in the range of about 150°C to about 250°C; the rough ruthenium layer may be deposited at a rate of about 100 Å/minute to about 500 Å/minute; and the method may further include annealing the rough ruthenium layer at a temperature in a range of about 300°C to about 900°C for a time period in a range of about 30 seconds to about 30 minutes. Further, the anneal may be performed in a gas atmosphere subjected to a glow discharge created by applying an electromagnetic field across the gas mixture. In another method according to the present invention for forming a rough conductive layer in the fabrication of integrated circuits, a substrate assembly including a surface is provided in a reaction chamber. A ruthenium-containing precursor is provided into the reaction chamber, and a rough ruthenium layer is deposited on the surface of the substrate assembly at a rate of about 100 Å/minute to about 500 Å/minute, preferably at a rate of about 200 Å/minute to about 300 Å/minute. Another method for forming a rough conductive layer (e.g., a layer having an RMS surface roughness in a range of about 50 Å to about 600 Å) in the fabrication of integrated circuits according to the present invention includes providing a substrate assembly having a surface into a reaction chamber. A ruthenium-containing precursor is provided into the reaction chamber along with an oxygen-containing precursor.
A rough ruthenium oxide layer is deposited on the surface of the substrate assembly at a rate of about 100 Å/minute to about 1200 Å/minute, preferably at a rate of about 300 Å/minute to about 600 Å/minute. In one embodiment of the method, the ruthenium-containing precursor is provided into the reaction chamber by providing a carrier gas at a flow rate of about 100 sccm to about 500 sccm through the ruthenium-containing precursor maintained at a temperature of about 15°C to about 100°C, and then into the reaction chamber. Further, the oxygen-containing precursor is provided into the reaction chamber at a flow rate of about 100 sccm to about 2000 sccm. In other embodiments of the method, the substrate assembly surface may be maintained at a temperature in a range of about 100°C to about 400°C, the pressure of the reaction chamber may be maintained in a range of about 0.4 torr to about 100 torr, and the rough ruthenium oxide layer may be annealed at a temperature in a range of about 300°C to about 900°C for a time period in a range of about 30 seconds to about 30 minutes. Further, the anneal may be done in a gas atmosphere subjected to a glow discharge created by applying an electromagnetic field across the gas mixture. A conductive structure according to the present invention includes at least a rough ruthenium layer. A surface of the rough ruthenium layer has a surface area greater than about 1.2 times the surface area of a completely smooth surface having a substantially identical shape as the surface of the rough ruthenium layer. Preferably, the surface area is greater than about 1.5 times the surface area of the completely smooth surface having the substantially identical shape as the surface of the rough ruthenium layer. In other embodiments of the conductive structure, the conductive structure may include non-rough ruthenium upon which the rough ruthenium is formed.
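The deposition windows recited above for the ruthenium and ruthenium oxide methods can be collected into a simple parameter table. The following Python sketch is illustrative only: the dictionary layout, parameter names, and the check_recipe helper are hypothetical conveniences and not part of the described process; the numeric windows are transcribed from the ranges stated above.

```python
# Parameter windows transcribed from the text above (illustrative sketch only;
# the names and the check_recipe() helper are hypothetical, not from the source).

RU_RECIPE_WINDOW = {
    "substrate_temp_C": (100, 400),
    "chamber_pressure_torr": (0.4, 10),
    "carrier_flow_sccm": (100, 500),
    "precursor_temp_C": (15, 100),
    "deposition_rate_A_per_min": (100, 500),
}

RUO2_RECIPE_WINDOW = {
    "substrate_temp_C": (100, 400),
    "chamber_pressure_torr": (0.4, 100),
    "carrier_flow_sccm": (100, 500),
    "precursor_temp_C": (15, 100),
    "oxygen_flow_sccm": (100, 2000),
    "deposition_rate_A_per_min": (100, 1200),
}

def check_recipe(recipe, window):
    """Return the names of parameters that fall outside the stated window."""
    return [name for name, value in recipe.items()
            if name in window and not window[name][0] <= value <= window[name][1]]

# A set point inside the preferred ruthenium ranges given above:
setpoint = {"substrate_temp_C": 200, "chamber_pressure_torr": 3,
            "carrier_flow_sccm": 200, "precursor_temp_C": 25,
            "deposition_rate_A_per_min": 250}
print(check_recipe(setpoint, RU_RECIPE_WINDOW))  # -> []
```

Note that the same 50 torr pressure that violates the ruthenium window (0.4 to 10 torr) is within the wider ruthenium oxide window (0.4 to 100 torr).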
Likewise, the conductive structure may include non-rough ruthenium oxide upon which the rough ruthenium is formed. Another conductive structure according to the present invention includes at least a rough ruthenium oxide layer. A surface of the rough ruthenium oxide layer has a surface area greater than about 1.2 times the surface area of a completely smooth surface having a substantially identical shape as the surface of the rough ruthenium oxide layer. Preferably, the surface area is greater than about 1.5 times the surface area of the completely smooth surface having the substantially identical shape as the surface of the rough ruthenium oxide layer. In other embodiments of the conductive structure, the conductive structure may include non-rough ruthenium upon which the rough ruthenium oxide is formed. In other embodiments of the present invention, the above methods and conductive structures may be formed as part of a capacitor structure. For example, the conductive structure may be formed as a first electrode or bottom electrode of a capacitor.

Brief Description of the Drawings

The present invention will be better understood from reading the following description of illustrative embodiments with reference to the attached drawings, wherein below:

Figure 1A generally shows a rough conductive layer of either ruthenium or ruthenium oxide formed on a substrate assembly according to the present invention.

Figure 1B is a detailed diagram of a portion of a surface of the rough conductive layer of Figure 1A.

Figures 2A-2D illustrate a multiple step method of forming a rough lower electrode according to the present invention for use in a capacitor structure.

Figure 3 is an illustrative diagram of a container capacitor structure using a rough lower electrode formed according to the present invention in a storage cell capacitor application.

Figures 4-5 are example ruthenium layers for illustrating the comparison of rough and non-rough ruthenium layers.
Figures 6-7 are example ruthenium oxide layers for illustrating the comparison of non-rough and rough ruthenium oxide layers.

Detailed Description of the Embodiments

The present invention shall be described generally with reference to Figures 1-2. Thereafter, an illustration of a capacitor structure application of the present invention shall be described with reference to Figure 3, and examples of forming rough ruthenium and rough ruthenium oxide layers are given with reference to Figures 4-7. Figure 1A shows a structure 10 including a substrate assembly 12 and a roughened conductive layer 14, i.e., a layer having a rough surface 19. The present invention describes methods of forming roughened conductive layers by chemical vapor deposition and annealing. Generally, the roughened conductive layer 14 is formed of ruthenium and/or ruthenium oxide. Roughened surfaces of conductive materials formed according to the present invention are particularly useful as a lower electrode of a capacitor structure for a memory device such as a DRAM. However, it should be understood that the methods of providing rough conductive layers, including rough ruthenium and/or ruthenium oxide layers, can be used in any application or structure in which a rough conductive layer would be useful. As used in this application, substrate assembly refers to either a semiconductor substrate, such as the base semiconductor layer (e.g., the lowest layer of a silicon material on a wafer, or a silicon layer deposited on another material, such as silicon on sapphire), or a semiconductor substrate having one or more layers or structures formed thereon or regions formed therein. When reference is made to a substrate assembly in the following description, various process steps may have been previously used to form or define regions, junctions, various structures or features, and openings such as vias, contact openings, high aspect ratio openings, etc.
For example, as used herein, substrate assembly may refer to a structure upon which a lower electrode of a capacitor structure is formed. It will be understood that the methods of the present invention are typically performed in chemical vapor deposition (CVD) chambers of the type used to process semiconductor wafers, although any equipment and method for depositing layers according to the present invention may be used. For example, the CVD processes described herein may be carried out in a chemical vapor deposition reactor, such as a reaction chamber available under the trade designation of 7000 from Genus, Inc. (Sunnyvale, CA), a reaction chamber available under the trade designation of 5000 from Applied Materials, Inc. (Santa Clara, CA), or a reaction chamber available under the trade designation of Prism from Novellus, Inc. (San Jose, CA). However, any reaction chamber suitable for performing CVD may be used. Chemical vapor deposition is defined as the formation of a nonvolatile solid film on a substrate by the reaction of vapor phase reactants, i.e., reacting gases, that contain the desired components. The reacting gases are introduced into the reaction chamber. The gas is decomposed and reacted at a heated wafer surface to form the desired layer. Chemical vapor deposition is just one process of providing thin layers on semiconductor wafers, such as films of elemental metals or compounds, e.g., platinum, ruthenium, ruthenium oxide, etc. Chemical vapor deposition processes are capable of providing highly conformal layers, even within deep contacts, container openings, and other openings. Thus, as described further below with reference to the figures, CVD processing is preferably used to provide highly conformal layers within openings, such as for lower electrodes of storage cell capacitors, e.g., container capacitors.
It will be readily apparent to one skilled in the art that although CVD is the preferred process, the CVD process may be enhanced by various related techniques, such as plasma assistance, photo assistance, and laser assistance, as well as other techniques. As used herein, the term "deposition temperature" will typically refer to the surface temperature of the substrate assembly or layer on which a material is being deposited; the term "flow rate," as used in connection with gas flow rates, will typically refer to the gas flow rate into the CVD reaction chamber; and the term "deposition pressure" will typically refer to the pressure in the CVD chamber. Further, it will be understood that as used in connection with the present invention, the term "annealing" may be performed in the CVD chamber and includes exposing a structure being formed to any combination of temperature and pressure, for a predetermined time, that will enhance the surface area of the rough conductive layer deposited. Such annealing may be performed in a gas atmosphere and with or without plasma enhancement. One preferred method of forming a rough conductive layer 14 is by depositing a ruthenium layer by CVD. The CVD process is conducted with a ruthenium-containing precursor being delivered to a reaction chamber. Diluent gases may also optionally be provided to the reaction chamber. The ruthenium-containing precursor may be a liquid or a solid at room temperature. Typically, however, such precursors are liquids. If they are solids, they are preferably sufficiently soluble in an organic solvent, or have melting points below their decomposition temperature, such that they can be used in flash vaporization, bubbling, microdroplet formation techniques, etc. However, they may also be sufficiently volatile that they can be vaporized or sublimed from the solid state using known chemical vapor deposition techniques. Thus, the precursor composition of the present invention can be in solid or liquid form.
As used herein,"liquid"refers to a solution or a neat liquid (a liquid at room temperature or a solid at room temperature that melts at an elevated temperature). As used herein, a"solution"does not require complete solubility of the solid ; rather, the solution may have some undissolved material. Preferably, however, there is a sufficient amount of the material that can be carried by the organic solvent into the vapor phase for chemical vapor deposition processing. Preferably, the ruthenium-containing precursor is generally a liquid precursor. The ruthenium-containing precursor may be, for example, tricarbonyl (1, 3-cyclohexadiene) Ru, (C11H190z) 2 (C8Hl2) Ru, or any other suitable ruthenium-containing precursor. If the ruthenium-containing precursor is a liquid, it may be delivered through use of bubbling techniques. Generally, the liquid precursor is contained in a bubble reservoir through which a carrier gas, such as helium or any other inert gas, i. e., a gas that is nonreactive with other gases in the process (e. g., nitrogen, argon, neon, and xenon) is passed. In other words, the carrier gas is bubbled through the reservoir containing the precursor to deliver the precursor to the reaction chamber. One skilled in the art will recognize that the manner in which the gases are introduced into the reaction chamber may include one of various techniques. For example, in addition to provision by bubbler techniques, the introduction may be accomplished with the use of compounds which are gases at room temperature or by heating a volatile compound and delivering the volatile compound to the reaction chamber using a carrier gas. Further, solid precursors and various methods of vaporizing such solid precursors may also be used for introduction of reactant compounds into the chamber. As such, the present invention is not limited to any particular technique. 
Further, typically, the reacting gases are admitted at separate inlet ports. In addition to the other gases provided to the reaction chamber, an optional diluent gas, i.e., a gas that is nonreactive with the reacting gases, may also be introduced into the chamber, such as to change the partial pressures of the gases therein. For example, argon or nitrogen may be introduced into the chamber at a varied flow rate. To achieve the desired roughness for the rough conductive layer 14 formed of ruthenium, a relatively high deposition rate is used. However, step coverage must be maintained at the high deposition rate. To maintain such step coverage, a high concentration of ruthenium-containing precursor must be provided to the reaction chamber. Preferably, the deposition rate for forming a rough ruthenium layer 14 while maintaining step coverage is a deposition rate in the range of about 100 Å/minute to about 500 Å/minute. More preferably, the deposition rate is in a range of about 200 Å/minute to about 300 Å/minute. Yet further, to maintain the step coverage with a high concentration of ruthenium-containing precursor provided to the reaction chamber, preferably, a flow rate of about 100 sccm to about 500 sccm of carrier gas (e.g., He, O2, or any other gas that is non-reactive with the precursor) through a ruthenium-containing precursor held in a bubbler reservoir at a temperature of about 15°C to about 100°C is provided to the chamber. More preferably, the flow rate of carrier gas through the ruthenium-containing precursor to the reaction chamber is at a rate in the range of about 150 sccm to about 250 sccm. Further, to achieve the desired higher deposition rate as described above, various other parameters of the CVD process may be varied. Preferably, the deposition pressure of the CVD process is in the range of about 0.4 torr to about 10 torr. More preferably, the pressure is in the range of about 2 torr to about 4 torr.
Further, the deposition temperature of the CVD process is preferably in a range of about 100°C to about 400°C. More preferably, the deposition temperature is in the range of about 150°C to about 250°C. Preferably, the CVD process is performed without any plasma enhancement. Further, preferably, a diluent gas is provided into the reaction chamber at a rate of about 100 sccm to about 500 sccm. Preferably, the diluent gas is one of nitrogen or argon. The flow rate of the ruthenium-containing precursor into the reaction chamber can be increased in a number of manners. For example, a higher bubbler temperature can be used when the ruthenium-containing precursor is provided to the reaction chamber through the use of a bubbler. However, preferably, the ruthenium-containing precursor is held at room temperature. Further, shorter and/or larger gas lines may also be used to increase the concentration of the ruthenium-containing precursor in the reaction chamber; e.g., a shorter gas line allows the pressure in the chamber holding the ruthenium-containing precursor to be lower, which increases the amount of precursor in the carrier gas. According to the present invention, the roughness of the ruthenium layer 14 is greater at increased deposition pressures within the range described above. Further, the roughness of the ruthenium layer 14 is greater at increased deposition temperatures. Such CVD parameters may be varied to attain a desired roughness within the ranges as described herein. The roughness of the surface of a rough ruthenium layer 14 useful in accordance with the present invention may be characterized in one or more different manners, as described below. One manner of characterizing a rough ruthenium layer is based on the RMS (root mean square) surface roughness of the rough ruthenium layer. Preferably, a rough ruthenium layer has an RMS surface roughness in a range of about 50 Å to about 600 Å.
RMS (root mean square) surface roughness may be determined using, for example, Atomic Force Microscopy (AFM), Scanning Tunneling Microscopy (STM), or Scanning Electron Microscopy (SEM), and is based on a statistical mean of an R-range, wherein the R-range is a range of the radius (r) (as shown in Figure 1B) of a grain size. The determination of RMS surface roughness is known to those skilled in the art. Alternatively, or in addition to other manners of characterizing the rough layer, the rough ruthenium layer may be characterized based on the cross-section grain size of the grains of the layer being deposited. Preferably, the nominal cross-section grain size is represented by the nominal diameter through the center of the grains. The nominal diameter for a rough ruthenium layer is preferably in the range of about 100 Å to about 800 Å. More preferably, the cross-section nominal diameter through the center is in the range of about 200 Å to about 500 Å. Alternatively, or in addition to other manners of characterizing the rough layer, a rough ruthenium surface may be characterized based on a comparison of the surface area of the rough ruthenium surface relative to the surface area of a completely smooth surface (i.e., a surface with no grain structure, e.g., valleys, peaks, etc.) having a substantially identical shape as the rough surface, e.g., the shape of the structure upon which the rough layer is deposited. Preferably, a rough surface (e.g., all or a portion of a conductive layer), wherein preferably the rough surface is a generally homogenous surface (i.e., a surface structure without any substantial irregularities from one part of the surface to another, such as, for example, deep depressions, large spikes, or unusually large grains compared to the other grains of the layer), has a surface area greater than about 1.2 times the surface area of a completely smooth surface having a substantially identical shape (i.e., substantially identical shapes having the same base dimensional characteristics; e.g., in the case of a planar surface, the occupancy areas of both the completely smooth and rough surfaces are equivalent). The surface shape may be a planar shape, a curved shape, a container-shaped structure such as in a container capacitor, or any other shape. More preferably, the roughness of the surface has a surface area that is greater than about 1.5 times the surface area of a completely smooth surface having a substantially identical shape. For example, as shown in Figure 1A, the rough surface 19 of conductive layer 14 has a generally planar shape. The surface area of the rough surface 19 of the conductive layer 14 can be compared to the surface area (XY) of a completely smooth surface having a planar shape, i.e., a shape identical to the shape of the rough surface 19. Therefore, preferably, the surface area of rough surface 19 of the conductive layer 14 is greater than about 1.2(XY). As shown in Figure 1B, the rough surface 19 includes regions 21, i.e., grains, projecting from the layer 14. As such, peaks and valleys are formed, e.g., peaks 23 and valleys 25. One skilled in the art will recognize that HSG silicon has grains with peaks and valleys similar to those of the formed rough surface 19. It is the valleys 25 which, when covered by a diffusion barrier layer in HSG techniques, tend to decrease the effectiveness of increasing the surface area using such HSG silicon. For example, with such valleys being filled by diffusion barrier material, they are no longer available to provide effective increased surface area. Since the rough ruthenium layer 14 does not necessarily require any other layer formation over the surface 19, such valleys are generally available to provide increased surface area, such as for increased capacitance in a capacitor structure. The grain size for ruthenium is typically less than about 800 Å in nominal diameter.
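The 1.2x and 1.5x surface-area figures above can be given some intuition with a simple geometric model. The sketch below is a hypothetical illustration, not taken from the text: it treats each grain as a hemisphere, so a grain of radius r exposes a cap area of 2*pi*r^2 over a footprint of pi*r^2, and a fraction f of the surface covered by such grains yields an enhancement of (1 - f) + 2f = 1 + f.

```python
# Hypothetical model (not from the source): hemispherical grains on a plane.
def enhancement_hemispherical(coverage_fraction):
    # Uncovered area counts once; each hemispherical grain's cap area
    # (2*pi*r^2) counts double relative to its flat footprint (pi*r^2).
    return (1 - coverage_fraction) + 2 * coverage_fraction

# Under this model, ~60% grain coverage already exceeds the preferred
# 1.5x enhancement quoted above:
print(enhancement_hemispherical(0.6))  # ~1.6
```

The model ignores grain shape variation and shadowing, but it shows why modest grain coverage is enough to exceed the 1.2x threshold.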
After deposition of the rough ruthenium layer 14, an optional anneal of the structure may be used to further enhance and/or increase the surface area at rough surface 19 of the rough ruthenium layer 14. For example, the cross-section grain size of the deposited ruthenium may grow by about 100 percent with use of an annealing process. Preferably, the anneal is performed at a pressure in the range of about 0.1 millitorr to about 5 atmospheres. More preferably, the anneal is performed at a pressure of about 1 torr to about 800 torr. Further, the anneal is performed at a temperature in the range of about 300°C to about 900°C. More preferably, the anneal is performed at a temperature in the range of about 500°C to about 700°C. The anneal is preferably performed for a time period of between 30 seconds and 30 minutes. Further, the anneal may be performed while the structure is present in a gas environment. Preferably, the gas environment is an atmosphere of oxygen, ozone, argon, nitrogen, etc., or any combination thereof, such as, for example, oxygen and nitrogen or oxygen and argon. The anneal may be a plasma enhanced anneal, wherein the gas atmosphere is subjected to a glow discharge created by applying an electromagnetic field across the gas. Use of a plasma process allows the structure to be kept at a somewhat lower temperature during the anneal while still achieving increased grain size. Any suitable power source may be used to generate the plasma in the reaction chamber. Suitable power sources, for example, include an RF generator, a microwave (e.g., 2.5 gigahertz microwave source) generator, or an electron cyclotron resonance (ECR) source. A preferred power source is an RF generator operating as a standard 13.56 MHz source.
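The "grow by about 100 percent" figure above amounts to a rough doubling of the cross-section grain diameter. A trivial helper makes the arithmetic explicit; the function name and percentage parameterization are illustrative conveniences, not from the source.

```python
# Illustrative arithmetic only: "grow by about 100 percent" above means the
# cross-section grain diameter roughly doubles during the optional anneal.
def annealed_grain_diameter_A(initial_diameter_A, growth_percent=100):
    return initial_diameter_A * (1 + growth_percent / 100)

# A 250 A as-deposited grain after a ~100% growth anneal:
print(annealed_grain_diameter_A(250))  # -> 500.0
```

Note that a doubled diameter (e.g., 250 Å to 500 Å) still sits inside the preferred 200 Å to 500 Å range quoted earlier for rough ruthenium grains.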
Preferably, the gas is subjected to the glow discharge or plasma created by applying the radio frequency electromagnetic field of 13.56 MHz at a power density of 5 kilowatts/cm2 or less across the gas. It is preferable that plasma enhancement be used so that the interaction between the ruthenium-containing layer and the underlying layer, e.g., silicon, is minimized. However, plasma enhancement is an optional annealing technique. The anneal may be performed as a furnace anneal, or a rapid thermal processing (RTP) anneal may be used. Further, such anneals may be performed in one or more annealing steps within the time periods, temperature ranges, and other parameters set forth above. Another preferred method of forming a rough conductive layer 14 is by depositing a rough ruthenium oxide layer by CVD. The CVD process is conducted with a ruthenium-containing precursor being delivered to a reaction chamber along with an oxygen-containing precursor being delivered to the reaction chamber. Further, diluent gases may also optionally be provided to the reaction chamber. Preferably, the oxygen-containing precursor is oxygen, ozone, N2O, CO, CO2, or any other suitable oxygen-containing precursor. To achieve the desired roughness for the rough conductive layer 14 formed of ruthenium oxide, a relatively high deposition rate is used. However, step coverage must be maintained at the high deposition rate. To maintain such step coverage, a high concentration of ruthenium-containing precursor and oxygen-containing precursor must be provided to the reaction chamber. Preferably, the deposition rate for forming a rough ruthenium oxide layer 14 while maintaining step coverage is a deposition rate in the range of about 100 Å/minute to about 1200 Å/minute. More preferably, the deposition rate is in a range of about 300 Å/minute to about 600 Å/minute.
Yet further, to maintain the step coverage with a high concentration of ruthenium-containing precursor provided to the reaction chamber, preferably, a flow rate of about 100 sccm to about 500 sccm of carrier gas (e.g., He, O2, or any other gas that is non-reactive with the precursor) through a ruthenium-containing precursor held in a bubbler reservoir at a temperature of about 15°C to about 100°C is provided to the chamber. More preferably, the flow rate of the carrier gas is at a rate in the range of about 200 sccm to about 300 sccm. Further, preferably, a flow rate of about 100 sccm to about 2000 sccm of the oxygen-containing precursor is provided to the chamber. More preferably, the flow rate of the oxygen-containing precursor to the reaction chamber is at a rate in the range of about 500 sccm to about 1000 sccm. Further, to achieve the desired higher deposition rate for ruthenium oxide as described above, various other parameters of the CVD process may be varied. Preferably, the deposition pressure of the CVD process is in the range of about 0.4 torr to about 100 torr. More preferably, the pressure is in the range of about 1 torr to about 10 torr. Further, the deposition temperature of the CVD process is preferably in a range of about 100°C to about 400°C. More preferably, the deposition temperature is in the range of about 100°C to about 250°C. Preferably, the CVD process is performed without any plasma enhancement. Further, preferably, a diluent gas is provided into the reaction chamber at a rate of about 100 sccm to about 1000 sccm. Preferably, the diluent gas is one of nitrogen or argon. According to the present invention, the roughness of the ruthenium oxide layer 14 is greater at increased deposition pressures within the range described above. Further, the roughness of the ruthenium oxide layer 14 is greater at increased deposition temperatures. Such CVD parameters may be varied to attain a desired roughness within the ranges as described herein.
The roughness of the rough ruthenium oxide layer useful in accordance with the present invention may be characterized in one or more different manners, just like the rough ruthenium layer described above. The same ranges for the same roughness characteristics set forth above for the ruthenium layer are applicable as well to the ruthenium oxide layers. After deposition of the rough ruthenium oxide layer 14, an optional anneal of the structure may be used to further enhance and/or increase the surface area at rough surface 19 of the rough ruthenium oxide layer 14. Substantially the same parameters and ranges as set forth above for the ruthenium layer are applicable as well to formation of the ruthenium oxide layer. When a rough ruthenium or ruthenium oxide layer is formed as the lower electrode of a capacitor structure, the thickness of the rough ruthenium lower electrode or the rough ruthenium oxide lower electrode is generally in the range of about 100 Å to about 600 Å. One skilled in the art will recognize that the rough ruthenium oxide and/or ruthenium layers described above may be used in addition to other layers of a structure. For example, an electrode may be a multi-layer electrode formed of other metals with an upper layer formed of a rough ruthenium layer or a rough ruthenium oxide layer. Further, for example, such a layer used with the rough ruthenium layer or rough ruthenium oxide layer may be a barrier layer, as described below with reference to Figure 3. Although the rough ruthenium layer and/or the rough ruthenium oxide layer described above may be used for numerous applications, e.g., interconnection applications, capacitor applications, etc., the present invention is useful when forming layers in small, high aspect ratio openings. As described herein, small, high aspect ratio openings have feature sizes or critical dimensions below about 1 micron (e.g., a diameter or width of an opening being less than about 1 micron) and aspect ratios greater than about 1. Such aspect ratios are applicable to contact holes, vias, trenches, and any other configured openings, such as container or trench openings for formation of capacitor structures. For example, a trench having an opening of 1 micron and a depth of 3 microns has an aspect ratio of 3. Figures 2A-2D illustrate a method of forming a lower electrode for a container structure according to a multiple step method of the present invention. The lower electrode 33 of capacitor structure 37, as shown in Figure 2D, is formed using a rough conductive layer according to the present invention, e.g., such as those described with reference to Figure 1. The lower electrode 33 is preferably formed according to two steps. However, more than two steps may be used if additional layers are desired. Preferably, the two steps are used to form a ruthenium layer that is not rough, over which a rough ruthenium oxide layer or a rough ruthenium layer is formed. Likewise, preferably, the two steps may be used to form a ruthenium oxide layer that is not rough, over which a rough ruthenium layer or a rough ruthenium oxide layer is formed. For simplicity purposes, the multiple step method according to the present invention shall be described with reference to Figures 2A-2D with formation of a container capacitor structure wherein, first, a ruthenium layer that is not rough is formed, and thereafter a rough ruthenium layer is formed according to the previous description herein. The other possible combinations of layers and formation thereof will be readily apparent from reading this simplified description using a non-rough ruthenium layer and a rough ruthenium layer. It will be recognized that the multiple ruthenium-containing layers may be described as a single layer graded in grain size, e.g., rough ruthenium formed over non-rough ruthenium, as opposed to multiple layers.
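The aspect-ratio example and the electrode thicknesses and deposition rates given above are simple quotients. The sketch below just makes that arithmetic explicit; the helper names are illustrative conveniences, not from the source.

```python
def aspect_ratio(depth_um, opening_um):
    # Aspect ratio as used above: feature depth divided by its opening width.
    return depth_um / opening_um

def deposition_time_min(thickness_A, rate_A_per_min):
    # Time to deposit a target thickness at a constant deposition rate.
    return thickness_A / rate_A_per_min

# The trench example from the text: 1 micron opening, 3 micron depth.
print(aspect_ratio(3.0, 1.0))         # -> 3.0
# A 600 A rough lower electrode at the preferred 200 A/minute ruthenium rate.
print(deposition_time_min(600, 200))  # -> 3.0
```

At the preferred rough-ruthenium rates (200 to 300 Å/minute), even the thickest 600 Å electrode deposits in two to three minutes.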
In other words, for example, as opposed to forming distinct layers, non-rough ruthenium is formed over which rough ruthenium is deposited without a particular layer transition. As such, the entire graded layer may be formed in a single continuous step, e.g., with a change in process parameters. Figure 2A shows a substrate assembly 30 which includes a first substrate portion 32 and a second substrate portion 34. Substrate portion 34 is formed on substrate portion 32 and includes an opening 36 defined therein by a bottom surface 42 of first substrate portion 32 and one or more side walls 40 of second substrate portion 34. The second portion 34 of substrate assembly 30 includes a region to which a lower electrode of capacitor structure 37 is electrically connected. The second portion 34 of the substrate assembly 30 is an insulative layer such as an oxide layer, e.g., silicon dioxide, BPSG, PSG, etc. As such, opening 36, defined in substrate assembly 30 by bottom surface 42 and the one or more side walls 40, includes surfaces upon which a bottom lower electrode for a storage cell capacitor is formed, such as for use in a memory cell. Such a container capacitor is also described further herein with reference to Figure 3. The capacitor structure 37 is formed with a rough lower electrode 33 as illustrated in Figures 2A-2D by first forming an optional barrier layer 38 in the defined opening 36 and on surfaces such as upper surface 39. For example, such a barrier layer may have a thickness of about 50 Å to about 300 Å. One example of a barrier layer includes the formation of a titanium nitride layer having a thickness of about 100 Å to about 200 Å. Preferably, according to the present invention, the barrier layer and the other layers herein are deposited using CVD processes such that surfaces within the defined opening 36 and at various other portions of the structure, such as corners 43, are conformally covered with the material being deposited, providing good step coverage.
After formation of the barrier layer 38, a ruthenium layer 46 is deposited as shown in Figure 2B. The ruthenium layer 46 is formed by CVD processing under conditions necessary to form a layer that is not rough. For example, such conditions will include at least one condition that is different than the conditions used to form a rough ruthenium layer as previously described herein. Preferably, according to the present invention, a non-rough ruthenium layer may be formed using conditions in the following ranges: a flow rate of about 100 sccm to about 500 sccm of carrier gas (e.g., He, O2, or any other gas that is non-reactive with the precursor) through a ruthenium-containing precursor held in a bubbler reservoir at a temperature of about 15°C to about 100°C; a deposition pressure in the range of about 0.4 torr to about 10 torr; a deposition temperature in a range of about 100°C to about 400°C; and a diluent gas (e.g., nitrogen and/or argon) provided into the reaction chamber at a rate of about 100 sccm to about 500 sccm. Thereafter, by changing only one or more conditions of the deposition process (with no additional precursors being required), a rough ruthenium layer 50 is deposited over the ruthenium layer 46 as shown in Figure 2C. For example, the ruthenium layer 46 may have a thickness in the range of about 50 Å to about 300 Å, and the rough ruthenium layer 50 may have a thickness in the range of about 100 Å to about 500 Å. The rough ruthenium layer 50 is formed according to the present invention as previously described herein. When the combination of layers of rough lower electrode 33 includes a ruthenium oxide layer that is not rough, the non-rough ruthenium oxide layer is generally deposited within the following condition ranges: a flow rate of about 100 sccm to about 500 sccm of carrier gas (e.
g., He, O2, or any other gas that is non-reactive with the precursor) through a ruthenium-containing precursor held in a bubbler reservoir at a temperature of about 15°C to about 100°C; a flow rate of about 100 sccm to about 2000 sccm of the oxygen-containing precursor; a deposition pressure in the range of about 0.4 torr to about 100 torr; a deposition temperature in a range of about 100°C to about 400°C; and a diluent gas (e.g., nitrogen and/or argon) at a rate of about 100 sccm to about 1000 sccm. After the optional anneal for the rough ruthenium layer 50, the resultant structure is as shown in Figure 2C. The layers for forming the lower electrode 33 are then planarized to the upper surface 39 of second portion 34 of substrate assembly 30 such that the opening 36 is lined with the rough conductive electrode 33. Thereafter, as shown in Figure 2D, a dielectric layer 52 is formed relative to the rough conductive electrode 33. For example, the dielectric layer may be any suitable material having a suitable dielectric constant. Preferably, a suitable dielectric material is a high dielectric constant material, such as those materials having a dielectric constant of greater than about 25. For example, a suitable high dielectric constant material for forming dielectric layer 52 may include, but is clearly not limited to, tantalum pentoxide (Ta2O5), BaSrTiO3 [BST], BaTiO3, SrTiO3, PbTiO3, Pb(Zr,Ti)O3 [PZT], (Pb,La)(Zr,Ti)O3 [PLZT], (Pb,La)TiO3 [PLT], KNO3, and LiNbO3. Further, after formation of the dielectric layer 52, a second electrode 54 is formed relative to the dielectric material 52. For example, the second electrode 54 may be formed of a material such as tungsten nitride, titanium nitride, tantalum nitride, platinum metals and alloys thereof, or any suitable electrode material including ruthenium and/or ruthenium oxide. Such a dielectric layer 52 and top electrode material 54 are then etched to form the desired capacitor structure 37.
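The condition ranges given above for non-rough ruthenium and non-rough ruthenium oxide deposition can be captured as a simple parameter check. The following is an illustrative sketch, not part of the patent disclosure: the numeric ranges are taken from the text, while the function and parameter names are assumptions made for the example.

```python
# Illustrative check of CVD recipe parameters against the condition ranges
# stated above for NON-ROUGH ruthenium and NON-ROUGH ruthenium oxide
# deposition. Ranges are (min, max) in the units used in the text.
NON_ROUGH_RU = {
    "carrier_flow_sccm": (100, 500),   # carrier gas through Ru precursor
    "bubbler_temp_c": (15, 100),
    "pressure_torr": (0.4, 10),
    "deposition_temp_c": (100, 400),
    "diluent_flow_sccm": (100, 500),
}

NON_ROUGH_RUO2 = {
    "carrier_flow_sccm": (100, 500),
    "bubbler_temp_c": (15, 100),
    "oxygen_flow_sccm": (100, 2000),   # oxygen-containing precursor
    "pressure_torr": (0.4, 100),
    "deposition_temp_c": (100, 400),
    "diluent_flow_sccm": (100, 1000),
}

def out_of_range(params, ranges):
    """Return the names of parameters falling outside the stated ranges."""
    return [name for name, (lo, hi) in ranges.items()
            if not lo <= params.get(name, lo) <= hi]

# Example: the non-rough Ru recipe of Example 2 below (1.5 torr, 250 C)
recipe = {"carrier_flow_sccm": 200, "bubbler_temp_c": 25,
          "pressure_torr": 1.5, "deposition_temp_c": 250,
          "diluent_flow_sccm": 300}
print(out_of_range(recipe, NON_ROUGH_RU))  # → []
```

Recall that, per the text, roughness is obtained by changing one or more of these conditions with no additional precursors required, so the same parameter names would apply to a rough-layer recipe with different ranges.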
A more specific illustration of using the above-described processes is described below with reference to Figure 3, wherein a rough conductive lower electrode 187 is formed according to one of the processes described herein for a high dielectric capacitor of a storage cell. There are other semiconductor processes and structures for various devices, e.g., CMOS devices, memory devices, etc., that would benefit from the present invention, and in no manner is the present invention limited to the illustrative embodiments described herein, e.g., an electrode structure. As shown in Figure 3, a device structure 100 is fabricated in accordance with conventional processing techniques through the formation of an opening 184. Such processing is performed prior to depositing a bottom electrode structure 187 on the surfaces defining the opening 184 using the methods in accordance with the present invention. Although any of the methods described previously herein may be used to form the bottom electrode structure 187 on the surfaces defining the opening 184, for simplicity, this particular illustration shall be described only with the use of a single rough ruthenium layer. However, one skilled in the art will recognize that any of the single or multiple step electrode formation processes described herein may be used to form the bottom electrode structure 187. As such, and as further described in U.S. Patent No. 5,392,189 to Fazan et al., entitled "Capacitor Compatible with High Dielectric Constant Materials Having Two Independent Insulative Layers and the Method for Forming Same," issued February 21, 1995, the device structure 100 includes field oxide regions 105 and active regions, i.e., those regions of the substrate 107 not covered by field oxide. A word line 121 and a field effect transistor (FET) 122 are formed relative to the field oxide 105. Suitable source/drain regions 125, 130 are created in silicon substrate 107.
An insulative conformal layer of oxide material 140 is formed over regions of FET 122 and word line 121. A polysilicon plug 165 is formed to provide electrical communication between substrate 107 and a storage cell capacitor to be formed thereover. Various barrier layers are formed over the polysilicon plug 165, such as, for example, layers 167 and 175. For example, such layers may be titanium nitride, tungsten nitride, or any other metal nitride which acts as a barrier. Thereafter, another insulative layer 183 is formed and the opening 184 is defined therein. According to one embodiment of the present invention, a rough ruthenium layer is formed according to the present invention on the structure including bottom surface 185 and the one or more side walls 186 defining opening 184. The roughened ruthenium layer is then planarized or etched back, resulting in the rough ruthenium layer 187 lining the opening 184. A dielectric layer 191 formed of material such as described above is then formed relative to the rough ruthenium layer 187. Further, thereafter, a second electrode 192 is formed relative to the dielectric material 191. In each of the following Examples, no anneal was performed on the films formed.

Example 1
A rough ruthenium layer as shown in Figure 4 was formed on an HF cleaned silicon wafer in a single wafer reaction chamber under the following conditions:
- a flow rate of about 200 sccm of helium carrier gas through a tricarbonyl (1,3-cyclohexadiene) Ru precursor held in a bubbler reservoir at room temperature (i.e., about 25°C);
- a deposition pressure of 3.0 torr; and
- a deposition temperature of 225°C.

Example 2
A non-rough ruthenium layer as shown in Figure 5 was formed on HF cleaned BPSG of a silicon wafer in a single wafer reaction chamber under the following conditions:
- a flow rate of about 200 sccm of helium carrier gas through a tricarbonyl (1,3-cyclohexadiene) Ru precursor held in a bubbler reservoir at room temperature (i.
e., about 25°C);
- a deposition pressure of 1.5 torr; and
- a deposition temperature of 250°C.

Example 3
A ruthenium oxide layer that is not rough, as shown in Figure 6, was formed on an HF cleaned silicon wafer in a single wafer reaction chamber under the following conditions:
- a flow rate of about 225 sccm of helium carrier gas through a tricarbonyl (1,3-cyclohexadiene) Ru precursor held in a bubbler reservoir at room temperature (i.e., about 25°C);
- a flow rate of about 250 sccm of oxygen gas;
- a deposition pressure of 2.5 torr; and
- a deposition temperature of 210°C.

Example 4
A rough ruthenium oxide layer as shown in Figure 7 was formed on an HF cleaned silicon wafer in a single wafer reaction chamber under the following conditions:
- a flow rate of about 200 sccm of helium carrier gas through a tricarbonyl (1,3-cyclohexadiene) Ru precursor held in a bubbler reservoir at room temperature (i.e., about 25°C);
- a flow rate of about 100 sccm of oxygen gas;
- a deposition pressure of 3 torr; and
- a deposition temperature of 200°C.

All patents and references cited herein are incorporated in their entirety as if each were incorporated separately. This invention has been described with reference to illustrative embodiments and is not meant to be construed in a limiting sense. As described previously, one skilled in the art will recognize that various other illustrative applications may utilize the rough ruthenium-containing layers as described herein. Various modifications of the illustrative embodiments, as well as additional embodiments of the invention, will be apparent to persons skilled in the art upon reference to this description. It is therefore contemplated that the appended claims will cover any such modifications or embodiments that may fall within the scope of the present invention as defined by the accompanying claims.
Aspects include apparatuses and methods for secure, fast, and normal virtual interrupt direct assignment, managing secure and non-secure, virtual and physical interrupts for a processor having a plurality of execution environments, including a trusted (secure) and a non-secure execution environment. An interrupt controller may identify a security group value for an interrupt and direct secure interrupts to the trusted execution environment. The interrupt controller may identify a direct assignment value for the non-secure interrupts indicating whether the non-secure interrupt is owned by a high level operating system (HLOS) Guest or a virtual machine manager (VMM), and whether it is a fast or a normal virtual interrupt. The interrupt controller may direct the HLOS Guest owned interrupt to the HLOS Guest while bypassing the VMM. When the HLOS Guest is unavailable, the interrupt may be directed to the VMM, which attempts to pass the interrupt to the HLOS Guest until successful.
CLAIMS
What is claimed is:
1. A method for assigning one or more interrupts in a computing device, comprising:
routing the interrupt to a trusted execution environment when a configuration of an interrupt identifier indicates an associated security level;
correlating an interrupt direct assignment value with the interrupt, wherein the interrupt direct assignment value indicates an owner of the interrupt;
routing the interrupt to a high level operating system guest virtual machine as a fast virtual interrupt or a normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt; and
routing the interrupt to a virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt.
2. The method of claim 1, further comprising checking for an available spot in an interrupt list when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt, wherein routing the interrupt to the high level operating system guest virtual machine comprises:
routing the interrupt to the high level operating system guest virtual machine when there is the available spot in the interrupt list, bypassing the virtual machine monitor; and
routing the interrupt to the virtual machine monitor when the interrupt list is occupied.
3. The method of claim 2, further comprising disabling correlating the interrupt direct assignment value to the interrupt when the interrupt list is occupied.
4.
The method of claim 1, wherein:
the interrupt direct assignment value further indicates a priority of the interrupt;
routing the interrupt to the high level operating system guest virtual machine comprises routing the interrupt as a virtual interrupt corresponding to a physical interrupt, the virtual interrupt having a virtual interrupt identification being the same as a physical interrupt identification of the corresponding physical interrupt; and
routing the interrupt to the virtual machine monitor comprises routing the interrupt as the physical interrupt.
5. The method of claim 4, wherein:
the priority of the interrupt comprises a fast interrupt and a normal interrupt;
routing the interrupt to the high level operating system guest virtual machine further comprises:
routing the interrupt to a first interrupt interface dedicated for fast virtual interrupts when the interrupt is the fast interrupt; and
routing the interrupt to a second interrupt interface dedicated for normal virtual interrupts when the interrupt is the normal interrupt.
6.
The method of claim 1, wherein the configuration of the interrupt identifier comprises an interrupt security group value, the method further comprising:
correlating the interrupt security group value with the interrupt, wherein the interrupt security group value indicates an interrupt type; and
determining whether the interrupt is a secure interrupt type or a non-secure interrupt type,
wherein routing the interrupt to the trusted execution environment when the configuration of the interrupt identifier indicates the associated security level comprises routing the interrupt to the trusted execution environment on a processor when the interrupt security group value indicates the interrupt is of the secure interrupt type, and
wherein correlating the interrupt direct assignment value with the interrupt comprises correlating the interrupt direct assignment value with the interrupt when the interrupt security group value indicates the interrupt is of the non-secure interrupt type.
7. The method of claim 6, wherein:
routing the interrupt to the high level operating system guest virtual machine as the fast virtual interrupt or the normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt comprises routing the interrupt to a normal execution environment on the processor; and
routing the interrupt to the virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt comprises routing the interrupt to the normal execution environment on the processor.
8.
A computing device, comprising:
a first processor configured to run a high level operating system guest virtual machine;
a second processor configured to run a virtual machine monitor;
an interrupt direct assignment control register configured to store a direct assignment control value;
an interrupt direct assignment register configured to store interrupt direct assignment values of interrupts; and
an interrupt distributor coupled to the interrupt direct assignment control register, the interrupt direct assignment register, the first processor, and the second processor, wherein the interrupt distributor is configured to perform operations comprising:
routing the interrupt to a trusted execution environment when a configuration of an interrupt identifier indicates an associated security level;
correlating an interrupt direct assignment value with the interrupt, wherein the interrupt direct assignment value indicates an owner of the interrupt;
routing the interrupt to the high level operating system guest virtual machine as a fast virtual interrupt or a normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt; and
routing the interrupt to the virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt.
9.
The computing device of claim 8, further comprising an interrupt list register connected to the interrupt distributor and configured to hold interrupt information, including an interrupt identifier,
wherein the interrupt distributor is further configured to perform operations comprising checking for an available spot in the interrupt list register when the interrupt direct assignment value is configured to indicate the high level operating system guest is the owner of the interrupt, and
wherein routing the interrupt to the high level operating system guest virtual machine comprises:
routing the interrupt to the high level operating system guest virtual machine when there is the available spot in the interrupt list, bypassing the virtual machine monitor; and
routing the interrupt to the virtual machine monitor when the interrupt list is occupied.
10. The computing device of claim 9, further comprising a control interface configured to connect the second processor and the interrupt direct assignment control register, wherein the second processor is further configured to perform operations comprising disabling the interrupt direct assignment register when the interrupt list register is occupied.
11. The computing device of claim 8, wherein:
the interrupt direct assignment value is further configured to indicate a priority of the interrupt; and
the interrupt distributor is further configured to perform operations comprising:
routing the interrupt to the high level operating system guest virtual machine as a virtual interrupt corresponding to a physical interrupt, the virtual interrupt having a virtual interrupt identification being the same as a physical interrupt identification of the corresponding physical interrupt; and
routing the interrupt to the virtual machine monitor as the physical interrupt.
12.
The computing device of claim 11, further comprising:
a first interrupt interface connected to the interrupt distributor and the first processor, and configured to route the interrupt to the first processor; and
a second interrupt interface connected to the interrupt distributor and the first processor, and configured to route the interrupt to the first processor;
wherein:
the priority of the interrupt comprises a fast interrupt and a normal interrupt; and
the interrupt distributor is further configured to perform operations such that routing the interrupt to the high level operating system guest virtual machine comprises routing the fast interrupt to the first interrupt interface, wherein the first interrupt interface is dedicated for fast virtual interrupts, and routing the normal interrupt to the second interrupt interface, wherein the second interrupt interface is dedicated for normal virtual interrupts.
13. The computing device of claim 8, wherein:
the configuration of the interrupt identifier comprises an interrupt security group value;
the interrupt distributor is further configured to perform operations comprising:
correlating the interrupt security group value with the interrupt, wherein the interrupt security group value indicates an interrupt type; and
determining whether the interrupt is a secure interrupt type or a non-secure interrupt type;
the interrupt distributor is further configured to perform operations such that routing the interrupt to the trusted execution environment when the configuration of the interrupt identifier indicates the associated security level comprises routing the interrupt to the trusted execution environment of the first processor or the second processor when the interrupt security group value indicates the interrupt is of the secure interrupt type; and
the interrupt distributor is further configured to perform operations such that correlating the interrupt direct assignment value with the interrupt comprises correlating the interrupt direct
assignment value with the interrupt when the interrupt security group value indicates the interrupt is of the non-secure interrupt type.
14. The computing device of claim 13, wherein the interrupt distributor is further configured to perform operations such that:
routing the interrupt to the high level operating system guest virtual machine as the fast virtual interrupt or the normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt comprises routing the interrupt to a normal execution environment of the first processor; and
routing the interrupt to the virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt comprises routing the interrupt to the normal execution environment of the second processor.
15. A computing device, comprising:
means for routing an interrupt to a trusted execution environment when a configuration of an interrupt identifier indicates an associated security level;
means for correlating the interrupt direct assignment value with the interrupt, wherein the interrupt direct assignment value indicates an owner of the interrupt;
means for routing the interrupt to a high level operating system guest virtual machine as a fast virtual interrupt or a normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt; and
means for routing the interrupt to a virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt.
16.
The computing device of claim 15, further comprising means for checking for an available spot in an interrupt list when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt,
wherein means for routing the interrupt to the high level operating system guest virtual machine comprises:
means for routing the interrupt to the high level operating system guest virtual machine when there is the available spot in the interrupt list, bypassing the virtual machine monitor; and
means for routing the interrupt to the virtual machine monitor when the interrupt list is occupied.
17. The computing device of claim 16, further comprising means for disabling the means for correlating the interrupt direct assignment value to the interrupt when the interrupt list is occupied.
18. The computing device of claim 15, wherein:
the interrupt direct assignment value further indicates a priority of the interrupt;
means for routing the interrupt to the high level operating system guest virtual machine comprises means for routing the interrupt as a virtual interrupt corresponding to a physical interrupt, the virtual interrupt having a virtual interrupt identification being the same as a physical interrupt identification of the corresponding physical interrupt; and
means for routing the interrupt to the virtual machine monitor comprises means for routing the interrupt as the physical interrupt.
19. The computing device of claim 18, wherein:
the priority of the interrupt comprises a fast interrupt and a normal interrupt; and
means for routing the interrupt to the high level operating system guest virtual machine further comprises:
means for routing the fast interrupt to a first interrupt interface dedicated for fast virtual interrupts; and
means for routing the normal interrupt to a second interrupt interface dedicated for normal virtual interrupts.
20.
The computing device of claim 15, wherein the configuration of the interrupt identifier comprises an interrupt security group value, the computing device further comprising:
means for correlating an interrupt security group value with the interrupt, wherein the interrupt security group value indicates an interrupt type; and
means for determining whether the interrupt is a secure interrupt type or a non-secure interrupt type,
wherein means for routing the interrupt to the trusted execution environment when the configuration of the interrupt identifier indicates the associated security level comprises means for routing the interrupt to the trusted execution environment on a processor when the interrupt security group value indicates the interrupt is of the secure interrupt type, and
wherein means for correlating the interrupt direct assignment value with the interrupt comprises means for correlating the interrupt direct assignment value with the interrupt when the interrupt security group value indicates the interrupt is of the non-secure interrupt type.
21. The computing device of claim 20, wherein:
means for routing the interrupt to the high level operating system guest virtual machine as the fast virtual interrupt or the normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt comprises means for routing the interrupt to a normal execution environment on the processor; and
means for routing the interrupt to the virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt comprises means for routing the interrupt to the normal execution environment on the processor.
TITLE
Secure, Fast and Normal Virtual Interrupt Direct Assignment in a Virtualized Interrupt Controller in a Mobile System-On-Chip

BACKGROUND
[0001] Modern interrupt controllers typically are designed to support device virtualization with the assumption that there is a scheduler or scheduler architecture whereby the Virtual Machine Monitor (VMM) software traps every interrupt in order to make a scheduling decision regarding the high level operating system (HLOS) Guest to which the incoming interrupt should be routed. The VMM software routes the physical interrupt to the selected HLOS Guest as a virtual interrupt signal. The overhead associated with this VMM software routing step has been known to slow down interrupt response time.
[0002] The mobile phone market sometimes deploys device virtualization as an access control infrastructure for a single guest HLOS, or as a virtualization solution with a small number of guest HLOS instances (typically two). It is common in the mobile device virtualization environment that the interrupts, if not owned by the VMM, are owned by the current HLOS Guest.
It is also common that the access control requirements allow the virtual processor identifier and virtual interrupt identifier to stay the same as the physical processor identifier and the physical interrupt identifier, respectively.

SUMMARY
[0003] The methods and apparatuses of various aspects provide circuits and methods for assigning one or more interrupts in a computing device, including routing the interrupt to a trusted execution environment when a configuration of an interrupt identifier indicates an associated security level; correlating an interrupt direct assignment value with the interrupt, in which the interrupt direct assignment value indicates an owner of the interrupt; routing the interrupt to a high level operating system guest virtual machine as a fast virtual interrupt or a normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt; and routing the interrupt to a virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt.
[0004] An aspect method may further include checking for an available spot in an interrupt list when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt, in which routing the interrupt to the high level operating system guest virtual machine includes routing the interrupt to the high level operating system guest virtual machine, bypassing the virtual machine monitor, when there is the available spot in the interrupt list, and routing the interrupt to the virtual machine monitor when the interrupt list is occupied.
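The routing policy described in paragraph [0003] can be sketched as a simple decision function. This is an illustrative model only, not the hardware implementation; the constant and function names are assumptions made for the sketch.

```python
# Illustrative model of the interrupt routing decision described above:
# a secure interrupt goes to the trusted execution environment; a
# non-secure interrupt is routed by its direct assignment value either
# directly to the HLOS Guest (bypassing the VMM) as a fast or normal
# virtual interrupt, or to the VMM as a physical interrupt.
SECURE, NON_SECURE = "secure", "non-secure"
HLOS_GUEST, VMM = "hlos_guest", "vmm"

def route_interrupt(security_group, owner, fast=False):
    """Return (destination, delivery form) for one incoming interrupt."""
    if security_group == SECURE:
        return ("trusted_execution_environment", "secure interrupt")
    if owner == HLOS_GUEST:
        # Delivered directly as a virtual interrupt whose identifier stays
        # the same as the physical interrupt identifier; the VMM is bypassed.
        form = "fast virtual interrupt" if fast else "normal virtual interrupt"
        return ("hlos_guest", form)
    # VMM-owned interrupts are delivered as physical interrupts.
    return ("vmm", "physical interrupt")

print(route_interrupt(NON_SECURE, HLOS_GUEST, fast=True))
# → ('hlos_guest', 'fast virtual interrupt')
```

The point of the direct path is the one noted in the Background: because the VMM no longer traps every guest-owned interrupt, the software routing overhead is removed from the common case.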
An aspect method may further include disabling correlating the interrupt direct assignment value to the interrupt when the interrupt list is occupied.
[0005] In an aspect method, the interrupt direct assignment value further indicates a priority of the interrupt, in which routing the interrupt to the high level operating system guest virtual machine includes routing the interrupt as a virtual interrupt corresponding to a physical interrupt, the virtual interrupt having a virtual interrupt identification being the same as a physical interrupt identification of the corresponding physical interrupt, and in which routing the interrupt to the virtual machine monitor includes routing the interrupt as the physical interrupt. In an aspect method in which the priority of the interrupt comprises a fast interrupt and a normal interrupt, routing the interrupt to the high level operating system guest virtual machine further includes routing the interrupt to a first interrupt interface dedicated for fast virtual interrupts when the interrupt is the fast interrupt, and routing the interrupt to a second interrupt interface dedicated for normal virtual interrupts when the interrupt is the normal interrupt.
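The interrupt-list fallback of paragraph [0004] can likewise be sketched. The interrupt list register is modeled here as a fixed-size Python list, and the register size, names, and state dictionary are assumptions made for the example, not details from the disclosure.

```python
# Illustrative sketch of the fallback described above: a guest-owned
# interrupt is delivered directly only when the interrupt list register
# has a free spot; otherwise it is routed to the VMM, and direct
# assignment is disabled until the list can be drained.
class InterruptList:
    """Fixed-size model of an interrupt list register (size is assumed)."""
    def __init__(self, size=4):
        self.slots = [None] * size

    def claim_spot(self, interrupt_id):
        for i, slot in enumerate(self.slots):
            if slot is None:            # found an available spot
                self.slots[i] = interrupt_id
                return True
        return False                    # list occupied

def deliver_guest_interrupt(interrupt_id, interrupt_list, state):
    """Route one guest-owned interrupt, honoring the list-occupancy rule."""
    if interrupt_list.claim_spot(interrupt_id):
        return "hlos_guest"                          # direct, VMM bypassed
    state["direct_assignment_enabled"] = False       # disable until drained
    return "vmm"         # VMM retries delivery to the guest until successful

state = {"direct_assignment_enabled": True}
il = InterruptList(size=1)
print(deliver_guest_interrupt(42, il, state))   # → hlos_guest
print(deliver_guest_interrupt(43, il, state))   # → vmm
print(state["direct_assignment_enabled"])       # → False
```

With a one-slot list, the second interrupt finds the list occupied, falls back to the VMM, and direct assignment is switched off, matching the disabling step of the aspect method above.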
[0006] In an aspect in which the configuration of the interrupt identifier comprises an interrupt security group value, the method may further include correlating the interrupt security group value with the interrupt, in which the interrupt security group value indicates an interrupt type, and determining whether the interrupt is a secure interrupt type or a non-secure interrupt type, in which routing the interrupt to the trusted execution environment when the configuration of the interrupt identifier indicates the associated security level includes routing the interrupt to the trusted execution environment on a processor when the interrupt security group value indicates the interrupt is of the secure interrupt type, and in which correlating the interrupt direct assignment value with the interrupt includes correlating the interrupt direct assignment value with the interrupt when the interrupt security group value indicates the interrupt is of the non-secure interrupt type. An aspect method in which routing the interrupt to the high level operating system guest virtual machine as the fast virtual interrupt or the normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt includes routing the interrupt to a normal execution environment on the processor, and in which routing the interrupt to the virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt includes routing the interrupt to the normal execution environment on the processor.
[0007] An aspect includes a computing device, including a first processor configured to run a high level operating system guest virtual machine, a second processor configured to run a virtual machine monitor, an interrupt direct assignment control register configured to store a direct assignment control value, an interrupt direct assignment register configured to store interrupt direct
assignment values of interrupts, and an interrupt distributor coupled to the interrupt direct assignment control register, the interrupt direct assignment register, the first processor, and the second processor, in which the interrupt distributor is configured to perform operations including routing the interrupt to a trusted execution environment when a configuration of an interrupt identifier indicates an associated security level, correlating an interrupt direct assignment value with the interrupt, in which the interrupt direct assignment value indicates an owner of the interrupt, routing the interrupt to the high level operating system guest virtual machine as a fast virtual interrupt or a normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt, and routing the interrupt to the virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt.

[0008] An aspect includes a computing device, including means for routing an interrupt to a trusted execution environment when a configuration of an interrupt identifier indicates an associated security level, means for correlating an interrupt direct assignment value with the interrupt, in which the interrupt direct assignment value indicates an owner of the interrupt, means for routing the interrupt to a high level operating system guest virtual machine as a fast virtual interrupt or a normal virtual interrupt when the interrupt direct assignment value indicates the high level operating system guest is the owner of the interrupt, and means for routing the interrupt to a virtual machine monitor when the assignment value indicates the virtual machine monitor is the owner of the interrupt.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary aspects of the invention, and together
with the general description given above and the detailed description given below, serve to explain the features of the invention.

[0010] FIG. 1 is a component block diagram illustrating an apparatus having a plurality of execution environments configured to process secure and non-secure, physical and virtual interrupts, respectively, in accordance with an aspect.

[0011] FIG. 2 is a component block diagram illustrating an apparatus having a plurality of execution environments configured to process secure and non-secure, physical and virtual interrupts, respectively, in accordance with an aspect.

[0012] FIG. 3 is a component block diagram illustrating an apparatus configured to directly assign virtual interrupts to a processor running the HLOS Guest owner of the interrupt, in accordance with an aspect.

[0013] FIG. 4 is a schematic process flow diagram illustrating an aspect method for virtual interrupt direct assignment managing non-secure and secure interrupts.

[0014] FIG. 5 is a schematic process flow diagram illustrating an aspect method for virtual interrupt direct assignment managing interrupts owned by the HLOS Guest.

[0015] FIG. 6 is a schematic process flow diagram illustrating an aspect method for virtual interrupt direct assignment managing interrupts owned by the VMM and interrupts owned by the HLOS Guest.

[0016] FIG. 7 is a component block diagram illustrating an exemplary mobile device suitable for use with the various aspects.

[0017] FIG. 8 is a component block diagram illustrating an exemplary mobile device suitable for use with the various aspects.

DETAILED DESCRIPTION

[0018] The various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.

[0019] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations.

[0020] The terms "computing device" and "mobile device" are used interchangeably herein to refer to any one or all of cellular telephones, smartphones, personal or mobile multi-media players, personal data assistants (PDAs), laptop computers, tablet computers, smartbooks, ultrabooks, palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, wireless gaming controllers, and similar personal electronic devices which include a memory and a programmable processor. While the various aspects are particularly useful for mobile computing devices, such as smartphones, which have limited resources, the aspects are generally useful in any electronic device that implements a virtual machine or high level operating system Guest, and routes and processes interrupt requests for the mobile device hardware and the high level operating system Guest.

[0021] The terms "system-on-chip" (SoC) and "integrated circuit" are used interchangeably herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including a hardware core, a memory, and a communication interface. A hardware core may include a variety of different types of processors, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), an auxiliary processor, a single-core processor, and a multi-core processor.
A hardware core may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), other programmable logic devices, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon. Such a configuration may also be referred to as the IC components being on a single chip.

[0022] For ease of reference, the appropriate machine/process is referred to as the "owner" of the interrupt and interrupts are referred to as "owned by" the appropriate machine/process.

[0023] Mobile computing systems may be configured to execute operations in a standard execution environment and in a trusted execution environment. In a mobile device the trusted execution environment may be implemented to provide processing for applications including, for example, secured PIN entry for enhanced user authentication in mobile payments and banking, digital rights management (DRM), enterprise and web-based services, anti-malware applications that are protected from software attack, software license management, loyalty-based applications, access control of cloud-based documents, and e-Ticketing for mobile TV. A trusted execution environment enabled system may be achieved by partitioning SoC hardware and software resources so that they exist in one of two worlds: the secure world for the security subsystem, and the normal world for everything else.

[0024] Interrupts on a mobile device implementing a trusted execution environment may be divided into secure and non-secure categories. To manage interrupts in such implementations, conventional systems may use virtual machine monitor or hypervisor software to route secure interrupts to the appropriate virtual machine or processor.
However, the overhead involved in such conventional routing of interrupts can delay responses to interrupts, reduce system responsiveness, and consume system resources in a manner that can impact the user experience.

[0025] In an aspect, a virtual interrupt direct assignment method and apparatus can alleviate slowdowns in secure interrupt response time caused by the overhead associated with virtual machine monitor (VMM) or hypervisor software routing of secure interrupts to an appropriate virtual machine/processor. The method and apparatus can remove the VMM software overhead for secure interrupts owned by the trusted execution environment of the SoC by designating assignment values to interrupts and, based on the assignment values, routing the secure interrupts to the trusted execution environment of the SoC, thereby bypassing the VMM software in the normal environment.

[0026] In an aspect, a virtual interrupt direct assignment method and apparatus can alleviate slowdowns in interrupt response time caused by the overhead associated with VMM or hypervisor software routing of interrupts to an appropriate virtual machine/processor. The method and apparatus can remove the VMM software overhead for interrupts owned by a high level operating system (HLOS) Guest by designating assignment values to interrupts and, based on the assignment values, routing the interrupts to the HLOS Guest, thereby bypassing the VMM software. In some aspects the HLOS Guest may own the majority of the interrupts, and in some instances a high majority, for example approximately 90% of the interrupts.

[0027] In an aspect, the apparatus and method implement virtual interrupt direct assignment, designating interrupt assignment values to identify the owners/intended processes (i.e., the process that should respond to the interrupt) and types of interrupts, and routing the interrupts to the owners according to the assignment values.
A virtual interrupt direct assignment method may be implemented in hardware that is discussed in greater detail below.

[0028] In an aspect, the apparatus and method implement virtual interrupt direct assignment, checking whether the processor running the HLOS Guest is available to accept the interrupt, and assigning the interrupt to the VMM when the processor is unavailable. The VMM may continue to try to provide the interrupt to the processor until the processor accepts the interrupt.

[0029] FIG. 1 illustrates an apparatus having a plurality of execution environments configured to process secure and non-secure, physical and virtual interrupts, respectively, in accordance with an aspect. An SoC 100 of a mobile device may include a processor 102 and interrupt controller hardware 104. The processor 102 may include a number of execution environments, which may result, in part, from partitioning the processor components, resources, and hardware to create separate processing spaces on the SoC 100. In an aspect the execution environments may be implemented on the same or separate cores of a processor 102, or on the same or separate processors 102 of the SoC 100.

[0030] The execution environments may have different characteristics and purposes. In an aspect, an execution environment may be a normal (or non-secure) execution environment 106, intended for execution of normal (or non-secure) processing tasks. Another execution environment may be a trusted (or secure) execution environment 108, intended for execution of processes dealing with sensitive processes and/or information, such as personal or sensitive information, information exposing system vulnerabilities, and/or legally restricted information and processes.

[0031] The normal execution environment 106 may include one or more virtual machines 110, 112, such as an HLOS Guest virtual machine, for managing the processing tasks in the normal execution environment 106.
A VMM 114 may also be included for directing processes to the virtual machine 110, 112 that owns the process. The VMM 114 may also direct processes to the processor 102 itself when the processes are not owned by one of the virtual machines 110, 112. The trusted execution environment may include a trusted execution environment processing space 116, which may be configured to process secure processes. These execution environments 106, 108 and their components may also be configured to manage non-secure and secure interrupts, respectively.

[0032] The interrupt controller hardware 104 may be configured to receive interrupts from various sources. The interrupt controller hardware 104 may identify the type of interrupt for a received interrupt and route the interrupt to the interrupt owner. The interrupt types may include non-secure/secure and physical/virtual interrupts. In an aspect, any secure interrupt may be routed directly to the trusted execution environment processing space 116. Similarly, a non-secure physical interrupt may be routed directly to the VMM 114, and a non-secure virtual interrupt may be routed directly to the appropriate virtual machine 110, 112 which owns the non-secure virtual interrupt. Aspects of identifying the type of interrupt and routing the interrupts by the interrupt controller hardware are further described below.

[0033] FIG. 2 illustrates an apparatus having a plurality of execution environments configured to process secure and non-secure, physical and virtual interrupts, respectively, in accordance with an aspect. The interrupt controller 104 may include an interrupt distributor 200, which may be configured to determine the type of interrupt received from an interrupt source 202, such as hardware or software of the mobile device, hardware of a device connected to the mobile device, like a peripheral device, or software running on the connected device.
The interrupt distributor 200 may retrieve one or more interrupt identifiers associated with a received interrupt, for example, by looking up the interrupt number and finding the identifiers associated with the specific interrupt. The interrupt distributor 200 may interpret the identifiers to route the received interrupt to the appropriate owner for processing.

[0034] In an aspect, a configuration of the interrupt identifier of the received interrupt may indicate to the interrupt distributor 200 a security level associating the received interrupt with one of the execution environments. In an aspect, an interrupt security group identifier may be a value 204 that identifies the received interrupt as either a non-secure interrupt or a secure interrupt. The interrupt security group value 204 may include any of a variety of known data types or a variety of known codes that can represent a finite number of characteristics. For example, the interrupt security group value may be represented by a one-bit binary code representing the two options of secure and non-secure interrupts. In other aspects there may be more than two security characteristics for the received interrupt, and more robust representations of the characteristics may be used. Continuing with the example illustrated in FIG. 2, the interrupt distributor 200 may retrieve the interrupt security group identifier having a value of "0" and interpret the value 204 to indicate that the received interrupt is a secure interrupt. Alternatively, the interrupt distributor 200 may retrieve the interrupt security group identifier having a value of "1" and interpret the value 204 to indicate that the received interrupt is a non-secure interrupt.

[0035] When the interrupt security group identifier has the value of "0" in the example illustrated in FIG.
2, the interrupt distributor 200 may route the interrupt to a physical interrupt interface 208, which may route the interrupt to the processor 102, and specifically to the trusted execution environment processing space 116. Because the interrupt security group identifier has the value of "0," the interrupt controller 104 knows that the interrupt is to be processed in the trusted execution environment 108, and therefore may bypass sending the interrupt to the VMM 114 to determine the owner of and route the interrupt. The hardware implementation of the interrupt controller 104 and the avoidance of the VMM 114 software reduce the time needed to route the interrupt to the known owner, in this case the trusted execution environment processing space 116.

[0036] In an aspect, the secure interrupt may be received by the processor 102 and checked by a secure monitor 210 for the interrupt security group identifier. The secure monitor 210 may manage when the processor is in a secure state or a non-secure state, so the secure monitor 210 may check the interrupt security group identifier to determine whether the processor needs to switch states to handle the interrupt. When not already in a secure state, the secure monitor 210 may change the state of the processor 102 and allow the interrupt to pass to the trusted execution environment processing space 116. When already in a secure state, the secure monitor 210 may make no changes to the state of the processor and allow the interrupt to pass to the trusted execution environment processing space 116.

[0037] When the interrupt security group identifier has the value of "1" in the example illustrated in FIG. 2, the interrupt distributor 200 may retrieve an interrupt direct assignment identifier.
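The one-bit security group check described above can be sketched in software; the function and the returned target names are illustrative assumptions for exposition, not part of the hardware described in this specification:

```python
# Hedged sketch of the security group routing: a value of "0" sends the
# interrupt through the physical interrupt interface to the trusted execution
# environment, bypassing the VMM; a value of "1" marks it for the direct
# assignment lookup that follows. Names are assumed for illustration.

SECURE, NON_SECURE = 0, 1

def route_by_security_group(security_group_value):
    """Return the next routing stage implied by the one-bit value 204."""
    if security_group_value == SECURE:
        # Secure interrupt: bypass the VMM entirely
        return "trusted_execution_environment"
    # Non-secure interrupt: continue to the direct assignment identifier
    return "direct_assignment_lookup"
```

In this sketch, `route_by_security_group(0)` models the bypass of the VMM software for secure interrupts.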
The interrupt distributor 200 may retrieve an interrupt direct assignment value 206 of the identifier that identifies the received interrupt as either a regular virtual interrupt, a fast virtual interrupt, a physical interrupt, or an unrecognized signal. The interrupt direct assignment value 206 may include any of a variety of known data types or a variety of known codes that can represent a finite number of characteristics. For example, the interrupt direct assignment value may be represented by a two-bit binary code representing four options, like the options noted above. In other aspects, there may be more or fewer than four direct assignment characteristics for the received interrupt, including a variety of speeds or priorities of interrupts, and more or less robust representations of the characteristics may be used.

[0038] Continuing with the example illustrated in FIG. 2, the interrupt distributor 200 may retrieve the interrupt direct assignment identifier 206 and use the value to determine the type of interrupt. For example, an interrupt direct assignment identifier value of "00" may indicate that the received interrupt is a regular virtual interrupt. As another example, an interrupt direct assignment identifier value of "01" may indicate that the received interrupt is a fast virtual interrupt. As another example, an interrupt direct assignment identifier value of "10" may indicate that the received interrupt is a physical interrupt. As another example, an interrupt direct assignment identifier value of "11" may indicate that the received interrupt is an unrecognized signal.

[0039] Continuing with the example illustrated in FIG.
2, when an interrupt has an interrupt direct assignment identifier value of "00" or "01," the interrupt distributor 200 may route the interrupt to a virtual interrupt interface 212, which may route the interrupt to the processor 102, and specifically to the HLOS Guest 110. Because the interrupt security group identifier has the value of "1," the interrupt controller 104 knows that the interrupt does not need to be processed in the trusted execution environment 108. And, because the interrupt direct assignment identifier has the value of "00" or "01," the interrupt controller 104 knows that the interrupt is owned by the HLOS Guest 110 by virtue of being a virtual interrupt. Therefore, the interrupt controller 104 may bypass sending the interrupt to the VMM 114 to determine the owner of and route the interrupt. The hardware implementation of the interrupt controller 104 and the avoidance of the VMM 114 software reduce the time needed to route the interrupt to the known owner, in this case the HLOS Guest 110.

[0040] In an aspect, the processor 102 may be busy and the HLOS Guest 110 may not be able to immediately accept the interrupt. In such circumstances, the interrupt controller 104 may route the interrupt to the VMM 114, which may operate normally to determine the owner of the interrupt and route the interrupt, in this case to the HLOS Guest 110. If the processor 102 continues to be busy, and the HLOS Guest 110 continues to be unable to accept the interrupt, the VMM 114 may continue to try to route the interrupt until it is successful.

[0041] When the interrupt direct assignment identifier has the value of "10," the interrupt distributor 200 may route the interrupt to the physical interrupt interface 208, which may route the interrupt to the processor 102, and specifically to the VMM 114.
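The two-bit decoding described above might be modeled as a simple lookup table; the tuple contents and interface names are an illustrative assumption under the example encodings ("00" regular virtual, "01" fast virtual, "10" physical, "11" unrecognized):

```python
# Assumed decode table for the two-bit interrupt direct assignment value.
# Virtual interrupts ("00"/"01") are owned by the HLOS Guest and go to the
# virtual interrupt interface; "10" physical interrupts go to the physical
# interrupt interface for the VMM; "11" is an unrecognized signal.

DIRECT_ASSIGNMENT = {
    0b00: ("regular_virtual", "virtual_interrupt_interface"),
    0b01: ("fast_virtual", "virtual_interrupt_interface"),
    0b10: ("physical", "physical_interrupt_interface"),
    0b11: ("unrecognized", None),
}

def decode_direct_assignment(value):
    """Return (interrupt type, interface) for a two-bit assignment value."""
    return DIRECT_ASSIGNMENT[value & 0b11]
```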
Because the interrupt security group identifier has the value of "1," the interrupt controller 104 knows that the interrupt does not need to be processed in the trusted execution environment 108. And, because the interrupt assignment identifier has the value of "10," the interrupt controller 104 knows that the interrupt may pass to the VMM 114 to determine the owner of and route the interrupt, by virtue of being a physical interrupt.

[0042] In an aspect, when the interrupt is a physical interrupt, the interrupt may pass through the secure monitor 210 while being routed from the interrupt controller 104 to the VMM 114. As previously discussed, the secure monitor 210 may check the interrupt security group identifier to determine whether to switch the processor state between secure and non-secure processing. In this example, the interrupt security group identifier has the value of "1" indicating a non-secure interrupt, so the secure monitor 210 may maintain a non-secure state or switch from a secure state to a non-secure state to process the interrupt.

[0043] When the interrupt direct assignment identifier has the value of "11," the interrupt distributor 200 may ignore the interrupt or discard it. An interrupt direct assignment identifier value of "11" may indicate an unexpected or unrecognized signal. Depending on a state or condition of the mobile device, a known interrupt number may be correlated with different interrupt direct assignment values 206 at different times. When the interrupt direct assignment identifier has the value of "11" for a known interrupt number, i.e. the interrupt number and the interrupt direct assignment value 206 are correlated, this may indicate that the known interrupt number is unexpected for the current state or condition of the mobile device. The interrupt distributor 200 may also retrieve an interrupt direct assignment identifier value of "11" for all unknown interrupt numbers, i.e.
interrupt numbers that do not have a correlated interrupt direct assignment value 206.

[0044] FIG. 3 illustrates an apparatus configured to directly assign virtual interrupts to a processor running the HLOS Guest owner of the interrupt, in accordance with an aspect. The interrupt controller (IC) 104 may be communicatively connected to the processors 102 and the interrupt source device 202. The interrupt controller 104 may also include the interrupt distributor 200 and the virtual interrupt interface 212 (or virtual processor interface) as described herein. Further, the interrupt controller 104 may include one or more direct assignment control registers 300, direct assignment identifier registers 302, control interfaces 304, processor interfaces 306, and list registers 308. The interrupt distributor 200 may include an interrupt distributor interface 310. A number of these components may be connected by a memory mapped input/output (MMIO) interface 312, including, for example, the processors 102, the interrupt controller 104, the interrupt distributor 200, the interrupt distributor interface 310, the control interfaces 304, and the virtual interrupt interfaces 212.

[0045] In an aspect, an interrupt may arise from an interrupt source device 202 connected to the apparatus, as a peripheral interrupt of various types, such as a private peripheral interrupt (PPI) or a shared peripheral interrupt (SPI). The private peripheral interrupt may be routed to a particular processor interface 306. The shared peripheral interrupt may be assigned to any processor interface 306. The interrupt signals may be directed to and received by the interrupt controller 104.
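The PPI/SPI targeting distinction above can be sketched as follows; the function is a hypothetical illustration of the eligibility rule, not the hardware routing logic itself:

```python
# Sketch of peripheral interrupt targeting: a private peripheral interrupt
# (PPI) is routed to its particular processor interface 306, while a shared
# peripheral interrupt (SPI) may be assigned to any processor interface.

def candidate_interfaces(kind, bound_interface, all_interfaces):
    """Return the processor interfaces eligible to receive the interrupt."""
    if kind == "PPI":
        return [bound_interface]
    if kind == "SPI":
        return list(all_interfaces)
    raise ValueError("unknown peripheral interrupt kind: %s" % kind)
```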
The direct assignment control register 300 may store an interrupt controller hypervisor direct assignment control value (ICH_AssignControl), which may determine whether to allow the apparatus to retrieve an interrupt direct assignment identifier for an interrupt to implement the virtual interrupt direct assignment, or to disable the interrupt direct assignment identifier retrieval. The latter effectively disables the virtual interrupt direct assignment. The direct assignment control register 300 may also store an interrupt controller hypervisor direct assignment disable status value (ICH_AssignDisableStatus), which may indicate whether there is an available hardware register to accept an interrupt via the virtual interrupt direct assignment. The direct assignment control register 300 may be part of the interrupt controller 104, part of the interrupt distributor 200, or a separate component from the interrupt controller 104. There may be an ICH_AssignControl and an ICH_AssignDisableStatus for each processor 102 of the apparatus, identified by each processor's P_INDEX value.

[0046] The direct assignment identifier register 302 may store a plurality of interrupt controller distributor direct assignment values (ICD_ASSIGNn), which are analogous to the interrupt direct assignment values described herein. These ICD_ASSIGNn values are the interrupt assignment values that identify whether the HLOS Guest or the VMM software owns the interrupts and the types of interrupts. The direct assignment identifier register 302 may store a relation of each interrupt number with its ICD_ASSIGNn value. The direct assignment identifier register 302 may be part of the interrupt controller 104, part of the interrupt distributor 200, or a separate component from the interrupt controller 104.

[0047] The interrupt distributor 200 may be located on the interrupt controller 104 and be capable of receiving the peripheral interrupts (e.g., PPI and SPI).
The interrupt distributor 200 may also include the interrupt distributor interface 310 that may receive software generated interrupts (SGI) from the VMM, HLOS Guest, or other software. The interrupt distributor interface 310 may also route the interrupts to the appropriate processor 102. The interrupt distributor 200 may access the direct assignment identifier register 302 to correlate the interrupts on the interrupt distributor 200 with their related ICD_ASSIGNn values. In an aspect, SPIs may be correlated with their related ICD_ASSIGNn values while PPIs do not need to be correlated with their related ICD_ASSIGNn value because they are correlated with a specific processor interface 306.

[0048] The interrupt controller 104 may also include interrupt lists in the form of the list registers 308 which, when they have an open spot, accept the virtual normal and fast interrupts assigned to the HLOS Guest by the interrupt distributor 200 and routed from the interrupt distributor interface 310. The list registers 308 may store the interrupt numbers in interrupt controller hypervisor list register structures (ICH_LRn). The ICH_LRn may store values for identifying the virtual interrupts (VirtualID), which are the same as the values identifying the corresponding physical interrupts, and values for identifying the type of interrupt (Grp). The virtual processor interfaces 212 may also be a part of the interrupt controller 104, and control passing of the virtual interrupts to the processor interfaces 306, which pass the interrupts (physical and virtual) to the processors 102 running the VMM or HLOS Guest depending on the ownership of the interrupts.
Virtual processor interfaces 212 may be dedicated to handling a particular type of interrupt, such as being dedicated to handling fast virtual interrupts or normal virtual interrupts.

[0049] The control interfaces 304 interface with the VMM software when there are no spots available in the list registers 308 for an interrupt. The control interfaces 304 may allow the VMM software to interface with the direct assignment control register 300 and to set the values for the ICH_AssignControl and the ICH_AssignDisableStatus. Thus, the control interfaces 304 may allow switching back and forth between a virtual interrupt direct assignment mode, potentially bypassing the VMM software, and a VMM mode, including the VMM software in the interrupt assignment process.

[0050] In an aspect the apparatus may implement a virtual interrupt direct assignment upon receiving an interrupt by the interrupt distributor 200. The interrupt distributor 200 may check whether the HLOS Guest's interrupt controller hypervisor hardware running on a processor 102 (having a P_INDEX value) has any available associated list registers 308. When there is availability in the list register 308 associated with the processor 102 and the HLOS Guest, the interrupt distributor 200 may check the direct assignment identifier register 302 for the ICD_ASSIGNn value associated with the received interrupt. When the ICD_ASSIGNn value indicates that the interrupt is owned by the HLOS Guest, the interrupt distributor 200 may set the VirtualID value identifying the virtual interrupt to the ICH_LRn at the available spot in the list register 308. When the ICD_ASSIGNn value signifies a normal virtual interrupt, the Grp value "1" may be set to the ICH_LRn to signify the normal virtual interrupt.
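Under assumed data structures (a Python list standing in for the ICH_LRn slots, and a dict standing in for the direct assignment identifier register 302), the direct assignment flow just described might look like:

```python
# Minimal sketch of virtual interrupt direct assignment: when a list register
# slot is free and the ICD_ASSIGNn value marks the interrupt as HLOS-Guest
# owned, write an entry whose VirtualID equals the physical interrupt ID and
# whose Grp is "1" for a normal or "0" for a fast virtual interrupt. The
# capacity and dict/list representations are assumptions for illustration.

NORMAL_VIRTUAL, FAST_VIRTUAL = 0b00, 0b01

def direct_assign(interrupt_id, icd_assign, list_register, capacity=4):
    """Return the ICH_LRn entry written, or None to fall back to the VMM."""
    if len(list_register) >= capacity:
        return None                       # no open spot: VMM handles it
    assign = icd_assign.get(interrupt_id, 0b11)
    if assign not in (NORMAL_VIRTUAL, FAST_VIRTUAL):
        return None                       # physical or unrecognized signal
    entry = {"VirtualID": interrupt_id,   # same as the physical interrupt ID
             "Grp": 0 if assign == FAST_VIRTUAL else 1}
    list_register.append(entry)
    return entry
```

A `None` return models the two fallback paths described in this section: no open list register spot, or an interrupt that is not owned by the HLOS Guest.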
Similarly, when the ICD_ASSIGNn value signifies a fast virtual interrupt, the Grp value "0" may be set to the ICH_LRn to signify the fast virtual interrupt.

[0051] In an aspect, a fast virtual interrupt may have priority over a normal virtual interrupt. When a fast virtual interrupt is identified, it may be assigned to a list register 308 designated for fast virtual interrupts. Alternatively, interrupts already listed in the list register 308 may be shifted to allow for the fast virtual interrupt to be within the structure of the list register 308, for example a linked list, such that the fast virtual interrupt may be processed sooner than the normal virtual interrupts in the list register 308.

[0052] When there is no availability in the list register 308 associated with the processor 102 for receiving the interrupt, the control interface 304 may connect the VMM software and the direct assignment control register 300. Through the control interface 304, the VMM software may disable the ability to bypass the VMM by writing a value, such as "0," to the ICH_AssignControl of the direct assignment control register 300 for the processor 102. The VMM software may repeatedly poll the ICH_AssignDisableStatus of the direct assignment control register 300 for the processor 102 until its value changes to signify availability in the list register 308, such as by having a value of "1." While the direct assignment control register 300 remains disabled, the VMM software may process the interrupts and assign them by common convention. In an aspect, once the VMM software assigns an interrupt, it may enable the direct assignment function by changing the direct assignment control register's values. When still no availability is found in the list register 308, the VMM software may again disable the direct assignment function.
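The disable-and-poll fallback just described might be sketched as below; the register dictionary and the polling callback are assumptions standing in for MMIO accesses to the direct assignment control register 300:

```python
# Hedged sketch of the VMM fallback: clear ICH_AssignControl to disable
# direct assignment, poll ICH_AssignDisableStatus until it signals an open
# list register slot ("1"), then re-enable direct assignment. While the
# loop runs, the VMM would assign interrupts by its common convention.

def vmm_wait_for_slot(control_register, read_disable_status, max_polls=1000):
    """Return True once a slot frees up and direct assignment is re-enabled."""
    control_register["ICH_AssignControl"] = 0   # disable bypass of the VMM
    for _ in range(max_polls):
        if read_disable_status() == 1:          # availability signaled
            control_register["ICH_AssignControl"] = 1
            return True
    return False                                # still full; VMM keeps routing
```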
This process may be performed repeatedly.

[0053] By disabling the direct assignment function, the VMM software avoids a race condition between the VMM software and the interrupt controller 104 to update the list registers 308 with interrupt information. Similarly, if the VMM software has any other reason to update the list registers 308, it will disable the direct assignment control register 300 to avoid the list registers 308 being updated by different sources.

[0054] FIG. 4 illustrates an aspect method 400 for virtual interrupt direct assignment managing of non-secure and secure interrupts. The SoC, including the processors having the secure and non-secure execution environments and the interrupt controller including its components as described herein, may implement this method 400. In block 402 the SoC may receive an interrupt. The interrupt may originate from hardware or software, and be categorized as a secure, non-secure, virtual, and/or physical interrupt. Each interrupt may also have an interrupt number or identification (such as a virtual interrupt identifier/Virtual ID or physical interrupt identifier/Physical ID) to help the processor and/or the HLOS Guest identify the interrupt and the task to which it relates. In block 404 the SoC may identify the interrupt number of the interrupt, which the SoC may use to route the interrupt and the processor may use to execute the interrupt.

[0055] In block 406 the SoC may retrieve the interrupt security group value associated with the interrupt number. As described above, the interrupt security group value is the value for the interrupt security group identifier of the interrupt. The SoC may use the interrupt security group value to determine whether the interrupt is classified as a secure or non-secure interrupt, indicating what type of processing environment the interrupt requires.
This information may aid the SoC to determine the processor to which the interrupt should be routed and SoC components through which to route the interrupt. In determination block 408 the SoC may determine whether the interrupt security group value indicates that the interrupt is a secure interrupt or a non-secure interrupt. When the interrupt security group value indicates that the interrupt is a secure interrupt (i.e., determination block 408 = "Yes"), the SoC may route the secure interrupt to the appropriate interrupt interface in block 410. When the processor is available to receive an interrupt, the SoC may provide the secure interrupt to the trusted execution environment of the processor for processing in block 412. As previously described, the processor may implement a secure monitor to check the interrupt to determine whether the processor needs to change states between secure processing and non-secure processing in order to handle the received interrupt. When the interrupt security group value indicates that the interrupt is a non-secure interrupt (i.e., determination block 408 = "No"), the SoC may determine that the interrupt is a non-secure interrupt and may perform the operations in block 502 of method 500 described below with reference to FIG. 5.[0056] FIG. 5 illustrates an aspect method 500 for virtual interrupt direct assignment managing interrupts owned by the HLOS Guest. The SoC, including the processors having the secure and non-secure executing environments and the interrupt controller including its components as described herein, may implement this method 500. In block 502 the SoC may retrieve the interrupt direct assignment value associated with the interrupt number. As described herein, the interrupt direct assignment value is the value for the interrupt direct assignment identifier of the interrupt. 
The SoC may use the interrupt direct assignment value to determine whether the interrupt is owned by the VMM or by the HLOS Guest, and if owned by the HLOS Guest, to determine whether the interrupt is a normal or fast interrupt. This information may aid the SoC to determine the processor to which the interrupt should be routed and the SoC components through which to route the interrupt. In determination block 504 the SoC may determine whether the interrupt direct assignment value indicates that the HLOS Guest is the interrupt owner, or that the VMM is the interrupt owner. When the interrupt direct assignment value indicates that the interrupt is not owned by the HLOS Guest (i.e., determination block 504 = "No"), the SoC may determine that the VMM is the interrupt owner and perform the operations in block 602 of method 600 described below with reference to FIG. 6.[0057] When the SoC determines that the HLOS Guest is the interrupt owner (i.e., determination block 504 = "Yes"), in determination block 506 the SoC may determine whether the interrupt direct assignment value indicates that the interrupt is a fast interrupt, or a normal interrupt. As described herein, the application of this method 500 is not limited to just two speeds or priorities of interrupts, and the direct assignment characteristics of an interrupt may indicate a variety of speeds or priorities. In this example, a fast interrupt holds a higher priority than a normal interrupt, which may affect the routing and processing of the interrupts. When the interrupt direct assignment value indicates that the interrupt is a fast interrupt (i.e., determination block 506 = "Yes"), in determination block 508 the SoC may determine whether a register is available to trigger the fast interrupt on the target processor. The registers may hold a number of pending interrupts, and the slots of the register may provide for holding the interrupt number and the interrupt security group value.
The register slots may also hold the interrupt direct assignment value; however, in some aspects this may not be necessary as the interrupt may be routed through hardware dedicated to routing the interrupt to a particular owner. When the SoC determines that the registers are not available to trigger the fast interrupt on the target processor (i.e., determination block 508 = "No"), the SoC may perform the operations in block 606 of method 600 described below with reference to FIG. 6.[0058] When the SoC determines that a register is available to trigger the fast interrupt on the target processor (i.e., determination block 508 = "Yes"), the SoC may route the fast interrupt to the appropriate interrupt interface in block 510. In an aspect, the SoC may include dedicated interfaces for the fast interrupts. In another aspect, the SoC may manage placement of the fast interrupts in the registers such that the interrupts are routed to the interfaces at a time in accordance with the interrupt speed or priority relative to the other interrupts in the register. In block 512 the SoC may provide the fast interrupt to the appropriate HLOS Guest owner of the interrupt, located on the processor, for processing the fast interrupt.[0059] When the interrupt direct assignment value indicates that the interrupt is not a fast interrupt (i.e., determination block 506 = "No"), the SoC may determine that the interrupt is a normal interrupt, and in determination block 514 the SoC may determine whether a register is available to trigger the normal interrupt on the target processor. When the SoC determines that the registers are not available to trigger the normal interrupt on the target processor (i.e., determination block 514 = "No"), the SoC may perform the operations in block 606 of method 600 described below with reference to FIG. 6.
When the SoC determines that a register is available to trigger the normal interrupt on the target processor (i.e., determination block 514 = "Yes"), in block 516 the SoC may route the normal interrupt to the appropriate interrupt interface. In an aspect, the SoC may include dedicated interfaces for the normal interrupts. In another aspect, the SoC may manage placement of the normal interrupts in the registers such that the interrupts are routed to the interfaces at a time in accordance with the interrupt speed or priority relative to the other interrupts in the register. In block 518 the SoC may provide the normal interrupt to the appropriate HLOS Guest owner of the interrupt, located on the processor, for processing the normal interrupt. [0060] FIG. 6 illustrates an aspect method 600 for virtual interrupt direct assignment managing interrupts owned by the VMM and interrupts owned by the HLOS Guest. The SoC, including the processors having the secure and non-secure executing environments and the interrupt controller including its components as described herein, may implement this method 600. In determination block 602 the SoC may determine whether the interrupt direct assignment value indicates a fault (or an unexpected or unrecognized interrupt), or whether the VMM is the interrupt owner. When the SoC determines that the interrupt direct assignment value indicates a fault (i.e., determination block 602 = "Yes"), the SoC may discard the interrupt in block 604. When the SoC determines that the interrupt direct assignment value does not indicate a fault (i.e., determination block 602 = "No"), the SoC may determine that the VMM is the interrupt owner. In block 606 the SoC may have either a VMM owned interrupt or an HLOS Guest owned interrupt that is not able to be placed in a register for the HLOS Guest; thus the SoC may route the interrupt to the appropriate interrupt interface such that the interrupt is provided to the VMM regardless of the owner.
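Taken together, the decision points of methods 400, 500, and 600 form a small decision tree. The following sketch condenses them into one function; the field names, string labels, and single-function shape are assumptions for illustration (the hardware routes interrupts through dedicated interfaces rather than returning labels).

```python
# Hedged sketch of the combined routing decisions. Block numbers in the
# comments refer to FIGs. 4-6 as described in the text.

def route_interrupt(security_group, owner, speed, register_available):
    """Return which handler ultimately receives the interrupt.

    security_group:     "secure" or "non-secure"  (method 400, block 408)
    owner:              "HLOS", "VMM", or "fault" (blocks 504 and 602)
    speed:              "fast" or "normal"        (method 500, block 506)
    register_available: True if a register slot can trigger the interrupt
    """
    if security_group == "secure":              # block 408 = "Yes"
        return "trusted-execution-environment"  # blocks 410-412
    if owner == "fault":                        # block 602 = "Yes"
        return "discard"                        # block 604
    if owner == "HLOS" and register_available:  # blocks 508/514 = "Yes"
        return f"HLOS-{speed}"                  # blocks 510-512 and 516-518
    # VMM-owned, or HLOS-owned with no free register: route to the VMM,
    # which then determines the owner itself (blocks 606-622).
    return "VMM"
```

Note the fallback branch: an HLOS Guest interrupt that cannot be placed in a register is handled exactly like a VMM-owned interrupt at the routing stage, and the VMM sorts out ownership on the processor.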
In an aspect, the SoC may include dedicated interfaces for the VMM owned interrupts and HLOS Guest interrupts to be routed to the VMM. In another aspect, the SoC may manage placement of the VMM owned interrupts in the registers such that the interrupts are routed to the interfaces at a time in accordance with VMM ownership of the interrupt. In block 608 the SoC may provide the interrupt, regardless of the owner, to the VMM on the processor to determine the owner of the interrupt.[0061] In determination block 610, the processor may determine whether the interrupt owner is the HLOS Guest or the VMM. When the processor determines that the interrupt is not owned by the HLOS Guest (i.e., determination block 610 = "No"), the processor may determine that the VMM is the interrupt owner and the processor may process the interrupt in block 612. When the processor determines that the HLOS Guest is the interrupt owner (i.e., determination block 610 = "Yes"), the processor may determine whether the interrupt is a fast interrupt or a normal interrupt in determination block 614. When the processor determines that the interrupt is a fast interrupt (i.e., determination block 614 = "Yes"), the processor may route the fast interrupt to the appropriate interrupt interface in block 616. In block 618 the processor may provide the fast interrupt to the appropriate HLOS Guest owner of the interrupt for processing the fast interrupt.[0062] When the processor determines that the interrupt is not a fast interrupt (i.e., determination block 614 = "No"), the processor may determine that the interrupt is a normal interrupt and the processor may route the normal interrupt to the appropriate interrupt interface in block 620. In block 622 the processor may provide the normal interrupt to the appropriate HLOS Guest owner of the interrupt for processing the normal interrupt.[0100] FIG. 7 illustrates an exemplary mobile device suitable for use with the various aspects.
The mobile device 700 may include a processor 702 coupled to a touchscreen controller 704 and an internal memory 706. The processor 702 may be one or more multicore integrated circuits designated for general or specific processing tasks. The internal memory 706 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. The touchscreen controller 704 and the processor 702 may also be coupled to a touchscreen panel 712, such as a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared sensing touchscreen, etc. Additionally, the display of the mobile device 700 need not have touch screen capability.[0101] The mobile device 700 may have one or more radio signal transceivers 708 (e.g., Peanut, Bluetooth, Zigbee, Wi-Fi, RF radio) and antennae 710, for sending and receiving communications, coupled to each other and/or to the processor 702. The transceivers 708 and antennae 710 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile device 700 may include a cellular network wireless modem chip 716 that enables communication via a cellular network and is coupled to the processor. [0102] The mobile device 700 may include a peripheral device connection interface 718 coupled to the processor 702. The peripheral device connection interface 718 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as USB, FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 718 may also be coupled to a similarly configured peripheral device connection port (not shown).[0063] The mobile device 700 may also include speakers 714 for providing audio outputs.
The mobile device 700 may also include a housing 720, constructed of a plastic, metal, or a combination of materials, for containing all or some of the components discussed herein. The mobile device 700 may include a power source 722 coupled to the processor 702, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile device 700. The mobile device 700 may also include a physical button 724 for receiving user inputs. The mobile device 700 may also include a power button 726 for turning the mobile device 700 on and off.[0064] The various aspects described above may also be implemented within a variety of mobile devices, such as a laptop computer 800 illustrated in FIG. 8. Many laptop computers include a touchpad touch surface 817 that serves as the computer's pointing device, and thus may receive drag, scroll, and flick gestures similar to those implemented on computing devices equipped with a touch screen display and described above. A laptop computer 800 will typically include a processor 811 coupled to volatile memory 812 and a large capacity nonvolatile memory, such as a disk drive 813 or Flash memory. Additionally, the computer 800 may have one or more antennas 808 for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 816 coupled to the processor 811. The computer 800 may also include a floppy disc drive 814 and a compact disc (CD) drive 815 coupled to the processor 811. In a notebook configuration, the computer housing includes the touchpad 817, the keyboard 818, and the display 819 all coupled to the processor 811.
Other configurations of the computing device may include a computer mouse or trackball coupled to the processor (e.g., via a USB input) as are well known, which may also be used in conjunction with the various aspects.[0065] Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various aspects may be written in a high level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages. Program code or programs stored on a computer readable storage medium as used in this application may refer to machine language code (such as object code) whose format is understandable by a processor.[0066] Many computing devices' operating system kernels are organized into a user space (where non-privileged code runs) and a kernel space (where privileged code runs). This separation is of particular importance in Android and other general public license (GPL) environments where code that is part of the kernel space must be GPL licensed, while code running in the user-space may not be GPL licensed. It should be understood that the various software components/modules discussed here may be implemented in either the kernel space or the user space, unless expressly stated otherwise.[0067] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various aspects must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing aspects may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods.
Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular.[0068] The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various aspects may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.[0069] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.[0070] In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or a non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module that may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.[0071] The preceding description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention.
Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
An insulating layer (3) having an opening portion (3a) at a position conformable to an electrode pad (2) is formed. Next, a resin projection portion (4) is formed on the insulating layer (3). Thereafter, a resist film is formed which has opening portions made in regions conformable to the opening portion (3a), the resin projection portion (4) and the region sandwiched therebetween. A Cu plating layer (6) is formed by electrolytic copper plating, using the resist film as a mask.
A semiconductor package, comprising: an insulating layer (3) formed on a semiconductor wafer (1) that is provided with an electrode (2); an opening portion (3a) made in said insulating layer (3) exposing said electrode (2); a rerouting layer (5, 6) provided on the insulating layer (3) and connected to said electrode (2) through said opening portion (3a); a sealing resin layer (8) which seals said wafer (1), said insulating layer (3), and said rerouting layer (5, 6); a projecting portion (7) penetrating through an opening (10a, 10b) in said sealing resin layer (8a, 8b); and a solder bump (11) being formed on an upper surface of said projecting portion (7); wherein said projecting portion (7) comprises: a resin projection portion (4) formed on said insulating layer (3); and a conductive layer (5, 6) that coats at least an upper surface of said resin projection portion (4) and is connected to said rerouting layer (5, 6) and to said solder bump (11); characterized in that said opening (10a, 10b) in said sealing resin layer (8) is larger than the area of the upper surface of the projecting portion (7) and the side surface of the projecting portion (7) is not completely covered with the sealing resin layer (8a, 8b), wherein the upper surface is that part of the projecting portion (7) that is parallel to the major surface of said wafer (1).
The semiconductor package according to claim 1, wherein an inner surface of said opening (10a) in said sealing resin layer (8a) is inclined inwards to form a groove surrounding said upper surface of said projecting portion (7).
The semiconductor package according to claim 1 or 2, wherein at least part of the side surface of said projecting portion (7) is coated with said sealing resin layer (8b), and said sealing resin layer (8b) is formed to have such a thickness that its upper surface remote from said projecting portion (7) is lower than said upper surface of said projecting portion (7).
The semiconductor package according to one of claims
1 to 3, wherein in a plan view the position of the center of said solder bump (11) is consistent with a position of the center of said resin projection portion (4).
The semiconductor package according to any one of claims 1-4, wherein a shape of said resin projection portion (4) is that of a truncated cone.
A semiconductor package according to any one of claims 1-5, comprising an integrated circuit formed in said wafer (1).
A semiconductor package according to any one of claims 1 to 6, wherein a circuit board is connected to said solder bump (11).
The semiconductor package according to claim 5, wherein the height of said truncated cone is 25 to 100 µm.
The semiconductor package according to any one of claims 1 to 8, wherein the thickness of said insulating layer (3) is 5 to 50 µm.
A method for producing a semiconductor package, comprising the steps of: forming, on a semiconductor wafer (1) that is provided with an electrode (2), an insulating layer (3) provided with an opening portion (3a) exposing said electrode (2); forming a resin projection portion (4) on said insulating layer (3); forming a rerouting layer (5, 6) connected to said electrode (2) through said opening portion (3a); forming a conductive layer (5, 6) connected to said rerouting layer (5, 6) and coating said resin projection portion (4); forming a sealing resin layer (8a, 8b) which seals said wafer (1), said insulating layer (3) and said rerouting layer (5, 6) and has an opening portion (10a, 10b) above said conductive layer (5, 6); and forming a solder bump (11) on said conductive layer (5, 6) in said opening portion (10a, 10b) of said sealing resin layer, characterized in that said opening portion (10a, 10b) in said sealing resin layer (8a, 8b) is larger than the area of the upper surface of a projecting portion (7) comprising said resin projection portion (4) and said conductive layer (5, 6), wherein the upper surface is that part of the projecting portion (7) that is parallel to the major surface
of said wafer (1).
The method for producing a semiconductor package according to claim 10, wherein the step of forming said sealing resin layer (8a, 8b) comprises the steps of: forming a photosensitive resin layer on the entire surface; and forming an opening portion in said photosensitive resin layer, said opening portion exposing said conductive layer on said resin projection portion by photolithography, wherein an area of a topmost portion of said opening (10a, 10b) in said sealing resin layer (8a, 8b) is formed larger than that of said upper surface of said projecting portion (7).
The method for producing a semiconductor package according to claim 10 or 11, wherein in a plan view the position of the center of said solder bump (11) is consistent with a position of said center of said resin projection portion (4).
The method for producing a semiconductor package according to one of claims 10 to 12, wherein a shape of said resin projection portion (4) is that of a truncated cone.
The method for producing a semiconductor package according to claim 13, wherein the height of said truncated cone is 25 to 100 µm.
The method for producing a semiconductor package according to any one of claims 10 to 14, wherein the thickness of said insulating layer (3) is 5 to 50 µm.
Technical Field
The present invention relates to a semiconductor package, such as a wafer level CSP (Chip Size/Scale Package), using no wiring board (interposer), a semiconductor device, an electronic device, and a method for producing the semiconductor package; and particularly to a semiconductor package, a semiconductor device and an electronic device which can be produced with ease, and a method for producing the semiconductor package.
Background Art
In recent years, a development of small-sized semiconductor devices has been promoted. With this development, attention is paid to the miniaturization of the packages of these semiconductor devices. For instance, a variety of semiconductor packages have been proposed in the August issue (1998) and February issue (1999) of Nikkei Micro-device. Among these packages, especially a wafer level CSP using a semiconductor package called CSP has a high effect on the miniaturization of a package and a reduction in costs. This CSP is a package resin-sealed together with a wafer. Fig. 9 is a sectional view showing the structure of a conventional CSP. Incidentally, Fig. 9 shows the condition that the above CSP will be mounted on a printed circuit board, and the vertical positional relation between the parts explained hereinafter is reversed with respect to that of Fig. 9.
In the conventional CSP, plural Al pads 52 are formed on a wafer 51. Also a SiN layer 53 and a polyimide layer 54 which cover the Al pads 52 are formed on the entire surface of the wafer 51. In the SiN layer 53 and the polyimide layer 54, a via hole which reaches the Al pad 52 from the surface of the polyimide layer 54 is formed and a conductive layer 55 is embedded in the via hole. On the polyimide layer 54, a rerouting layer 56 connected to the conductive layer 55 is formed. The rerouting layer 56 is formed of, for example, Cu. A sealing resin layer 57 coating the rerouting layer 56 is formed on the entire surface of the polyimide layer 54.
Inside the sealing resin layer 57, a Cu post 58 which reaches the rerouting layer 56 from the surface of the sealing resin layer 57 is formed as a metal post. A barrier metal layer 59 is formed on the Cu post 58 and a solder ball 60 is formed on the barrier metal layer 59.
Next, a method for producing the conventional CSP as mentioned above will be explained. Figs. 10 (a) to (e) are sectional views showing the method for producing the conventional CSP in step order. Incidentally, the rerouting layer, the polyimide layer and the like are omitted in Figs. 10 (a) to (e).
Firstly, as shown in Fig. 10 (a), a wafer 61 with a flat surface is prepared. As shown in Fig. 10 (b), plural Cu posts 62 are formed on the wafer 61 by plating. Next, as shown in Fig. 10 (c), all Cu posts 62 are resin-sealed such that they are encased to form a sealing resin layer 63. Then, as shown in Fig. 10 (d), the surface of the sealing resin layer 63 is polished to expose each Cu post 62. Thereafter, as shown in Fig. 10 (e), a solder ball 64 is mounted on each Cu post 62.
The CSP as described above is thus formed. This CSP is made into a given size by dicing afterwards.
Since a semiconductor package is in general different from a printed circuit board or the like in thermal expansion coefficient, a stress based on the difference in thermal expansion coefficient focuses on a terminal of the semiconductor package. However, in the above-mentioned CSP, the stress is easily dispersed by making the cylindrical Cu post 62 have a large height.
However, in order to disperse the stress based on the difference in thermal expansion coefficient, it is necessary for a metal post, such as a Cu post, to have a height as large as about 100 µm from the rerouting layer. However, if a metal post having such a height is formed by plating, there is a problem that a remarkably long period of time is required.
This further gives rise to the problems of increased production cost and a difficulty in controlling the height of the metal post.
From JP 11 008 250 A (cf. PATENT ABSTRACTS OF JAPAN, vol. 1999, no. 04, 30 April 1999) a semiconductor integrated circuit device is known, wherein a metallic plating pedestal layer is formed on an organic resin layer formed on a semiconductor substrate and stress is smoothed at the metallic plating pedestal part by constituting the structure as a cantilever structure.
From JP 01 209 746 A (cf. PATENT ABSTRACTS OF JAPAN vol. 013, no. 518 (E-848), 20 November 1989) a flip-chip shaped semiconductor device is known, wherein the ground layer of the bump comprises a heat resisting low-stress resin layer and a conductor wiring layer which is connected in a zigzag pattern.
From US-A-5 844 782 a printed wiring board is known, wherein the formation of cracks in base portions of projecting external electrodes formed on lands on the printed wiring board is prevented by providing a gap between each of the external electrodes and a pattern-protecting film.
From US-A-5 874 782 a semiconductor device and a method of making the same are known, wherein the contact pads are in a raised elevational relationship relative to the surface conductors.
In light of the above problems, the present invention has been made. It is an object of the present invention to provide a semiconductor package, a semiconductor device and an electronic device which make it possible to disperse a stress produced when the package is mounted on a printed circuit board or the like and which can be produced in a short time, and a method for producing the semiconductor package.
Disclosure of the Invention
The object is attained by a semiconductor package according to claim 1 and also attained by a method for producing a semiconductor package according to claim 10.
Further developments of the invention are specified in the dependent claims.In the present invention, the post is provided with the resin projection portion wherein at least the upper surface thereof is coated with the conductive layer. Therefore, in the case that stress is generated in this post, the stress is dispersed mainly by the resin projection portion. For this reason, no thick plating layer is necessary for the post, so that the production process is shortened. Since the height of the post can be controlled by the height of the resin projection portion, the adjustment thereof is easy.By making an area of the opening portion made in the sealing resin layer through which the post penetrates larger than that of the upper surface of the post, the contact area between the solder bump and the conductive layer can be made large. Therefore, the reliability of ensuring electric conduction and adhesive strength is improved. In this case, a boundary between the post and the sealing resin layer may be present outside the upper surface of the post as is viewed in plan.In the case that the inner surface of the opening portion made in the sealing resin layer is inclined inwards to form a groove surrounding a periphery of the upper surface of the post and the boundary is divided by the groove, the flexibility of the deformation of the resin projection portion becomes large on the basis of resin-removal. Thus, the stress is still more easily dispersed.In the case that at least one part of a periphery of the post is coated with the sealing resin layer and the sealing resin layer is formed to have such a thickness that its upper surface apart from the post is lower than the upper surface of the post, the flexibility of the deformation of the resin projection portion becomes large in the same way. 
Thus, the stress is still more easily dispersed.

The stress acting from the solder bump on the resin projection portion can be still more uniformly dispersed if the position of the center of the solder bump is consistent with the position of the center of the resin projection portion as viewed in plan.

Brief Description of the Drawings

Figs. 1 (a) to (c) are sectional views showing a method for producing a semiconductor package in step order;
Figs. 2 (a) to (c) are also views showing the method for producing a semiconductor package, the views being sectional views showing steps subsequent to the steps shown in Fig. 1;
Figs. 3 (a) and (b) are also views showing the method for producing a semiconductor package, the views being sectional views showing steps subsequent to the steps shown in Fig. 2;
Fig. 4 is a view obtained by tracing a photograph showing a state after a seed layer 5 is removed in the method of producing the semiconductor package of Fig. 3(b);
Fig. 5 is a view obtained by tracing a photograph showing a state after a sealing resin layer 8 is formed in the method of producing the semiconductor package of Fig. 3(b);
Fig. 6 is a sectional view showing a semiconductor package produced according to a first embodiment of the present invention;
Fig. 7 is a view obtained by tracing a photograph showing a state after a sealing resin layer 8a is formed in the first embodiment;
Fig. 8 is a sectional view showing a semiconductor package produced according to a second embodiment of the present invention;
Fig. 9 is a sectional view showing the structure of a conventional CSP; and
Figs. 10 (a) to (e) are sectional views showing a method for producing the conventional CSP in step order.

Best Mode for Carrying Out the Invention

A method for producing a semiconductor package according to embodiments of the present invention will be hereinafter explained in detail with reference to the appended drawings. Figs. 1 (a) to (c), Figs. 2 (a) to (c), and Figs. 
3 (a) and (b) are sectional views showing a method for producing a semiconductor package in step order. This method is not part of the invention but promotes the understanding of the same.

In the present method, as shown in Fig. 1 (a), there is first prepared a product wherein a passivation film 9, made of SiN or the like, is directly formed on the entire surface of a Si wafer 1 in which an integrated circuit (not shown) and electrodes thereof, for example, an Al pad 2, are disposed. An opening portion is made at the position conformable to the Al pad 2 in the passivation film 9, so that the Al pad 2 is exposed.

Thereafter, as shown in Fig. 1 (b), an insulating layer 3 made of a resin and having an opening portion 3a at the position conformable to the Al pad 2 is formed. The insulating layer 3 is made of, for example, a polyimide, epoxy or silicone resin. The thickness thereof is, for example, from 5 to 50 µm. The insulating layer 3 can be made by, for example, a spin coating method, a printing method, a laminating method or the like. The opening portion 3a can be made, for example, by depositing a film that is made of polyimide or the like and constitutes the insulating layer 3 on the entire surface and subsequently patterning the film by photolithography.

Next, as shown in Fig. 1 (c), a projection portion 4 that is made of a resin and has a truncated cone shape (trapezoidal section; a resin projection portion having a shape obtained by removing, from a cone, its upper side) is formed on the insulating layer 3, at a position which is apart from the electrode above the wafer. The trapezoidal projection portion 4 is made of, for example, a polyimide, epoxy or silicone resin. The thickness thereof is, for example, from 25 to 100 µm. The projection portion 4 can be formed, from polyimide or the like, by a printing method, a laminating method, a spin coating method or the like.

Subsequently, as shown in Fig. 
2 (a), a thin seed layer 5 for electrolytic plating is formed on the entire surface or on regions requiring it. The seed layer 5 is, for example, a laminate formed by a sputtering method and either consisting of a Cu layer and a Cr layer or consisting of a Cu layer and a Ti layer. The seed layer 5 may also be an electroless Cu plating layer, a metallic thin film layer formed by a vapor deposition method, an application method, a chemical vapor deposition (CVD) method or the like, or a combination of these layers.

Next, a resist film (not shown) for electrolytic plating is formed on the seed layer 5. This resist film is provided with an opening portion formed in a region conformable to the opening portion 3a, the projection portion 4, and the region sandwiched between these portions. The resist film may be formed, for example, by a method of laminating a film resist or a method of spin-coating a liquid resist.

Thereafter, as shown in Fig. 2 (b), a Cu plating layer 6, which is a conductive layer, is formed on the exposed seed layer 5 by electrolytic copper plating, using the resist film as a mask. By the above-mentioned steps, a wiring path (a circuit pattern), made of the Cu plating layer 6, is formed on the Si wafer 1. The thickness of the Cu plating layer 6 is, for example, 5 to 50 µm. Thereafter, for example, a Ni plating layer and a Au plating layer (not shown) may be formed on the Cu plating layer 6 to improve the wettability of a solder bump that will be formed later.

Subsequently, as shown in Fig. 2 (c), the resist film is exfoliated and the unnecessary seed layer 5 which is exposed on the surface of the wafer is removed by etching, so that the insulating layer 3 is laid bare in the region except the conductive layer 6. In this manner, a post 7 coated with the conductive layer is formed on the Si wafer 1. Fig. 
4 is a view obtained by tracing a photograph showing the surface state of the Si wafer 1 after the seed layer 5 is removed in the present method, in which the wafer is viewed diagonally from the side thereof. In Fig. 4, the trapezoidal projection portions 4, the electrodes 2 and the conductive layer 6 for connecting them to each other are shown on the wafer. The conductive layer 6 between the electrode 2 and the projection portion 4 makes the wiring path on the Si wafer 1. As shown in Fig. 4, some wiring paths do not make the shortest straight path between the electrode 2 and the resin projection portion 4, and may be bent.

Subsequently, as shown in Fig. 3 (a), the entire surface is coated with a sealing resin layer 8 for surface-protection, which has a thickness of about 10 to 150 µm, in such a manner that the sealing resin layer 8 swells around the periphery of the surface of the post 7 and only the center thereof is exposed. In other words, the area of an opening portion 10 made in the sealing resin layer 8 is made smaller than that of the upper surface of the post 7. As this sealing resin layer, a polyimide resin, an epoxy resin or a silicone resin can be preferably used. Fig. 5 is a view obtained by tracing a photograph showing the surface state of the semiconductor package after the sealing resin layer 8 is formed, in which the wafer is viewed diagonally from the side thereof. The step of forming the sealing resin layer 8 can be carried out, for example, by making the sealing resin layer 8 of a photosensitive resin, such as a photosensitive polyimide resin, and then patterning this layer by photolithography. However, this method is not restrictive.

Next, for example, a solder bump 11 is formed on the surface of the post 7. Examples of the method for forming the solder bump 11 include plating, printing and metal jetting methods, and a method of putting a solder ball on the surface. 
It is important for uniform dispersion of stress that the center of the solder bump 11 and that of the resin projection portion 4 are consistent with each other as viewed in plan (that is, from above the wafer). In other words, it is important that the center position of the solder bump 11, which is round as viewed in plan, and the center position of the round resin projection portion 4 are consistent with each other.

The post 7 of the semiconductor package produced in this manner has a shape as shown in Fig. 2 (c) and Fig. 4. That is, the seed layer 5 and the 20-µm Cu plating layer 6 are formed on the upper surface and the side surface of the resin projection portion 4, which has a trapezoidal section and whose height is, for example, 30 µm, so as to cover the projection portion 4. Thus, a post having a height of 50 µm as a whole is formed. Therefore, in the case that the wafer is mounted on a printed circuit board and stress is generated, the stress is uniformly dispersed by the flexible resin projection portion 4 so that strain given to the wafer is relieved. The seed layer 5 and the Cu plating layer 6 also function as a rerouting layer between the solder bump and the Al pad 2. This rerouting layer corresponds to the above-mentioned rerouting path.

As described above, according to the present semiconductor package and method for producing the same, it is possible to keep electric conductivity and disperse the stress uniformly even if there is no plating layer having a thickness as large as 100 µm. Accordingly, the package can be produced in a short time by the simplification of the plating step, and the costs for producing it can be reduced. Since the height of the post 7 can be controlled by the height of the projection portion 4, the adjustment thereof can be attained only by the adjustment of a resin-swelling amount. This is easy.

The following will describe a first embodiment of the present invention. Fig. 
6 is a sectional view showing a semiconductor package produced according to the first embodiment of the present invention. In the first embodiment shown in Fig. 6, the same reference numbers are attached to the same constituents as in the semiconductor package shown in Fig. 3 (b), and detailed description thereof is omitted. The first embodiment is different from the semiconductor package of Fig. 3(b) in that no part of the upper surface of the post is coated with a sealing resin layer.

In the first embodiment, the Cu plating layer 6 is formed and the unnecessary seed layer 5 is removed in the same steps as in the above method for producing the semiconductor package of Fig. 3(b). Thereafter, as shown in Fig. 6, a sealing resin layer 8a for surface-protection is formed on the entire surface in such a manner that the surface of the post 7 is exposed and a groove is made between the sealing resin layer 8a and the post 7. In other words, the area of the round opening portion 10a in the sealing resin layer 8a is made larger than that of the round upper surface of the post 7. In the opening portion in the sealing resin layer 8a, the inside surface 10d is inclined inwards, that is, toward the side of the wafer; in short, the inside surface 10d falls in. A round groove that surrounds the post 7 is made around the post 7. This groove divides the post 7 from the sealing resin layer 8a. Fig. 7 is a view obtained by tracing a photograph showing a state after the sealing resin layer 8a is formed in the first embodiment. It will be understood that the ring-like groove is made to surround the conductive layer 6 that is bare on the post 7. Thereafter, the solder bump 11 is formed on the surface of the post 7 in the same way as in the above method for producing the semiconductor package of Fig. 3(b). Examples of the depth of the groove vary. 
As shown, there are various modified examples, for example, a groove which is cut off at the upper portion of the post 7 and has a shallow depth, and a groove which is cut off at the lower portion thereof.

In the case that the semiconductor package produced according to the first embodiment as described above is mounted on a printed circuit board and stress is generated, the stress is dispersed by the projection portion 4 in the post 7. Particularly in the first embodiment, since the side of the post 7 is not completely covered with the sealing resin layer 8a and no sealing resin layer 8a is present above the post 7, the circumference of the post 7 is not fixed by the sealing resin layer 8a. Thus, in the first embodiment the post 7 deforms more easily than in the semiconductor package of Fig. 3(b). Namely, the resin projection portion constituting the post 7 deforms easily. For this reason, the effect of the stress-dispersion is still higher. The seed layer 5 and the Cu plating layer 6 also function as a rerouting layer between the solder bump and the Al pad 2.

The step of forming the sealing resin layer 8a may be a step of forming a resin layer for covering the Cu plating layer 6 and then subjecting the resin layer to surface-polishing until the Cu plating layer 6 is exposed.

The following will describe a second embodiment. Fig. 8 is a sectional view showing a semiconductor package produced according to the second embodiment of the present invention. In the second embodiment shown in Fig. 8, the same reference numbers are attached to the same constituents as in the semiconductor package shown in Fig. 3 (b), and detailed description thereof is omitted.

In the second embodiment, the Cu plating layer 6 is formed and the unnecessary seed layer 5 is removed in the same steps as in the method for producing the semiconductor package of Fig. 3(b). Thereafter, as shown in Fig. 
8, a sealing resin layer 8b for surface-protection is formed in regions except the upper surface of the post 7 and the upper part of the side surface of the post 7. In this case, therefore, an opening portion 10b in the sealing resin layer 8b has a larger area than the area of the upper surface of the post 7. Subsequently, the solder bump 11 is formed on the surface of the post 7 in the same way as in the method for producing the semiconductor package of Fig. 3(b).

The upper surface 8d of the sealing resin layer 8b at the position apart from the post 7 is lower than the upper surface of the post 7. An inner edge 7a of the opening portion 10b in the sealing resin layer 8b surrounds the periphery of the post 7. The inner edge 7a extends up the side surface of the post 7 to make a thin layer around the post. A tip 10c of this inner edge 7a is slightly lower than the upper surface of the post 7. Namely, the periphery of the post 7, or a part thereof, is coated with the sealing resin layer 8b. The sealing resin layer 8b is formed to have such a thickness that the surface 8d apart from the post 7 is lower than the upper surface of the post 7. The tip 10c of the inner edge 7a may be consistent with the upper surface of the post 7.

In the post 7 of the semiconductor package produced according to the second embodiment as described above, the side surface of the post 7 is not completely covered with the sealing resin layer 8b. Since the sealing resin layer 8b is not present particularly around the circumference of the upper part of the post 7, the post 7 deforms easily in the same way as in the first embodiment. Therefore, the effect of stress-dispersion becomes still stronger, as compared with the semiconductor package of Fig. 3(b). The thickness of the sealing resin layer 8b (that is, of the inner edge 7a of the opening portion 10b) around the post 7 may become gradually thinner toward the upper side (not particularly shown). 
Since the upper surface of the Cu plating layer 6 is completely bare from the sealing resin layer 8b, the reliability of both ensuring electric conduction and mechanical connection is still higher.

The raw material of the resin projection portion made inside the post is not limited to a polyimide, epoxy, silicone resin or the like. Any material that makes it possible to disperse the stress can be used. The conductive layer in the post 7 does not necessarily coat the whole of the inside resin projection portion. It is sufficient that the conductive layer coats the resin projection portion at least above the region where the solder bump is formed. In the above-mentioned embodiments, the post 7 and the electrode 2 are connected to each other through the conductive layer 6. However, in order to make uniform the stress distribution over the whole surface of the wafer, which is to be connected to a circuit board, posts 7 that are not connected to the electrode 2 may be dispersed and arranged on the wafer.

The semiconductor package produced in these embodiments is afterward integrated into, for example, an electronic device by connecting the solder bump to a circuit board. The electronic device is an apparatus obtained by combining this circuit board with a peripheral device or the like, and is, for example, a mobile phone or a personal computer.

As the insulating layer 3, there can be used a resin other than the resins in the respective embodiments, or an insulating material other than resins. The positional relationship between the electrode and the resin projection portion is not limited to that in these embodiments. As the wafer, there can be used, for example, a compound semiconductor wafer made of GaAs, GaP, or the like, besides a Si wafer.

Industrial Applicability

As described in detail, according to the present invention, since the post is provided with the resin projection portion coated with the conductive layer, the stress generated in the 
post can be dispersed mainly by the resin projection portion. Therefore, it is possible to make unnecessary a thick plating layer which has been hitherto required for the post and to shorten the production process. Moreover, the height of the post can be controlled by the height of the resin projection portion. Thus, the control thereof is easy.
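The post stack-up described above (a resin projection of, for example, 30 µm covered by a thin seed layer and a 20-µm Cu plating layer, giving a post of about 50 µm overall) can be checked with a short sketch. The layer thicknesses below are the example values quoted in the description, except the seed-layer thickness, which is an assumed figure for illustration:

```python
# Illustrative stack-up check using the example thicknesses from the
# description; the seed-layer value is an assumption, not from the text.
resin_projection_um = 30.0   # trapezoidal resin projection portion 4
seed_layer_um = 0.3          # thin sputtered Cr/Cu or Ti/Cu seed (assumed)
cu_plating_um = 20.0         # electrolytic Cu plating layer 6

post_height_um = resin_projection_um + seed_layer_um + cu_plating_um
# The description quotes a post height of about 50 µm as a whole.
assert abs(post_height_um - 50.0) < 1.0

# The post height is controlled simply by the resin projection height:
# raising the projection by 10 µm raises the post by the same amount.
taller_post_um = (resin_projection_um + 10.0) + seed_layer_um + cu_plating_um
assert abs(taller_post_um - post_height_um - 10.0) < 1e-9
```

This illustrates the point made in the description: no thick plating is needed, and the post height is adjusted through the resin projection alone.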
A multigate transistor device such as a fin-shaped field effect transistor (FinFET) is fabricated by applying a self-aligned diffusion break (SADB) mask having an opening positioned to expose an area of at least one portion of at least one gate stripe designated as at least one tie-off gate in the multigate transistor device and removing the tie-off gate through the opening of the SADB mask to isolate transistors adjacent to the tie-off gate.
CLAIMS

WHAT IS CLAIMED IS:

1. A method of making an integrated circuit, comprising:
applying a self-aligned diffusion break (SADB) mask to a multigate transistor device comprising a plurality of transistors, the SADB mask having an opening positioned to expose an area over at least one portion of at least one gate stripe designated as at least one tie-off gate, said at least one gate stripe disposed across at least one oxide diffusion (OD) stripe of the multigate transistor device; and
removing said at least one tie-off gate through the opening of the SADB mask to isolate transistors adjacent to said at least one tie-off gate.

2. The method of claim 1, wherein the transistors comprise a plurality of multigate field effect transistors (FETs).

3. The method of claim 2, wherein the multigate FETs comprise a plurality of fin-shaped field effect transistors (FinFETs).

4. The method of claim 1, wherein said at least one OD stripe comprises at least one continuous OD stripe before the step of removing said at least one tie-off gate.

5. The method of claim 1, further comprising a plurality of OD stripes substantially in parallel to one another.

6. The method of claim 1, further comprising a plurality of gate stripes substantially in parallel to one another.

7. The method of claim 1, wherein said at least one gate stripe is substantially perpendicular to said at least one OD stripe.

8. The method of claim 1, wherein the step of removing said at least one tie-off gate comprises etching said at least one tie-off gate through the opening of the SADB mask.

9. The method of claim 1, further comprising removing at least one portion of said at least one OD stripe underneath said at least one tie-off gate through the opening of the SADB mask.

10. The method of claim 9, further comprising filling said removed at least one tie-off gate and said removed at least one portion of said at least one OD stripe underneath said at least one tie-off gate with an insulating dielectric.

11. 
A method for making an integrated circuit, comprising the steps for:
applying a self-aligned diffusion break (SADB) mask to a multigate transistor device comprising a plurality of transistors, the SADB mask having an opening positioned to expose an area over at least one portion of at least one gate stripe designated as at least one tie-off gate, said at least one gate stripe disposed across at least one oxide diffusion (OD) stripe of the multigate transistor device; and
removing said at least one tie-off gate through the opening of the SADB mask to isolate transistors adjacent to said at least one tie-off gate.

12. The method of claim 11, wherein the transistors comprise a plurality of multigate field effect transistors (FETs).

13. The method of claim 12, wherein the multigate FETs comprise a plurality of fin-shaped field effect transistors (FinFETs).

14. The method of claim 11, wherein said at least one OD stripe comprises at least one continuous OD stripe before the step of removing said at least one tie-off gate.

15. The method of claim 11, further comprising a plurality of OD stripes substantially in parallel to one another.

16. The method of claim 11, further comprising a plurality of gate stripes substantially in parallel to one another.

17. The method of claim 11, wherein said at least one gate stripe is substantially perpendicular to said at least one OD stripe.

18. The method of claim 11, wherein the step for removing said at least one tie-off gate comprises the step for etching said at least one tie-off gate through the opening of the SADB mask.

19. The method of claim 11, further comprising the step for removing at least one portion of said at least one OD stripe underneath said at least one tie-off gate through the opening of the SADB mask.

20. 
The method of claim 19, further comprising the step for filling said removed at least one tie-off gate and said removed at least one portion of said at least one OD stripe underneath said at least one tie-off gate with an insulating dielectric.

21. An integrated circuit device, comprising:
a plurality of gate stripes;
a plurality of oxide diffusion (OD) stripes disposed across the gate stripes, wherein at least one portion of at least one of the gate stripes and at least one portion of at least one of the OD stripes are removed to form at least one void; and
an insulating dielectric in said at least one void to isolate transistors adjacent to said at least one void.

22. The device of claim 21, wherein the gate stripes are substantially parallel to one another, and wherein the OD stripes are substantially parallel to one another.

23. The device of claim 21, wherein the OD stripes are substantially perpendicular to the gate stripes.

24. The device of claim 21, wherein the integrated circuit device comprises a plurality of field effect transistors (FETs).

25. The device of claim 24, wherein the FETs comprise a plurality of fin-shaped field effect transistors (FinFETs).

26. A multigate transistor device, comprising:
a plurality of gate stripes;
a plurality of oxide diffusion (OD) stripes disposed across the gate stripes, wherein at least one portion of at least one of the gate stripes and at least one portion of at least one of the OD stripes are removed to form at least one void; and
an insulating dielectric deposited in said at least one void to isolate transistors adjacent to said at least one void.

27. The device of claim 26, wherein the gate stripes are substantially parallel to one another, and wherein the OD stripes are substantially parallel to one another.

28. The device of claim 26, wherein the OD stripes are substantially perpendicular to the gate stripes.

29. 
The device of claim 26, wherein the multigate transistor device comprises a plurality of field effect transistors (FETs).

30. The device of claim 29, wherein the FETs comprise a plurality of fin-shaped field effect transistors (FinFETs).
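The isolation flow recited in the claims above (removing a designated tie-off gate, and the OD portion beneath it, then refilling with an insulating dielectric) can be pictured with a toy model. The sketch below is purely illustrative and not part of the claimed process: it represents one OD stripe as a chain of gate-stripe crossovers and shows that removing the tie-off gate splits the chain into two separate groups of transistors:

```python
# Toy model: one continuous OD stripe crossed by five gate stripes.
# Adjacent crossovers share diffusion, so they stay connected unless
# the gate (and the OD beneath it) between them is removed.
gates = [0, 1, 2, 3, 4]        # gate stripe indices along the OD stripe
tie_off = 2                    # gate designated as the tie-off gate

def connected_groups(gates, removed):
    """Split the OD stripe into groups of crossovers that remain
    connected after the removed gate and the OD portion beneath it
    are etched away and refilled with insulating dielectric."""
    groups, current = [], []
    for g in gates:
        if g == removed:
            if current:
                groups.append(current)
            current = []
        else:
            current.append(g)
    if current:
        groups.append(current)
    return groups

groups = connected_groups(gates, tie_off)
# Crossovers on either side of the tie-off gate are now isolated.
assert groups == [[0, 1], [3, 4]]
assert not any(1 in grp and 3 in grp for grp in groups)
```

The model captures only the connectivity argument, not the physical etch and fill steps, which the detailed description covers below.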
MULTIGATE TRANSISTOR DEVICE AND METHOD OF ISOLATING ADJACENT TRANSISTORS IN MULTIGATE TRANSISTOR DEVICE USING SELF-ALIGNED DIFFUSION BREAK (SADB)

Field of Disclosure

[0001] Various embodiments described herein relate to fabrication of semiconductor devices, and more particularly, to fabrication of multigate transistor devices such as fin-shaped field effect transistor (FinFET) devices.

Background

[0002] Multigate transistors have been implemented in integrated circuit chips for area efficiency. Examples of multigate transistors include fin-shaped field effect transistors (FinFETs) having multiple fins disposed on two sides of a gate stripe, with fins on one side of the gate stripe serving as sources and fins on the other side of the gate stripe serving as drains of the FinFETs. Examples of typical FinFET devices include devices in which transistor arrays are formed by multiple gate stripes in parallel with one another, which are positioned perpendicular to multiple oxide diffusion (OD) stripes in parallel with one another. The OD stripes are positioned like fins on two sides of each gate stripe. Each pair of source and drain and a portion of the gate stripe between such pair of source and drain may be implemented as an individual transistor. Adjacent transistors may need to be isolated in order for a pair of source and drain and the associated portion of the gate stripe to serve as an individual transistor.

[0003] Various conventional techniques have been devised for isolating adjacent transistors in FinFET layouts, including, for example, techniques using a single OD break, a double OD break, or continuous OD. With either a single or double OD break, a break in an OD stripe is created during the OD masking step. A double OD break is a larger break than a single OD break for better isolation but sacrifices a column (or row) of gates in comparison to a single OD break. Alignment of OD breaks may be difficult with either single or double OD break in practice. 
In continuous OD, no OD break is created, but a gate that is selected for "tie-off" to isolate two adjacent transistors is driven to a low voltage or turned off to mitigate leakage across the adjacent transistors. In practice, some leakage may still exist with continuous OD because there is no physical break between the transistors.

SUMMARY

[0004] Exemplary embodiments are directed to an integrated circuit device, such as a device comprising multigate transistors or fin-shaped field effect transistors (FinFETs), and a method of fabricating the same, using a self-aligned diffusion break (SADB) mask.

[0005] In an embodiment, a method of making an integrated circuit is provided, the method comprising: applying a self-aligned diffusion break (SADB) mask to a multigate transistor device comprising a plurality of transistors, the SADB mask having an opening positioned to expose an area over at least one portion of at least one gate stripe designated as at least one tie-off gate, said at least one gate stripe disposed across at least one oxide diffusion (OD) stripe of the multigate transistor device; and removing said at least one tie-off gate through the opening of the SADB mask to isolate transistors adjacent to said at least one tie-off gate.

[0006] In another embodiment, a method for making an integrated circuit is provided, the method comprising the steps for: applying a self-aligned diffusion break (SADB) mask to a multigate transistor device comprising a plurality of transistors, the SADB mask having an opening positioned to expose an area over at least one portion of at least one gate stripe designated as at least one tie-off gate, said at least one gate stripe disposed across at least one oxide diffusion (OD) stripe of the multigate transistor device; and removing said at least one tie-off gate through the opening of the SADB mask to isolate transistors adjacent to said at least one tie-off gate.

[0007] In another embodiment, an integrated circuit device is provided, 
the device comprising: a plurality of gate stripes; a plurality of oxide diffusion (OD) stripes disposed across the gate stripes, wherein at least one portion of at least one of the gate stripes and at least one portion of at least one of the OD stripes are removed to form at least one void; and an insulating dielectric in said at least one void to isolate transistors adjacent to said at least one void.

[0008] In yet another embodiment, a multigate transistor device is provided, the device comprising: a plurality of gate stripes; a plurality of oxide diffusion (OD) stripes disposed across the gate stripes, wherein at least one portion of at least one of the gate stripes and at least one portion of at least one of the OD stripes are removed to form at least one void; and an insulating dielectric deposited in said at least one void to isolate transistors adjacent to said at least one void.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The accompanying drawings are presented to aid in the description of embodiments and are provided solely for illustration of the embodiments and not limitations thereof.

[0010] FIG. 1 is a simplified perspective view of an embodiment of a portion of a fin-shaped field effect transistor (FinFET) device.

[0011] FIG. 2 is a simplified top plan view of an embodiment of a portion of a FinFET device with a plurality of gate stripes and a plurality of oxide diffusion (OD) stripes before any portions of the gate stripes and any portions of OD stripes are removed to isolate adjacent transistors in the FinFET device.

[0012] FIG. 3 is a simplified top plan view of the portion of the FinFET device of FIG. 2, with a self-aligned diffusion break (SADB) mask having an opening aligned for the removal of three tie-off gates.

[0013] FIG. 4 is a sectional view of the FinFET device taken along sectional lines 300a-300b in the top plan view of FIG. 3, showing the SADB mask with an opening over one of the tie-off gates.

[0014] FIG. 
5 is a sectional view of the FinFET device of FIG. 4 after the removal of the gate region of a tie-off gate.

[0015] FIG. 6 is a sectional view of the FinFET device of FIG. 5 showing a void after the removal of a portion of the OD stripe underneath the tie-off gate.

[0016] FIG. 7 is a sectional view of the FinFET device of FIG. 6 after the void created by the removal of the OD stripe underneath the tie-off gate is filled with an insulating dielectric.

[0017] FIG. 8 is a flowchart illustrating an embodiment of a method for fabricating an integrated circuit device.

DETAILED DESCRIPTION

[0018] Aspects of the disclosure are described in the following description and related drawings directed to specific embodiments. Alternate embodiments may be devised without departing from the scope of the disclosure. Additionally, well known elements will not be described in detail or will be omitted so as not to obscure the relevant details of the disclosure.

[0019] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments" does not require that all embodiments include the discussed feature, advantage or mode of operation.

[0020] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments. As used herein, the singular forms "a," "an," and "the," are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof. 
Moreover, it is understood that the word "or" has the same meaning as the Boolean operator "OR," that is, it encompasses the possibilities of "either" and "both" and is not limited to "exclusive or" ("XOR"), unless expressly stated otherwise.

[0021] FIG. 1 is a perspective view of an embodiment of a portion of a fin-shaped field effect transistor (FinFET) device to which an embodiment of a method for isolating adjacent transistors using a self-aligned diffusion break (SADB) mask for tie-off gate and diffusion etching is applicable. Although embodiments of the method are described with respect to fin-shaped field effect transistor (FinFET) devices, the method is also applicable to semiconductor devices of other layouts, for example, other types of multigate transistor devices, including devices with planar field effect transistor (FET) layouts, without departing from the scope of the disclosure. In the perspective view shown in FIG. 1, the FinFET comprises an elongate gate stripe 102 and a plurality of oxide diffusion (OD) stripes 104, 106 and 108 disposed across the gate stripe 102.

[0022] In an embodiment, the OD stripes 104, 106 and 108 extend from both sides of the gate 102 to serve as sources and drains of a multigate transistor device. For example, the OD stripes 104, 106 and 108 may comprise segments 104a, 106a and 108a on one side of the gate 102, serving as sources of the multigate transistor device, and segments 104b, 106b and 108b on the other side of the gate 102, serving as drains of the multigate transistor device, respectively. Thus, the OD stripes 104, 106 and 108 are arranged in the form of "fins" on both sides of the gate 102. In the FinFET device shown in FIG. 1, the top of the gate stripe 102 is above the top of the OD stripes 104, 106 and 108. 
Although the OD stripes 104, 106 and 108 are shown as being substantially parallel to one another and substantially perpendicular to the gate stripe 102, the OD stripes 104, 106 and 108 need not be strictly in parallel with one another, and they need not be strictly perpendicular to the gate stripe 102. Moreover, although the perspective view of FIG. 1 shows only one gate stripe 102 for simplicity of illustration, multiple gate stripes may be implemented in an integrated circuit device, such as the one shown in the simplified top plan view of FIG. 2 described below. Furthermore, in the embodiment shown in FIG. 1, the FinFET device also comprises a substrate 110, which may comprise a silicon substrate, and an oxide layer 112, which may be fabricated in conventional manners known to persons skilled in the art. Other layers of materials or dopants may also be provided in the FinFET in conventional manners known to persons skilled in the art.[0023] FIG. 2 is a simplified top plan view of an embodiment of a portion of a FinFET device with a plurality of gate stripes and a plurality of oxide diffusion (OD) stripes before any portions of the gate stripes and any portions of OD stripes are removed to isolate adjacent transistors of the FinFET device. In FIG. 2, three OD stripes 202, 204 and 206 are positioned across five gate stripes 208, 210, 212, 214 and 216. In the top plan view of the FinFET device as shown in FIG. 2, the gate stripes 208, 210, 212, 214 and 216 cross over and above the OD stripes 202, 204 and 206. Although the OD stripes 202, 204 and 206 are shown as being parallel to one another, the gate stripes 208, 210, 212, 214 and 216 are shown as being parallel to one another, and the OD stripes 202, 204 and 206 are shown as being perpendicular to the gate stripes 208, 210, 212, 214 and 216 in a grid configuration in FIG. 
2, the gate stripes need not be strictly in parallel with one another, the OD stripes need not be strictly in parallel with one another, and the gate stripes need not be strictly perpendicular to the OD stripes in other embodiments.[0024] FIG. 3 is a simplified top plan view of the portion of the FinFET device of FIG. 2, with a self-aligned diffusion break (SADB) mask 302 covering at least the portion of the FinFET device as shown in FIG. 2. In an embodiment, the SADB mask 302 has an opening 304 defined by edges 306a, 306b, 306c and 306d. In the embodiment shown in FIG. 3, the opening 304 of the SADB mask 302 is aligned for the removal of three tie-off gates 308, 310 and 312 formed by crossovers of the gate stripe 212 with the OD stripes 202, 204 and 206, respectively. In an embodiment, a tie-off gate is selected for removal in order to isolate two adjacent transistors in an array of transistors in a multigate transistor device. For example, in FIG. 3, the tie-off gate 308 is designated for removal to isolate two adjacent transistors 314 and 316, which are formed by crossovers of the OD stripe 202 with the gate stripes 210 and 214, respectively. In a similar manner, the tie-off gate 310 is designated for removal to isolate two adjacent transistors 318 and 320, which are formed by crossovers of the OD stripe 204 with the gate stripes 210 and 214, respectively, whereas the tie-off gate 312 is designated for removal to isolate two adjacent transistors 322 and 324, which are formed by crossovers of the OD stripe 206 with the gate stripes 210 and 214, respectively. In an embodiment, before the tie-off gates 308, 310 and 312 are removed, the OD stripes 202, 204 and 206 are continuous OD stripes.[0025] In an embodiment, one or more edges of an opening of the SADB mask may be self-aligned to the exposed polysilicon gate regions of tie-off gates designated for removal. For example, in the embodiment shown in FIG. 
3, the opening 304 of the SADB mask 302 has a substantially rectangular shape with edges 306a, 306b, 306c and 306d, among which the long edges 306a and 306b are aligned substantially equidistantly to two sides of the gate stripe 212. The short edges 306c and 306d of the opening 304 of the SADB mask 302 may be determined by the number of tie-off gates to be exposed by the mask opening 304 and the distances between OD stripes 202, 204 and 206. Although the opening 304 of the SADB mask 302 is shown in FIG. 3 to expose three tie-off gates 308, 310 and 312, the mask opening may be planned to remove any number of tie-off gates selected for removal to isolate adjacent transistors that are intended to function as active circuit elements. Moreover, in other embodiments, the mask opening 304 need not be substantially rectangular in shape as shown in FIG. 3. Furthermore, for a large-scale integrated circuit device with a large array of transistors arranged in multiple columns and rows, the SADB mask 302 may have multiple openings over tie-off gates selected for removal anywhere in the transistor array. In an embodiment, the layout of openings in an SADB mask may be planned simply by using markers on the mask.[0026] FIG. 4 is a sectional view of the FinFET device taken along sectional lines 300a-300b in the top plan view of FIG. 3, showing the SADB mask 302 having an opening 304 defined by edges 306a and 306b over one of the tie-off gates 310, which is formed by the crossover of the gate stripe 212 with the OD stripe 204. Before any part of the FinFET device is removed or etched away through the opening 304 of the SADB mask 302, the tie-off gate 310 may be no different from the gates of other transistors, for example, adjacent transistors 318 and 320 formed by crossovers of the gate stripes 210 and 214 with the OD stripe 204, respectively. 
In an embodiment, an oxide layer 410 may be disposed on the OD stripe 204 and around the gate stripes 210, 212 and 214 such that the top surface 412 of the oxide layer 410 is flush with the top of the gate stripes 210, 212 and 214 to allow placement of the SADB mask 302 over the gate stripes. In an embodiment, the OD stripe 204 is disposed on a substrate 110, such as a silicon substrate, for example. Other materials or dopants may also be provided in conventional manners known to persons skilled in the art.[0027] FIG. 5 is a sectional view of the FinFET device of FIG. 4 after removing a portion of the gate material in the gate stripe 212 directly underneath the opening 304 of the SADB mask 302, that is, the gate region of the tie-off gate 310 as shown in FIG. 4. After the removal of the portion of the gate material in the gate stripe 212 which was previously the gate region of the tie-off gate 310, a void 502 is formed directly underneath the opening 304 of the SADB mask 302, as shown in FIG. 5. In an embodiment, the gate region of the tie-off gate 310 is removed by etching through the opening 304 of the SADB mask 302. In an embodiment, etching may be performed by using a conventional etching technique known to persons skilled in the art. For example, in embodiments in which the material of the gate stripe 212 comprises polysilicon, the polysilicon gate material underneath the opening 304 of the SADB mask 302 may be removed by a conventional etching process. In the embodiment shown in FIG. 5, the gate region of the tie-off gate 310 is etched to a depth such that the portion of the OD stripe 204 underneath what was previously the gate region of the tie-off gate 310 is exposed through the void 502 and the opening 304 of the SADB mask 302.[0028] FIG. 6 is a sectional view of the FinFET device of FIG. 
5 after further removing the portion of the OD stripe 204 underneath what was previously the gate region of the tie-off gate 310, directly underneath the opening 304 of the SADB mask 302 as previously shown in FIG. 4. After the removal of the portion of the OD stripe 204 underneath what was previously the gate region of the tie-off gate 310, a deeper void 602 is formed under the opening 304 of the SADB mask 302, as shown in FIG. 6. In an embodiment, the portion of the OD stripe 204 underneath the tie-off gate 310 may be removed by using a conventional etching process for removing an OD material known to persons skilled in the art, after the polysilicon gate region 402 of the tie-off gate 310 is removed in an earlier etching process. In another embodiment, the gate region of the tie-off gate and the portion of the OD stripe underneath the gate region of the tie-off gate may be removed in a single step without departing from the scope of the disclosure.[0029] In the embodiment of the top plan view shown in FIG. 3, the opening 304 of the SADB mask 302 is of a substantially rectangular elongate shape over three tie-off gates 308, 310 and 312 formed by crossovers of the gate stripe 212 with three OD stripes 202, 204 and 206, respectively. In such an embodiment, the void 602 as shown in the sectional view of FIG. 6 after the exposed gate regions of the tie-off gates 308, 310 and 312 along the gate stripe 212 as well as portions of the OD stripes 202, 204 and 206 underneath the opening 304 of the SADB mask 302 are removed would be an elongate trench as viewed through the mask opening 304 in the top plan view of FIG. 3. In other embodiments, one or more openings may be provided in the SADB mask and aligned with one or more gates designated as tie-off gates to be removed, and each opening of the SADB mask need not be rectangular in shape as long as it is aligned with one or more tie-off gates selected for removal.[0030] FIG. 
7 is a sectional view of the FinFET device of FIG. 6 after the void 602 created by the removal of the tie-off gate and the portion of the OD stripe underneath the tie-off gate is filled with an insulating dielectric 702. In an embodiment, the polysilicon gate region 402 of the tie-off gate 310 and the portion 404 of the OD stripe 204 underneath the polysilicon gate region 402 of the tie-off gate 310 as shown in FIG. 4 are completely removed to provide good electrical isolation between transistors 318 and 320 adjacent to the insulating dielectric 702, that is, to prevent leakage currents between the transistors 318 and 320.[0031] In an embodiment, the SADB mask 302 may be removed before the insulating dielectric 702 fills the void 602 created by the removal of the tie-off gate 310 and the portion of the OD stripe underneath it. In an embodiment, the insulating dielectric 702 may be filled to slightly above the level of the top surface 412 of the oxide layer 410. In a further embodiment, after the insulating dielectric 702 fills the void 602, a gentle chemical mechanical planarization (CMP) process may be performed to smooth the top surface 412 of the oxide layer 410 and the insulating dielectric 702 which has filled the void created by the removal of the tie-off gate. In yet a further embodiment, a metal gate process may be performed in a conventional manner to provide gate electrodes by replacing the polysilicon gates of transistors with metal, serving as circuit elements in the integrated circuit device, that is, transistors not removed by the SADB masking and removal processes described above.[0032] FIG. 8 is a flowchart illustrating an embodiment of a method for fabricating an integrated circuit device. In FIG. 8, a self-aligned diffusion break (SADB) mask is placed over a multigate transistor device, such as a FinFET device, as shown in step 802. 
In an embodiment, the SADB mask has an opening to expose an area over one or more portions of one or more gate stripes designated as one or more tie-off gates, respectively. In an embodiment, the gate stripes are disposed across one or more oxide diffusion (OD) stripes of the multigate transistor device. In step 804, the tie-off gates are removed through the opening of the SADB mask to isolate transistors adjacent to the tie-off gates. In step 806, one or more portions of one or more OD stripes underneath one or more tie-off gates designated for removal are also removed through the opening of the SADB mask to create a void. In step 808, the void created by removing the tie-off gates and portions of OD stripes underneath the tie-off gates is filled with an oxide to provide electrical isolation, that is, to prevent leakage current flow between transistors adjacent to the oxide fill in place of the removed tie-off gates in the integrated circuit device. In a further embodiment, as described above with respect to FIG. 7, the top surface of the oxide fill may be made substantially even with the top surface of the gate stripes by a gentle chemical mechanical planarization (CMP) process. In yet a further embodiment, a metal gate process may be performed to provide metal gate electrodes to replace the polysilicon gates of transistors not removed by the SADB masking and removal process.[0033] While the foregoing disclosure describes illustrative embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the appended claims. The functions, steps or actions in the method and apparatus claims in accordance with the embodiments described herein need not be performed in any particular order unless explicitly stated otherwise. Furthermore, although elements may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
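The step ordering of FIG. 8 (mask placement, gate removal, OD removal, oxide fill) can be sketched as a simple process model. The function name, the log strings, and the (gate stripe, OD stripe) crossover pairs below are illustrative assumptions made for this sketch only; they are not part of the disclosed fabrication method.

```python
# Illustrative model of the SADB isolation flow of FIG. 8 (steps 802-808).
# Hypothetical data structures; not an actual fabrication recipe.

def sadb_isolation_flow(tie_off_gates):
    """Return an ordered process log for isolating transistors adjacent
    to the given tie-off gates, each identified by a hypothetical
    (gate_stripe, od_stripe) crossover pair such as (212, 204)."""
    log = ["802: place SADB mask with opening over tie-off gates"]
    for gate, od in tie_off_gates:
        # step 804: remove the exposed gate regions through the mask opening
        log.append(f"804: etch gate region of tie-off gate at ({gate}, {od})")
    for gate, od in tie_off_gates:
        # step 806: remove the OD stripe portions underneath the removed gates
        log.append(f"806: etch OD stripe {od} underneath removed gate region")
    # step 808: fill the resulting void to electrically isolate neighbors
    log.append("808: fill void with insulating oxide (then optional CMP)")
    return log

# The three tie-off gates of FIG. 3: gate stripe 212 over OD stripes 202/204/206.
steps = sadb_isolation_flow([(212, 202), (212, 204), (212, 206)])
for s in steps:
    print(s)
```

The model only encodes the ordering constraint stated in the text: the gate material is removed before (or together with) the OD material beneath it, and the oxide fill comes last.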
A method of providing a field effect transistor includes depositing a layer of a laser-reflective material on a substrate which has an active region and an inactive region; selectively removing portions of the deposited layer disposed over the active region; exposing the substrate to laser energy to activate dopants in the active region; and stripping the deposited layer.
What is claimed is: 1. A method of providing a field effect transistor comprising: depositing a layer of a material on a substrate, the substrate having an active region and an inactive region; selectively removing portions of the deposited layer disposed over the active region and leaving portions of the deposited layer disposed over the inactive region; exposing laser energy to activate dopants in the active region; and stripping the deposited layer. 2. The method of claim 1, wherein the step of selectively removing portions of the deposited layer comprises: patterning the deposited layer; and etching the deposited layer in active areas. 3. The method of claim 1, wherein the step of exposing laser energy to activate dopants in the active region comprises melting a source drain junction. 4. The method of claim 1, wherein the step of selectively removing portions of the deposited layer comprises a chemical etching process selective to the active areas. 5. The method of claim 1, wherein the step of depositing a layer of a material on a substrate comprises depositing a laser reflective material. 6. The method of claim 1, wherein the step of depositing a layer of a material on a substrate comprises depositing an aluminum layer having a thickness of approximately 2,000-5,000 Å. 7. The method of claim 1, wherein the step of selectively removing portions of the deposited layer disposed over the active region comprises providing a margin wherein portions of the inactive region are removed to insure that all the deposited layer is uncovered in the active region. 8. 
An integrated circuit being manufactured by a process comprising: (a) depositing a mask layer over a portion of an integrated circuit; (b) removing sections of the mask layer disposed over active areas in the portion of the integrated circuit and leaving sections of the mask layer disposed over inactive areas in the portion of the integrated circuit; (c) introducing the portion of the integrated circuit to laser energy, the laser energy being reflected by remaining sections of the mask layer and the laser energy annealing active areas not covered by the mask layer; and (d) removing remaining sections of the mask layer. 9. The integrated circuit manufactured by the process of claim 8, wherein depositing a mask layer over a portion of an integrated circuit comprises depositing an aluminum layer having a thickness of approximately 2,000-5,000 Å. 10. The integrated circuit manufactured by the process of claim 8, wherein depositing a mask layer over a portion of an integrated circuit comprises depositing a material which has a high reflectivity for laser light. 11. The integrated circuit manufactured by the process of claim 8, further comprising forming shallow trench isolation (STI) structures in the portion of the integrated circuit. 12. The integrated circuit manufactured by the process of claim 8, further comprising (e) removing, simultaneously with step (b), a section of the mask layer disposed over inactive areas proximate active areas to insure that all active areas are uncovered by the mask layer. 13. 
A method of manufacturing an integrated circuit comprising: forming a portion of an integrated circuit including an active region and an inactive region on a semiconductor substrate, the active region comprising a gate stack, a source region, a drain region, a source extension, and a drain extension, the inactive region comprising a field oxide; depositing a masking layer over the portion of the integrated circuit; selectively removing sections of the masking layer over the active region and leaving sections of the masking layer over the inactive region; exposing the active region to laser energy, the laser energy melting a junction of the source region and source extension and a junction of the drain region and drain extension; and removing the remainder of the masking layer. 14. The method of claim 13, wherein the step of exposing the active region to laser energy does not overheat the field oxide in the inactive region. 15. The method of claim 13, wherein the step of exposing the active region to laser energy comprises melting a junction between the source region and the source extension and a junction between the drain region and the drain extension. 16. The method of claim 13, wherein the step of selectively removing sections of the masking layer over the active region comprises wet etching. 17. The method of claim 13, wherein the gate stack comprises a polysilicon structure and an oxide layer. 18. The method of claim 13, wherein the inactive region comprises a shallow trench isolation (STI) structure. 19. The method of claim 13, further comprising removing a section of the mask layer disposed over inactive areas proximate the active region to insure that the entire active region is uncovered by the mask layer. 20. 
The method of claim 13, wherein the step of depositing a masking layer over the portion of the integrated circuit comprises depositing a laser reflective material which reflects laser energy a sufficient amount so as to prevent overheating of the field oxide and damage to polysilicon lines adjacent to field oxide.
FIELD OF THE INVENTION The present invention relates generally to the field of integrated circuits and to methods of manufacturing integrated circuits. More particularly, the present invention relates to a method of selective laser annealing using highly reflective masks. BACKGROUND OF THE INVENTION Integrated circuits (ICs), such as, ultra-large scale integrated (ULSI) circuits, can include as many as one million transistors or more. The ULSI circuit can include complementary metal oxide semiconductor (CMOS) field effect transistors (FETs). The transistors can include semiconductor gates disposed between drain and source regions. The drain and source regions are typically heavily doped with a P-type dopant (boron) or an N-type dopant (phosphorous). The drain and source regions generally include a thin extension that is disposed partially underneath the gate to enhance the transistor performance. Shallow source and drain extensions help to achieve immunity to short-channel effects which degrade transistor performance for both N-channel and P-channel transistors. Short-channel effects can cause threshold voltage roll-off and drain-induced barrier-lowering. Thus, controlling short channel effects is important to assuring proper semiconductor operation. Conventional techniques utilize a double implant process to form shallow source and drain extensions. According to the conventional process, the source and drain extensions are formed by providing a transistor gate structure without sidewall spacers on a top surface of a silicon substrate. The silicon substrate is doped on both sides of the gate structure via a conventional doping process, such as, a diffusion process or ion implantation process. Without the sidewall spacers, the doping process introduces dopants into a thin region (i.e., just below the top surface of the substrate) to form the drain and source extensions as well as to partially form the drain and source regions. 
After the drain and source extensions are formed, silicon dioxide spacers, which abut lateral sides of the gate structure, are provided over the source and drain extensions. The substrate is doped a second time to form the deeper source and drain regions. The source and drain extensions are not further doped due to the blocking capability of the silicon dioxide spacers. As transistors disposed on integrated circuits (ICs) become smaller, transistors with shallow and ultra-shallow source/drain extensions have become more difficult to manufacture. Manufacturing is more difficult because the vertical dimensions associated with the depths of source/drain junctions and the thin extensions to the source/drain junctions must be decreased in a ratio corresponding to the reduction in lateral dimension of the manufactured MOSFET. For example, smaller transistors should have ultra-shallow source and drain extensions (less than 30 or 40 nanometer (nm) junction depth). Forming source and drain extensions with junction depths of less than 30 nm is very difficult using conventional fabrication techniques. Conventional ion implantation, diffusion doping and activation techniques make transistors on the IC susceptible to a dopant profile tail distribution that extends deep into the substrate. Also, conventional ion implantation techniques have difficulty maintaining shallow source and drain extensions because point defects generated in the bulk semiconductor substrate during ion implantation can cause the dopant to more easily diffuse (transient enhanced diffusion, TED). The diffusion often extends the source and drain extension vertically into the bulk semiconductor substrate. As MOSFET scaling continues to be reduced, ultra-shallow and highly-activated junctions are essential for device performance. Source/Drain (S/D) extensions shallower than 30 nm are needed for sub-70 nm CMOS transistors. 
In addition, the transition from the S/D extensions to the channel region (laterally) must be as precipitous as possible. An aggressive scaling of the lateral abruptness of S/D extensions is critical for controlling short-channel effects in a sub-100 nm CMOS transistor. On the other hand, external resistances (S/D extension, contact, etc.) play a significant role in the device performance. Along with the aggressive scaling of S/D extension junction depth and abruptness, it may be desirable to form a more highly doped S/D extension, as devices become smaller. For example, a Super-Doped Extension (SDE), instead of the extension associated with conventional design of LDD (lightly doped drain) or HDD (highly doped drain), is desired as transistors become smaller. Dopant electrical activation in the SDE becomes a great challenge. Another result of the minimization of transistor critical dimensions is that the total thermal budget (Dt) of the drain and source regions and the semiconductor gate becomes more critical. In general, the thermal budget for dopant activation in the source/drain junction (including source/drain extension) should be as low as possible to provide good formation of an ultra-shallow junction. Fundamentally, reducing the thermal budget has several advantages including: (1) more accurate formation of ultra-shallow junctions; (2) formation of ultra-tight dopant profiles, such as, profiles for halo implants or retrograded channel implants; and (3) reduction of dopant penetration through the gate oxide and into the gate (e.g., Boron (B) in P-channel MOSFETs). Both shallow source and drain extensions and tight profile pocket regions help to improve the immunity of a transistor to short-channel effects. Taking advantage of the results attainable via a lower thermal budget, conventional processes have reduced thermal budgets for CMOS transistor fabrication by utilizing rapid thermal annealing (RTA) to heat the substrate. 
RTA does not require a significant period of time to heat the substrate. Another approach involves a spike RTA which increases the ramping rate of RTA. Nonetheless, the substrate must be exposed to the RTA for a time period of one second or more to appropriately diffuse and activate dopants. Conventional rapid thermal anneal processes, such as RTA, face the problems of undesired thermal diffusion and low electrical activation limited by solid solubility. One possible solution is to use a laser thermal process (LTP). LTP includes advantages such as: 1) "zero" thermal budget (a laser pulse is a few nanoseconds, approximately 8 orders of magnitude shorter than rapid thermal processes and the thermal diffusion is almost negligible); 2) metastable process above dopant solid solubility limit, allowing active dopant concentrations larger than 10^21 cm^-3 to be achieved; and 3) selective local heating of specific regions of silicon does not add thermal budget to Vth/channel/halo implant profiles. One of the major integration issues of LTP is that the poly-Si line above the field oxide (e.g., shallow trench isolation region) could over-melt due to poor thermal dissipation through the thick oxide. This over-melting can cause the polysilicon line to be deformed or disconnected. Thus, there is a need for a process which overcomes these problems, such as, over-melting due to poor thermal dissipation through thick oxide. Further, there is a need for a transistor fabrication method which avoids problems of thermal diffusion and low electrical activation occurring in conventional rapid thermal processes. Even further still, there is a need for a transistor which is manufactured by a selective laser anneal process which uses a highly reflective mask. SUMMARY OF THE INVENTION One embodiment of the invention relates to a method of providing a field effect transistor. 
The method includes depositing a layer of a material on a substrate which has an active region and an inactive region; selectively removing portions of the deposited layer disposed over the active region; exposing laser energy to activate dopants in the active region; and stripping the deposited layer. Briefly, another exemplary embodiment is related to an integrated circuit being manufactured by a process which includes depositing a mask layer over a portion of an integrated circuit; removing sections of the mask layer disposed over active areas in the portion of the integrated circuit; introducing the portion of the integrated circuit to laser energy; and removing remaining sections of the mask layer. The laser energy is reflected by remaining sections of the mask layer and the laser energy anneals active areas not covered by the mask layer. Briefly, another exemplary embodiment is related to a method of manufacturing an integrated circuit which includes forming a portion of an integrated circuit including an active region and an inactive region on a semiconductor substrate; depositing a masking layer over the portion of the integrated circuit; selectively removing sections of the masking layer over the active region; exposing the active region to laser energy; and removing the remainder of the masking layer. The active region includes a gate stack, a source region, a drain region, a source extension, and a drain extension. The inactive region includes a field oxide. The laser energy melts a junction of the source region and source extension and a junction of the drain region and drain extension. Other principal features and advantages of the present invention will become apparent to those skilled in the art upon review of the following drawings, the detailed description, and the appended claims. 
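The selective-masking rule summarized above can be sketched as a small model: areas still covered by the reflective mask are not annealed, while open active areas are. The region names, the `kind` labels, and the dictionary bookkeeping below are hypothetical constructs for this sketch, not terminology from the disclosure.

```python
# Minimal sketch of the selective laser-anneal method of the summary.
# Hypothetical region labels; only the mask/anneal rule is modeled.

def selective_laser_anneal(regions):
    """regions: list of (name, kind) pairs, kind being 'active' or 'inactive'.
    Deposit the mask everywhere, strip it over active regions, expose to
    laser, then strip the remainder; return the set of annealed names."""
    masked = {name: True for name, _ in regions}      # deposit mask layer everywhere
    for name, kind in regions:
        if kind == "active":
            masked[name] = False                      # selectively remove over active areas
    # laser energy is reflected by remaining mask sections, so only
    # uncovered regions are annealed
    annealed = {name for name, _ in regions if not masked[name]}
    # stripping the remaining mask afterwards does not change what was annealed
    return annealed

# Hypothetical layout loosely following FIG. 1's element numbering.
layout = [("source_18", "active"), ("drain_20", "active"),
          ("gate_16", "active"), ("sti_14a", "inactive"), ("sti_14b", "inactive")]
print(sorted(selective_laser_anneal(layout)))
```

The sketch encodes why the inactive field-oxide regions stay cool: they never lose their reflective cover during the exposure step.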
BRIEF DESCRIPTION OF THE DRAWINGS The exemplary embodiments will hereafter be described with reference to the accompanying drawings, wherein like numerals denote like elements, and: FIG. 1 is a cross-sectional view of a portion of an integrated circuit fabricated in accordance with an exemplary embodiment of the present invention; FIG. 2 is a cross-sectional view of a portion of the integrated circuit illustrated in FIG. 1, showing shallow trench isolation, gate stack, S/D extension and deep S/D contact junction formation steps; FIG. 3 is a cross-sectional view of a portion of the integrated circuit illustrated in FIG. 1, showing a deposited aluminum layer and a laser exposure step; and FIG. 4 is a top view of a portion of the integrated circuit illustrated in FIG. 3. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Referring to FIG. 1, a portion 10 of an integrated circuit (IC) or chip includes a substrate 12, isolation structures 14, a gate stack 16, a source region 18, a drain region 20, a source extension 22, a drain extension 24, and source/drain (S/D) junctions 26. Portion 10 is preferably part of an ultra-large-scale integrated (ULSI) circuit having millions or more transistors. Portion 10 is manufactured as part of the IC on a semiconductor wafer, such as, a silicon wafer. Substrate 12 is any of a variety of semiconductor materials, such as, silicon. Substrate 12 is preferably a P-type substrate. Isolation structures 14 are two field oxide or shallow trench isolation (STI) structures which provide electrical insulation for the elements there between. Gate stack 16 is any of a variety of conductive materials. In the exemplary embodiment, gate stack 16 is polysilicon disposed over a gate dielectric, such as thermally grown silicon dioxide. Gate stack 16 is aligned between active regions in substrate 12. 
Active regions are areas in portion 10 between the isolation structures 14 including impurities or dopants such as a p-type dopant (e.g., boron) or an n-type dopant (e.g., phosphorous). Source region 18 and drain region 20 are formed by ion implantation. Gate stack 16 is also doped during the same implantation. The dopant is later activated by thermal activation (e.g., furnace anneal). Source region 18 and drain region 20 are formed such that they have no overlap with gate stack 16. Advantageously, this arrangement reduces or prevents gate-to-drain or gate-to-source tunneling leakage. Source extension 22 is a shallower extension of source region 18. Drain extension 24 is a shallower extension of drain region 20. Preferably, source extension 22 and drain extension 24 extend at least partially below gate stack 16. Preferably, these extensions are 20-40 nm deep. Preferably, the source/drain regions 18 and 20 are 60-100 nm deep. Preferably, the concentration of dopants in the extensions is 5×10^20-5×10^21 cm^-3. Preferably, the width of each extension region is 30-50 nm. The method of forming portion 10 is described below with reference to FIGS. 1-5. The method advantageously forms portion 10 including isolation structures 14 which are not overheated during the laser anneal. In FIG. 2, a cross-sectional view of portion 10 illustrates portion 10 after a conventional CMOS fabrication process is followed to form isolation structures 14, gate stack 16, source region 18, drain region 20, source extension 22, and drain extension 24. Regions 18, 20, 22, and 24 are not yet annealed and dopants within regions 18, 20, 22, and 24 are not yet activated. For example, gate stack 16 can be formed in a CVD and selective etch process, and regions 18, 20, 22, and 24 can be initially formed in a double implant process using sidewall spacers. Regions 18, 20, 22, and 24 can also be formed by doping amorphous regions as described in U.S. patent application Ser. No. 
09/187,630, filed on Nov. 6, 1998, incorporated herein by reference. Preferably, regions 18, 20, 22, and 24 are doped in a low keV implantation process. In FIG. 3, portion 10 includes a mask layer 30 which is deposited and selectively etched to cover inactive regions, such as isolation structures 14, and expose active regions, such as gate stack 16, source region 18, and drain region 20. Mask layer 30 is any material which has a high reflectivity to laser energy. Preferably, mask layer 30 is an aluminum layer and has a thickness of 2,000-5,000 Å. Preferably, mask layer 30 is patterned by photolithography. As shown in FIG. 4, mask layer 30 is deposited and etched to have a margin or edge 32. Margin 32 is present because mask layer 30 cannot be perfectly deposited and etched to cover only the inactive region. Margin 32 exposes a very small portion of the inactive region, ensuring that all of the active region will be exposed after the deposition and etching of mask layer 30. Margin 32 is preferably 20-30 nm wide. After the laser annealing, mask layer 30 is stripped or removed by a suitable process, such as wet chemistry, and the conventional CMOS fabrication process is continued. Laser annealing is a thermal process which advantageously uses a laser pulse of only a few nanoseconds, which is approximately 8 orders of magnitude shorter than rapid thermal processes. Further, the thermal diffusion with laser annealing is almost negligible. Moreover, laser annealing is a metastable process above the dopant solid solubility limit, allowing active dopant concentrations larger than 10²¹ cm⁻³ to be achieved. Even further, laser annealing provides selective local heating of specific regions of silicon which does not add a thermal budget to Vth, channel, or halo implant profiles. Laser annealing provides full (100%) dopant activation. In an exemplary embodiment, the laser reflective material of mask layer 30 reflects 80% of laser energy.
In another embodiment, the laser reflective material of mask layer 30 reflects 90% of laser energy. Reflecting this proportion of the laser energy prevents overheating of the field oxide and damage to polysilicon lines adjacent to the field oxide. Advantageously, mask layer 30 covers the area on portion 10 where no laser anneal is needed to highly activate the dopant (i.e., the inactive areas). Mask layer 30 is removed in the locations needing exposure to the laser anneal. Specifically, S/D junctions 26 are in the open area in order to receive the needed laser annealing. Because of the high reflectivity of mask layer 30, the majority of the laser light is reflected from mask layer 30. Such selective use of mask layer 30 during the laser anneal process prevents the over-heating of oxides (e.g., isolation structures 14), which in turn prevents polysilicon lines such as conductor 16 from being destroyed where they cross over an isolation structure. While the embodiments illustrated in the FIGURES and described above are presently preferred, it should be understood that these embodiments are offered by way of example only. Other embodiments may include, for example, different techniques for selectively providing mask layer 30. The invention is not limited to a particular embodiment, but extends to various modifications, combinations, and permutations that nevertheless fall within the scope and spirit of the appended claims.
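The effect of the stated reflectivities can be illustrated with a trivial calculation (an illustrative sketch only; the function name is an assumption, and real absorption also depends on wavelength, film thickness, and surface condition):

```c
#include <math.h>

/* Illustrative arithmetic only: with the reflectivities stated in the
 * text (80% and 90%), the fraction of pulse energy absorbed in the
 * masked (inactive) regions is 20% and 10% respectively. */
double absorbed_fraction(double reflectivity)
{
    return 1.0 - reflectivity;
}
```

The 4x to 5x reduction in absorbed energy under the mask is what keeps the isolation structures from overheating while the exposed S/D junctions receive the full anneal.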
A memory management circuit includes a direct memory access (DMA) channel. The DMA channel includes logic configured to receive a first buffer of data to be written using DMA. The DMA channel further includes logic to perform bit manipulation in real time during a DMA write cycle of the first buffer of data.
1. A memory management circuit comprising a first direct memory access (DMA) channel, the first DMA channel including: logic configured to receive a first data buffer to be written using DMA; and a first circuit including logic configured to perform bit manipulation in real time during a DMA write cycle of the first data buffer.
2. The memory management circuit of claim 1, the first circuit further comprising logic configured to perform bit manipulation with a set function.
3. The memory management circuit of any one of claims 1-2, the first circuit further comprising logic configured to perform bit manipulation with a clear function.
4. The memory management circuit of any one of claims 1-3, the first circuit further comprising logic configured to perform bit manipulation with an invert function.
5. The memory management circuit of any one of claims 1-4, the first circuit further comprising logic configured to perform bit manipulation with a set function that takes priority over a clear function.
6. The memory management circuit of any one of claims 1-5, the first circuit further comprising logic configured to perform bit manipulation with a set function that takes priority over an invert function.
7. The memory management circuit of any one of claims 1-6, the first circuit further comprising logic configured to perform bit manipulation with a clear function that takes priority over an invert function.
8. The memory management circuit of any one of claims 1-7, further comprising a second DMA channel, the second DMA channel comprising: logic configured to receive a second data buffer to be written using DMA; and a second circuit including logic configured to perform bit manipulation in real time during a DMA write cycle of the second data buffer; wherein: the first data buffer includes a bit indicating that a first-in first-out (FIFO) shift register is full; the first circuit is configured to reset the bit indicating that the FIFO shift register is full during a rewrite of the byte including the bit, while masking the other bits of the byte; the first DMA channel further includes logic to send a trigger to the second DMA channel; the second DMA channel is further configured to load the contents of the FIFO shift register into the second buffer upon receiving the trigger; and the second circuit includes logic to write the second buffer to a target during another DMA write cycle.
9. The memory management circuit of any one of claims 1-7, further comprising a second DMA channel, the second DMA channel comprising: logic configured to receive a second data buffer to be written using DMA; and a second circuit including logic configured to perform bit manipulation in real time during a DMA write cycle of the second data buffer; wherein: the first data buffer includes mask information for the second circuit; the first circuit is configured to issue a trigger to the second DMA channel when loading data into the first buffer; and, upon receiving the trigger, the second circuit is configured to apply bit manipulation to the second buffer during a DMA write of the second buffer using the mask information from the first data buffer.
10. The memory management circuit of claim 9, wherein the second circuit is configured to use the mask information to write the second buffer back to the source of the second buffer using bit manipulation.
11. A method comprising the operation of the memory management circuit of any one of claims 1-10.
12. A microcontroller comprising: a processor; and the memory management circuit of any one of claims 1-10.
13. A memory management controller comprising: a memory interface; and the memory management circuit of any one of claims 1-10.
Bit-operable direct memory access

RELATED PATENT APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 62/576,966, filed on October 25, 2017, the contents of which are hereby incorporated by reference in their entirety.

TECHNICAL FIELD

The present disclosure relates to memory access, and more specifically to bit-operable direct memory access (DMA).

BACKGROUND

For memory transfer operations between different memories or memory components, a processor can use programmed input and output instructions to read, write, and set data. However, due to the latency of memory access, such instructions executed by the processor may be slow. Access to the memory may require a physical interface with the mechanical or electronic components of the memory. An instruction executed by the processor does not complete until the read or write is finished, so the processor must wait for the instruction to end. As noted above, the instruction may execute slowly due to memory latency. During this operation, the processor, or the processor thread assigned to the task, may be unable to perform other tasks.

DMA allows the processor to offload block reads and writes of data between memory locations. DMA can be implemented by a separate controller or circuit. The DMA controller may have an interface through which a processor or peripheral device of the system may call the DMA controller to read or write blocks of data. While the DMA controller is reading or writing blocks of data, the processor or peripheral device can perform other tasks. When the DMA controller has finished, it can issue an interrupt or other signal to the processor or peripheral device.

SUMMARY OF THE INVENTION

Embodiments of the present disclosure include memory management circuits. The memory management circuit may include a first DMA channel. The DMA channel may include logic configured to receive a first data buffer to be written using DMA.
The memory management circuit may include a first circuit including logic configured to perform bit operations in real time during a DMA write cycle of the first data buffer. In combination with any of the above embodiments, the first circuit may further include logic for performing bit operations with a set function. In combination with any of the above embodiments, the first circuit may further include logic for performing bit operations with a clear function. In combination with any of the above embodiments, the first circuit may further include logic for performing bit operations with an invert function. In combination with any of the above embodiments, the first circuit may further include logic for performing bit operations with a set function that takes priority over the clear function. In combination with any of the above embodiments, the first circuit may further include logic for performing bit operations with a set function that takes priority over the invert function. In combination with any of the above embodiments, the first circuit may further include logic for performing bit operations with a clear function that takes priority over the invert function.

In combination with any of the above embodiments, the memory management circuit may further include a second DMA channel. The second DMA channel may include logic configured to receive a second data buffer to be written using DMA, and a second circuit including logic configured to perform bit operations in real time during a DMA write cycle of the second data buffer. In combination with any of the above embodiments, the first data buffer may include a bit indicating that a first-in first-out (FIFO) shift register is full. In combination with any of the above embodiments, the first circuit may be configured to reset the bit that indicates the FIFO shift register is full during a rewrite of the byte including the bit, while masking the other bits of the byte.
In combination with any of the above embodiments, the first DMA channel may further include logic for sending a trigger to the second DMA channel. In combination with any of the above embodiments, the second DMA channel may also be configured to load the contents of the FIFO shift register into the second buffer when the trigger is received. In combination with any of the above embodiments, the second circuit may include logic for writing the second buffer to the target during another DMA write cycle.

In combination with any of the above embodiments, the memory management circuit may further include a second DMA channel. In combination with any of the above embodiments, the second DMA channel may include logic configured to receive a second data buffer to be written using DMA, and a second circuit including logic configured to perform bit operations in real time during a DMA write cycle of the second data buffer. In combination with any of the above embodiments, the first data buffer may include mask information for the second circuit. In combination with any of the above embodiments, the first circuit may be configured to issue a trigger to the second DMA channel when loading data into the first buffer. In combination with any of the above embodiments, when the trigger is received, the second circuit may be configured to apply bit manipulation to the second buffer during the DMA write of the second buffer, using the mask information from the first data buffer. In combination with any of the above embodiments, the second circuit may be configured to use the mask information to write the second buffer back to the source of the second buffer using bit operations.

Embodiments of the present disclosure include microcontrollers.
The microcontroller may include a processor and the memory management circuit of any of the above embodiments.

Embodiments of the present disclosure may include methods performed by any of the memory management circuits in the above-described embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an illustration of a system for bit operations in DMA according to an embodiment of the present disclosure.

FIG. 2 is a more detailed illustration of a mask circuit according to an embodiment of the present disclosure.

FIG. 3 is an illustration of an exemplary application of DMA for a first-in first-out application according to an embodiment of the present disclosure.

FIG. 4 is an illustration of core-independent bit banging according to an embodiment of the present disclosure.

DETAILED DESCRIPTION

FIG. 1 is an illustration of a system 100 for bit operations in DMA according to an embodiment of the present disclosure. The system 100 can be implemented in any suitable environment, such as a microcontroller, system on chip (SoC), computer, tablet, smartphone, server, printer, router, industrial automation controller, automotive electronic system, or any other suitable electronic device. The system 100 may include a DMA controller 104 configured to transfer memory from a data space 102 to another data space 106.

The DMA controller 104 may be implemented by analog circuits, digital circuits, or any suitable combination thereof. The DMA controller 104 may include a data buffer 110. In addition, the DMA controller may include a bit operation mask circuit 114.

The data space 102 and the data space 106 may include any suitable type of memory or other elements for storing data in the system 100. For example, the data space 102 may include a series of memory locations in a special function register (SFR) or static random access memory (SRAM) 108.
Similarly, the data space 106 may include an SFR or SRAM 108. The DMA 104 may be configured to transfer memory from the data space 102 to the data space 106 and from the data space 106 to the data space 102. In one embodiment, the data spaces 102, 106 may not be persistent storage. DMA 104 may perform such transfers on behalf of other parts of system 100, such as processor 116 or peripheral 118. In other cases, the DMA 104 may be used for on-chip data transfer within a multi-core implementation of the processor 116 or within the peripheral 118.

The processor 116 may be implemented by a single-core or multi-core processor, or a single-threaded or multi-threaded processor. The processor 116 may be implemented in the peripheral device 118, and the peripheral device 118 may include, for example, a digital signal processor, a printer, a disk drive controller, a graphics card, a network card, or a sound card.

By using DMA 104 instead of performing direct input and output to the data spaces 102, 106, the processor 116 and peripherals 118 may be able to transfer data blocks between data spaces 102, 106 without consuming processor cycles. In addition, by using the DMA 104 within a multi-core implementation of the processor 116, the processor 116 can transfer data to and from its local memory without occupying its processor time, thereby allowing computation and data transfer to proceed in parallel. The DMA 104 can be used to copy or move data from memory to memory. The DMA 104 may be used when the processor 116 or the peripheral device 118 cannot sustain the data transfer rate, or when the processor 116 or the peripheral device 118 needs to perform other tasks while waiting for relatively slow I/O data transfers. The processor 116 or the peripheral device 118 may offload expensive memory operations (such as large copy or scatter-gather operations) from the CPU to a dedicated DMA engine.

The DMA 104 transfers data between the data spaces 102, 106 in blocks.
Blocks are defined in terms of bytes or words, the smallest unit of data that will be transferred between the data spaces 102, 106. This minimum transfer size is the trade-off for the efficiency gained by transferring data via DMA. Other implementations of DMA do not allow the transfer of data smaller than the defined byte or word. In such implementations, the processor or peripheral must use direct input and output commands to transfer data between data spaces instead of using DMA, because bit-wise operations or bit manipulation cannot be performed on sub-word or sub-byte transfers. Alternatively, in other such implementations of DMA, the processor or peripheral device may apply a mask to DMA operations, but such an approach requires the processor or peripheral device to directly access or operate on the memory location, and therefore encounters the same latency and bandwidth utilization issues. This combination of DMA with direct input and output commands from the processor or peripherals negates some of the advantages of using DMA.

In contrast, embodiments of the present disclosure include bit operations performed by the DMA 104 during the DMA process of transferring data. Such bit operations can occur in real time. In addition, bit operations may be performed within DMA 104. In addition, bit operations may be asynchronous with respect to the processor 116 or the peripheral device 118, just as other DMA operations are. Bit operations can be performed on data as it is streamed through the DMA 104, during the same clock cycle in which the data is stored in the target data space.
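The masked update that a processor or peripheral must otherwise perform with direct input and output can be sketched as follows (a minimal C sketch of the baseline being improved upon; the function name and 16-bit width are illustrative assumptions, not from the source):

```c
#include <stdint.h>

/* Hypothetical sketch of the CPU-side read-modify-write that plain DMA
 * cannot do at sub-word granularity: the processor reads the whole
 * word, applies the mask itself, and writes the word back, stalling on
 * each memory access. */
uint16_t cpu_masked_write(volatile uint16_t *dest,
                          uint16_t value, uint16_t mask)
{
    uint16_t old = *dest;                               /* read        */
    *dest = (uint16_t)((old & ~mask) | (value & mask)); /* modify+write */
    return *dest;
}
```

Each such update costs the processor two bus accesses and the associated latency, which is exactly what performing the masking inside the DMA channel avoids.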
Bit operations can be performed dynamically, as required. The bit operations performed by the DMA 104 can be used, for example, for first-in first-out memory operations, or for bit banging as part of a communication protocol. In addition, bit operations can be performed through DMA 104 to enhance otherwise typical DMA data transfers. For example, a DMA transfer usually moves entire words or bytes of data from one data space to another. However, the total range of data to be transferred may not divide evenly into whole words or bytes. A DMA transfer may therefore be over-inclusive or under-inclusive. A DMA transfer is over-inclusive when it moves more memory than was requested from one data space to another. The additional memory reflects additional data addresses. These additional data addresses lie within the same word or byte in the source data space as the data to be transferred, but were not themselves requested to be moved to the new location. They may contain meaningless or junk data, or other information that is destructive or useless to the intended recipient. If such data is transferred along with the intended data during the DMA process, the write must be corrected by the processor or peripheral device in a post-processing step. Similarly, a DMA transfer may be under-inclusive when data addresses that do not completely fill a word or byte are excluded from the DMA transfer in order to avoid an over-inclusive transfer. Instead, the processor or the peripheral device itself performs direct input and output transfers to the data space to fill in the missing data addresses that would not completely fill a word or byte.
The management of such data transfers can be slow and resource-intensive. In contrast, the bit operations performed by the DMA 104 can accommodate such data by writing it under a mask to the target data space.

The DMA 104 may include a data buffer 110. The data buffer 110 may be a temporary memory filled with bytes or words from the data space 102 to be written to the data space 106. DMA 104 may include other instances of data buffer 110, each representing a separate DMA channel.

In one embodiment, DMA 104 may include mask circuit 114. The mask circuit 114 may be implemented by analog circuits, digital circuits, or any combination thereof. A more detailed implementation of the mask circuit 114 is shown, for example, in FIG. 2. The mask circuit 114 may include or may be configured to access one or more mask registers. A mask register may define, on a per-bit basis, one or more logical operations to be performed on the data in the data buffer 110 to be transferred to the target data space. Each register may be the size or width of the data buffer 110, that is, the size or width of a word or byte sent to the target at one time as part of the DMA process.

The mask circuit 114 may be configured to perform any suitable number and kind of logic operations. In one embodiment, the mask circuit 114 may be configured to perform a set operation. In such an operation, the bits of the mask define the bits of the target data space 106 to be set by the corresponding values of the data buffer 110. In another embodiment, the circuit 114 may be configured to perform a clear operation. In such an operation, the bits of the mask define the bits of the target data space 106 to be cleared. In another embodiment, the circuit 114 may be configured to perform an invert operation.
In such an operation, the bits of the mask define the bits of the data buffer 110 to be inverted before being written to the target data space 106.

In one embodiment, the mask circuit 114 may be configured to selectively apply a single one of the available bit manipulation operations. The mask circuit 114 may be configured to maintain a hierarchy of the available bit operations, so that if more than one kind of bit operation is requested for the same bit, only the most preferred operation is performed. For example, setting the bit value may be the most preferred operation, then clearing the bit value, and then inverting the bit value.

The processor 116 and the peripheral device 118 may be configured to call the DMA 104 to transfer data from the data space 102 to the data space 106 in a DMA manner. Such a call may be performed, for example, by a function call to DMA 104. The processor 116 or peripheral device 118 may be notified in any suitable manner when the DMA 104 has completed its transfer. For example, DMA 104 may issue an interrupt when the transfer is completed, or when an error condition occurs that makes completion impossible.

The processor 116 and the peripheral device 118 may be configured to invoke the bit manipulation operations of the DMA 104 in any suitable manner. In one embodiment, the processor 116 and the peripheral device 118 can invoke a normal DMA transfer through the DMA 104 with one command, and can invoke a DMA transfer with bit operations with another command. In another embodiment, the processor 116 and the peripheral device 118 may use the same command to invoke a normal DMA transfer and a DMA transfer using bit operations.
In such implementations, bit operations during DMA transfers can be enabled by setting bits in the masks or registers accessed by circuit 114.

Embodiments of the present disclosure can eliminate CPU involvement: compared to normal DMA operations performed by DMA 104, the DMA bit operations performed by circuit 114 incur no additional bus utilization or delay. The bit operations performed by the circuit 114 may be performed when starting or terminating operations on a DMA trigger.

FIG. 2 is a more detailed illustration of the mask circuit 114 according to an embodiment of the present disclosure. The data to be transferred from one data space to another may be placed in the data buffer 110. As described above, the data buffer 110 may have a specific word or byte size, such as 16 bits. The mask circuit 114 may process the contents of the data buffer 110 and write the result to the target data space in a single clock cycle.

The mask circuit 114 may include, or may be configured to access, registers or other sources of information that define the bit operations to be performed on the data in the data buffer 110. In addition, such registers or other sources of information may define the bits to be manipulated by such operations. A register bit may be referred to as "set" to indicate that the associated bit is operated on, and as "unset" to indicate that it is not. In various implementations, the set or unset state can be represented by logic high ("1") or logic low ("0").

In one embodiment, the mask circuit 114 may include or be configured to access the register 226 or another source of information defining the invert operation. The register 226 may be designated "DMAINVx". If any bit of the register 226 is set, the mask circuit 114 may be configured to invert the value of the data buffer 110 specified by the set bit.
The other values of the data buffer 110 corresponding to unset bits are not inverted. The result may be passed to other parts of the mask circuit 114. The inversion can be implemented by one or more XOR gates 220, taking input from the register bits and the data received from the data buffer 110. The XOR gate 220 can be implemented as a bit-wise XOR gate or as multiple XOR gates, sufficient in number to process the full width of the data buffer 110 simultaneously.

In one embodiment, if a preferred bit operation, such as a clear or set, is to be performed on a given bit position, the inversion of that bit position may not be performed. If the register 226 indicates that the bit position is to be inverted, but the bit position will be cleared or set by other parts of the mask circuit 114, the invert operation may be overridden. In another embodiment, the inversion can still be performed on such a bit position, but the inverted content can then be further operated on by other parts of the mask circuit 114.

In one embodiment, mask circuit 114 may include or be configured to access register 228 or another source of information that defines the clear operation. The register 228 may be designated "DMACLRx". If any bit of the register 228 is set, the mask circuit 114 may be configured to clear the value of the data buffer 110 received from the XOR gate 220 specified by the set bit. Other values of the data buffer 110 received from the XOR gate 220 corresponding to unset bits are not cleared. The result may be passed to other parts of the mask circuit 114. The clearing can be implemented by one or more AND gates 222, with an inverting input for the bits from the register 228 and a direct input for the data received from the XOR gate 220.
The AND gate 222 may be implemented as a bit-wise AND gate or as multiple AND gates, sufficient in number to process the full width of the data buffer 110 simultaneously.

In one embodiment, if a preferred bit operation, such as a set, is to be performed on a given bit position, the clearing of that bit position may not be performed. If the register 228 indicates that the given bit position is to be cleared, but the given bit position will be set by other parts of the mask circuit 114, the clear operation may be overridden. In another embodiment, such a given bit position can still be cleared, but the cleared content can then be further manipulated by other parts of the mask circuit 114.

In one embodiment, the mask circuit 114 may include or be configured to access the register 230 or another source of information defining the set operation. The register 230 may be designated "DMASETx". If any bit of the register 230 is set, the mask circuit 114 may be configured to set the value of the data buffer 110 received from the AND gate 222 specified by the set bit. Other values of the data buffer 110 received from the AND gate 222 corresponding to unset bits are not set. The result can be passed to the target data space as the output of the mask circuit 114. The setting can be performed through one or more OR gates 224, taking input from the bits of the register 230 and the data received from the AND gate 222. The OR gate 224 may be implemented as a bit-wise OR gate or as multiple OR gates, sufficient in number to process the full width of the data buffer 110 simultaneously.

The DMA 104 may include multiple instances (not shown) of some or all of the data buffer 110 and the mask circuit 114, each representing a separate DMA channel.
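The XOR/AND/OR chain just described can be modeled behaviorally in C (a sketch under the assumption of a 16-bit buffer; the struct and function names are illustrative, not from the source). Note how the gate ordering naturally yields the precedence from the text: set overrides clear, which overrides invert:

```c
#include <stdint.h>

/* Behavioral model of mask circuit 114 (FIG. 2) for a 16-bit data
 * buffer. The gate order gives the precedence described in the text:
 * DMASETx overrides DMACLRx, which overrides DMAINVx. */
typedef struct {
    uint16_t dmainv; /* DMAINVx: bits to invert     */
    uint16_t dmaclr; /* DMACLRx: bits to force to 0 */
    uint16_t dmaset; /* DMASETx: bits to force to 1 */
} mask_regs;

uint16_t mask_circuit(uint16_t data, const mask_regs *r)
{
    uint16_t x = data ^ r->dmainv;  /* XOR gates 220: invert */
    x = (uint16_t)(x & ~r->dmaclr); /* AND gates 222: clear  */
    x = (uint16_t)(x | r->dmaset);  /* OR gates 224: set     */
    return x;
}
```

With all three registers zero, data passes through unchanged, which matches the all-zero pass-through mask used for the plain FIFO-to-SRAM copy in the FIG. 3 application.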
For each DMA channel, there may be a dedicated instance of the mask registers 226, 228, 230.

Applications of circuit 114 may include, for example, FIFO data transmission, or status bit operations used with FIFO data transmission. The circuit 114 may be used to perform bit-banging or port register operations on a DMA trigger in a communication protocol. A specific bit-banging or FIFO operation can be defined by the values set in the registers of the circuit 114. The FIFO or bit-banging operation may send an initialization or termination signal to, for example, a peripheral device or a client device, so DMA triggers can be used to send such signals. Starting or terminating slave devices on a DMA trigger can be used to control communication efficiently.

Memory column modification can provide a bit-enable function with read-modify-write. For example, the mask of circuit 114 may be used to selectively read data from data space 102 and rewrite it to the same address in data space 102. Data can be read one byte at a time and fed into the buffer 110. The particular rewrite desired determines the mask of the circuit 114 to be used. For example, if the memory column modification is intended to set the first and third bits of each octet (counting from the least significant bit (LSB)), the register 230 may define the mask "00000101". The registers 226, 228 may each be "00000000", or the register 230 may override their contents. Each row of the memory column can then be fed from the buffer 110 into the circuit 114, and the circuit 114 can set the first and third bits while maintaining the integrity of the other bits of the eight when rewriting to the same row from which the memory was read. Circuit 114 may similarly process the next row of eight bits. As another example, if the memory column modification is intended to clear all bits in the lower four bit positions of the memory column, the register 228 may define the mask "00001111".
The registers 226, 230 may each be "00000000", or the contents of the register 226 may be overridden by the register 228. Each row of the memory column can then be fed from the buffer 110 into the circuit 114, and the circuit 114 can clear the lower four bits when rewriting back to the same row from which the memory was read, thereby maintaining the contents of the other four bits. Circuit 114 may similarly process the next row of eight bits.

The DMA 104 or mask circuit 114 may be implemented as a stand-alone or portable logic block. For example, these can be implemented as internal peripherals in a microcontroller. A bus master circuit or controller may be included in, or interface with, the DMA 104 or mask circuit 114 to allow access to internal memory-mapped registers, thereby reducing the need to involve the peripheral's own processor.

FIG. 3 is an illustration of an exemplary application of DMA for a FIFO according to an embodiment of the present disclosure. DMA 304 may be a more specific example of DMA 104. The DMA 304 may include two channels 306A, 306B. Each channel 306 may include a corresponding data buffer 308 and a bit operation mask circuit 310. The data buffer 308 may be a more specific example of the buffer 110, and the circuit 310 may be a more specific example of the circuit 114.

The DMA 304 may be configured to process data from the peripheral device 301 for FIFO operations. FIFO operations may include, for example, serial output, serial peripheral interface (SPI) operations, UART operations, or other applications. Serial data may arrive with each bit written into a shift register. Once an entire byte or word is collected (according to the size of the shift register), the entire collected byte or word is shifted into the FIFO. The data can then be processed in the FIFO as a whole for, for example, serial operations. At this point, the shift register may be empty again.
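The memory column examples above (DMASET = "00000101" to set bits 0 and 2, DMACLR = "00001111" to clear the low nibble) can be sketched as a byte-wide read-modify-write loop (function and parameter names are illustrative assumptions):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the read-modify-write column modification described in the
 * text: each byte is read, passed through invert/clear/set masks in
 * that precedence order, and rewritten in place. */
void rmw_column(uint8_t *col, size_t n,
                uint8_t inv, uint8_t clr, uint8_t set)
{
    for (size_t i = 0; i < n; i++) {
        uint8_t b = (uint8_t)(col[i] ^ inv); /* invert selected bits */
        b = (uint8_t)(b & ~clr);             /* clear selected bits  */
        col[i] = (uint8_t)(b | set);         /* set selected bits    */
    }
}
```

For instance, calling this with set = 0x05 (binary 00000101) sets the first and third bits of every byte while preserving the rest, and clr = 0x0F (binary 00001111) clears each byte's low nibble.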
In other implementations of a FIFO, obtaining data from the shift register and placing it in the FIFO may require the generation of a CPU interrupt. In one embodiment, DMA 304 may be configured to avoid such CPU participation. Peripheral FIFO operations typically involve two bus transactions. In the first stage, the control or status register 312 is accessed. In the second stage, data is moved. Bits can arrive at a shift register (not shown) in the peripheral device 301. When such a shift register is full, its contents can be placed in the FIFO 314 all at once. From there, such data can be sent via DMA to a target, such as SRAM space 302. In other embodiments, when the shift register is full, an interrupt may be raised to the CPU. In contrast, in one embodiment, when the shift register is full, a bit may be set in the control/status register 312. Such a bit can be used to perform DMA transfers of the shift register contents independently and autonomously, without CPU assistance. The first channel 306A of the DMA 304 may be configured to monitor the setting of the bit in the control/status register 312. The second channel 306B of the DMA 304 may be configured to transfer data from the FIFO 314 to the SRAM 302 when the bit is detected by the first channel 306A. The channel 306A may be configured to issue a trigger to the channel 306B when detection of the control/status register 312 bit is complete. The DMA 304 may thereby be configured to eliminate the CPU interrupts that would otherwise be generated when the shift register fills. The peripheral device 301 may set any suitable bit of the control/status register 312. For example, bit 8 may be a designated bit that is read and then written back to clear it. The entire control/status register 312 can be loaded into the buffer 308A. The value of bit 8 can then be cleared using the clear mask "1 0000 0000" in circuit 310A.
The cleared value at bit 8, together with the untouched remaining bits of the value read from the control/status register 312, can then be written back to the control/status register 312. Once its FIFO 314 is ready for processing, the peripheral 301 can interrupt or trigger the DMA 304 and set or clear its control/status register 312 bit accordingly. The DMA 304 may be configured to respond by programming the control/status register 312 to make the peripheral device 301 ready for the next stage. During the second, data movement phase, data can be fetched from the FIFO 314 to another memory location, such as SRAM 302. This can be performed by channel 306B of DMA 304. The circuit 310B may not perform any bit operation. Its masks can be all "0" values, indicating that data read from the source FIFO 314 will be written to the SRAM 302 without bit manipulation. The write can be triggered based on the trigger from channel 306A. The FIFO processing can be performed independently of the processing core of the peripheral device 301 or of the processor accessing the peripheral device 301. The control/status register 312 may be accessed through the data buffer 308A as part of the shared read data bus. Data can be returned from the mask circuit 310A to the control/status register 312 over the shared write data bus. When channel 306A completes, it can issue an interrupt input for channel 306B. Therefore, channel 306A can act as a trigger to the other channel, 306B. In addition, channel 306A operates on the control/status register bits of peripheral 301. Triggering channel 306B may cause data to move from FIFO 314 to SRAM 302 through buffer 308B. The channel 306A may perform a read-modify-write operation on the bits received from the control/status register 312 by way of the circuit 310A, and then rewrite them to the control/status register 312. Rewriting the bits to the control/status register 312 may initiate the FIFO data transfer performed by the channel 306B.
Channel 306A can program its source address pointer and target address pointer to the same location (control/status register 312), thereby reading the contents of the control/status register 312, operating on the relevant bits based on the preprogrammed mask registers in circuit 310A, and writing the modified content back to the same location in the control/status register 312, completing the read-modify-write operation independently of any processor. FIG. 4 is a diagram 400 of core-independent bit banging according to an embodiment of the present disclosure. Diagram 400 may show a specific implementation of system 100. For example, DMA 404 may implement DMA 104 using two instances of channels 406, each of which includes a data buffer 408 and a bit operation mask circuit 410. Bit banging can refer to emitting a series of bits or bit patterns onto a bus, network, device, or other suitable target. Bit-banged output can be performed by DMA writes. The specific pattern to be emitted may depend entirely on the particular protocol, handshake, exchange, shared secret, or other agreed communication method. The pattern of bits to be emitted may change during different stages. Without such support, bit banging may require a processor to continuously generate the pattern of bits to be emitted. However, given the bit manipulation capabilities of DMA 404, bit banging can be performed without processor intervention. Although any suitable pattern can be emitted, in the example of FIG. 4, the values of the two least significant bits of a sixteen-bit word are changed in a specific sequence. The specific sequence may be "00", "10", "01", "11". Other sequences may involve fewer or more bits, or duplication or omission of sequence values. The output of the sequence can be performed on the port register 402 (RA) by DMA writes. The port register 402 can be an output to a network or other device.
Specifically, the sequence output can be performed on the two least significant bits of the port register 402, RA0 and RA1. To perform the bit banging, as a first stage, DMA 404 may clear RA0 and RA1 of port register 402. The second channel 406B may have its initial default mask values in the bit operation mask circuit 410B: "0000 0000 0000 0000" for the invert mask, "0000 0000 0000 0011" for the clear mask, and "0000 0000 0000 0000" for the set mask. At (1), the value of the port register 402 can be read into the buffer 408B. At (2), the masks can be applied to the value in the buffer 408B and the result written to the port register 402 via DMA. By applying the clear mask of "0000 0000 0000 0011", the lowest two bits RA0 and RA1 of the port register 402 can be cleared while maintaining the other fourteen bits of the port register 402. The DMA channels 406A and 406B may each include a trigger that is activated when a DMA operation is completed. The DMA channel 406A may set its trigger to notify the DMA channel 406B when the operation of the DMA channel 406A is completed, and vice versa. As a second stage, the values of RA0 and RA1 can be manipulated. In one embodiment, values in SRAM 401 can be used to sequentially set one or more masks for channel 406B. The masks set for circuit 410B in channel 406B may cause the sequence to be banged out on the port register 402. A given mask value can be loaded from a row of the SRAM 401. The mask values can simply exist as data. Upon receiving the trigger from channel 406B, operation may begin in channel 406A. At (3), a mask value may be loaded into the buffer 408A of the first channel 406A. The masks of channel 406A may be set to all "0" values, meaning that circuit 410A performs no bit operations.
At (4), the channel 406A can write its value via DMA to the appropriate mask register of the second channel 406B. For example, the value can be written to the set mask of the circuit 410B. Therefore, the memory contents of each given row in the SRAM 401 can control the set mask of the circuit 410B. When channel 406A finishes writing the value to the set mask of circuit 410B, channel 406A can issue a trigger to channel 406B, which can then begin operation. At (5), the contents of the port register 402 can be read into the buffer 408B. At (6), the contents of the port register 402 can be rewritten back to their position at the port register 402, but with the bit operations performed by the circuit 410B according to the mask value provided from the SRAM 401. The mask value read from the SRAM 401 is applied as the set mask in the circuit 410B. The clear mask value that exists by default may be retained in the circuit 410B. When a given bit position is both cleared (with "1" in the clear mask) and set (with "1" in the set mask), the set mask may have priority and the bit may be set. The SRAM 401 can be continuously re-read (restarting from the beginning if necessary), and the port register 402 continuously rewritten, for as long as specified. For example, the first row of SRAM 401 may be "0000 0000 0000 0000", meaning that the set mask of circuit 410B will be "0000 0000 0000 0000". The clear mask may be "0000 0000 0000 0011". The value of the port register 402 may be loaded into the buffer 408B. The upper fourteen bits are not changed. The bit RA1 can be cleared by the clear mask. The bit RA0 can be cleared by the clear mask. The set mask has no effect because it is all "0" values. The resulting value of the lowest two bits of the port register 402 is "00". The second row of the SRAM 401 may be "0000 0000 0000 0001", meaning that the set mask of the circuit 410B is "0000 0000 0000 0001".
The clear mask may be "0000 0000 0000 0011". The value of the port register 402 may be loaded into the buffer 408B. The upper fourteen bits are not changed. The bit RA1 can be cleared by the clear mask; the set mask has no effect on bit RA1, because its corresponding value in the set mask is "0". Bit RA0 can be set by the set mask; the corresponding "1" value in the clear mask is overridden by this "1" value in the set mask. The resulting value of the lowest two bits of the port register 402 is "01". The third row of the SRAM 401 may be "0000 0000 0000 0010", meaning that the set mask of the circuit 410B will be "0000 0000 0000 0010". The clear mask may be "0000 0000 0000 0011". The value of the port register 402 can be loaded into the buffer 408B. The upper fourteen bits are not changed. Bit RA1 can be set by the set mask. Bit RA0 can be cleared by the clear mask. The resulting value of the lowest two bits of the port register 402 is "10". The fourth row of the SRAM 401 may be "0000 0000 0000 0011", meaning that the set mask of the circuit 410B will be "0000 0000 0000 0011". The clear mask may be "0000 0000 0000 0011". The value of the port register 402 can be loaded into the buffer 408B. The upper fourteen bits are not changed. Bit RA1 can be set by the set mask, and bit RA0 can be set by the set mask; the "11" in the corresponding bits of the clear mask is overridden. The resulting value of the lowest two bits of the port register 402 is "11". Therefore, the memory map of the SRAM 401 is used only to program channel 406B and its set mask register, rather than directly providing the bit values for the bit banging. DMA 404 utilizes the implicit priority among the three bit operation masks.
By defaulting the required bits of the clear mask register to '1', any bit whose corresponding set mask bit is '0' will be cleared to '0', completing the desired output. The present disclosure has been described in terms of one or more embodiments, and it should be understood that, in addition to those explicitly stated, many equivalents, substitutes, variations, and modifications are possible and within the scope of the present disclosure. Although the present disclosure is susceptible to various modifications and alternative forms, specific exemplary embodiments thereof have been shown in the drawings and described in detail herein. However, it should be understood that the description of specific exemplary embodiments herein is not intended to limit the disclosure to the particular forms disclosed.
Embodiments are generally directed to cooling of electronics using folded foil microchannels. An embodiment of an apparatus includes a semiconductor die; a substrate, the semiconductor die being coupled with the substrate; and a cooling apparatus for the semiconductor die, wherein the cooling apparatus includes a folded foil preform, the folded foil forming a plurality of microchannels, and a fluid coolant system to direct a fluid coolant through the microchannels of the folded foil.
CLAIMSWhat is claimed is:1. An apparatus comprising:a semiconductor die;a substrate, the semiconductor die being coupled with the substrate; anda cooling apparatus for the semiconductor die, wherein the cooling apparatus includes: folded foil, the folded foil forming a plurality of microchannels, anda fluid coolant system to direct a fluid coolant through the microchannels of the folded foil.2. The apparatus of claim 1, wherein the cooling apparatus includes zero or more heat spreaders and heat planes.3. The apparatus of claim 1, wherein the folded foil is coupled with a backside of the semiconductor die.4. The apparatus of claim 3, wherein the folded foil is coupled with the backside of the semiconductor die using a solder preform.5. The apparatus of claim 1, wherein the folded foil is incorporated in an integrated coldplate, the integrated coldplate being coupled with the semiconductor die.6. The apparatus of claim 5, wherein the integrated coldplate includes a baseplate, the folded foil, and a lid, the lid including a cavity for insertion of the folded foil.7. The apparatus of claim 1, wherein the folded foil is incorporated in an enabled coldplate, the enabled coldplate being coupled with the semiconductor die and with an integrated heat spreader.8. The apparatus of claim 1, wherein the folded foil is formed from folding of a metal foil to generate a pattern.9. The apparatus of claim 8, wherein the microchannels are formed in the folds of the metal foil.10. The apparatus of claim 1, wherein the semiconductor die is a processor.11. A method comprising:generating folded foil by folding a foil according to a pattern, the folding of the foil generating a plurality of microchannels;installing the folded foil in a cooling structure for a semiconductor die; andinstalling a flow control system for fluid cooling on the cooling structure, the flow control system to direct a fluid coolant through the microchannels of the folded foil.12. 
The method of claim 11, further comprising coupling the folded foil with a backside of the semiconductor die.13. The method of claim 12, wherein coupling the folded foil with a backside of the semiconductor die includes using a solder preform.14. The method of claim 11, further comprising incorporating the folded foil in an integrated coldplate.15. The method of claim 14, further comprising coupling the integrated coldplate with the semiconductor die.16. The method of claim 15, wherein the integrated coldplate includes a baseplate, the folded foil, and a lid, the lid including a cavity for insertion of the folded foil.17. The method of claim 11, further comprising incorporating the folded foil in an enabled coldplate.18. The method of claim 17, further comprising coupling the enabled coldplate with the semiconductor die and with an integrated heat spreader.19. A computing system comprising:one or more processors for the processing of data;a dynamic random access memory for the storage of data for the one or more processors; anda cooling apparatus for at least a first processor of the one or more processors, wherein the cooling apparatus includes:folded foil, the folded foil forming a plurality of microchannels, anda fluid coolant system to direct a fluid coolant through the microchannels of the folded foil.20. The computing system of claim 19, wherein folded foil is coupled with a backside of the first processor.21. The computing system of claim 19, wherein folded foil is incorporated in an integrated coldplate, the integrated coldplate being coupled with the first processor.22. The computing system of claim 19, wherein folded foil is incorporated in an enabled coldplate, the enabled coldplate being coupled with the semiconductor die and with an integrated heat spreader.23. The computing system of claim 19, wherein the folded foil is formed from folding of a metal foil to generate a pattern.
COOLING OF ELECTRONICS USING FOLDED FOIL MICROCHANNELS

TECHNICAL FIELD

Embodiments described herein generally relate to the field of electronic devices and, more particularly, to cooling of electronics using folded foil microchannels.

BACKGROUND

Electronic devices such as microprocessors, and in particular high power server products, are demonstrating trends that require improved heat removal from silicon structures: the density factor is on a decreasing trend due to the increasing number of processor cores and the inclusion of new technologies; total thermal design power (TDP) is increasing, demanding improved cross-plane heat removal, which is pushing the capabilities of air cooling; and the emergence of multichip package (MCP) technology in, for example, high power server use with on-package memory generates increasing amounts of heat in an electronic device. Further, coating with certain polymeric layers may present thermal resistance that is too high for traditional air cooling. However, existing liquid cooling technology is generally inadequate to address such heating concerns because of factors including cost, risk to electronic devices, and lack of sufficient cooling capacity.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments described here are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
Figure 1 is an illustration of an apparatus with fluid cooling via folded foil microchannels;
Figures 2A to 2C illustrate cooling apparatuses in a device or system according to an embodiment;
Figure 3 illustrates a conventional process for skiving of channels for liquid cooling;
Figures 4A to 4D illustrate fabrication of a device utilizing folded foil according to an embodiment;
Figures 5A and 5B illustrate the formation of folded foil material according to an embodiment;
Figures 6A to 6D illustrate fabrication of a fluid cooling solution on a die according to an
embodiment;
Figures 7A and 7B further illustrate elements of a fluid cooling solution for a die according to an embodiment;
Figures 8A to 8D illustrate fabrication of a fluid cooling solution in an integrated coldplate according to an embodiment;
Figures 9A to 9D illustrate coolant flows through folded foil microchannels in an integrated coldplate according to an embodiment;
Figure 10 is a flow chart to illustrate fabrication of a package including folded foil microchannels according to an embodiment;
Figure 11 is an illustration of components of a computing system including a component utilizing fluid cooling through use of folded foil material; and
Figure 12 is an illustration of integrated coldplate and enabled coldplate solutions according to an embodiment.

DETAILED DESCRIPTION

Embodiments described herein are generally directed to cooling of electronics using folded foil microchannels. As used herein, the following terms apply: "Computing device" or "computing system" refers to a computer, including, but not limited to, a server or other electronic device that includes processing ability. "Electronic device" refers to any apparatus, device, or system having an electronic system to provide one or more functions, including, but not limited to, mobile devices and wearable devices. In some embodiments, fluid cooling is provided for electronic devices using folded foil. In some embodiments, microchannels (MCs) for fluid flow are formed by folding a metal foil, allowing for economical and efficient cooling using fluid coolant flow through the microchannels. As used here, a fluid refers to a substance without fixed shape that is capable of flowing, including a liquid or gas. Figure 1 is an illustration of an apparatus with fluid cooling via folded foil microchannels.
In some embodiments, an apparatus includes a semiconductor die 110 coupled with a substrate 115, the apparatus further including a cooling solution using fluid flow, including a flow control system 150 to direct coolant through, for example, a flow manifold coupled with a coldplate body. In some embodiments, the coolant is directed (or pumped) through microchannels formed using folded foil to draw heat away from the die 110. In one example, the flow control system may include a pump unit 152 to pump fluid through hoses 154 into a manifold unit 156. However, embodiments are not limited to a particular flow control system for fluid cooling, but rather may utilize any known technology for pumping or otherwise directing a fluid coolant through microchannels to cool a die. In some embodiments, a folded foil material for a cooling solution may be generated as illustrated and described in Figures 5A and 5B. In some embodiments, a cooling apparatus utilizing folded foil may be fabricated as illustrated and described in Figures 4A to 4D and Figures 6A to 9D. Figures 2A to 2C illustrate cooling apparatuses utilized in a device or system according to an embodiment. In some embodiments, the apparatuses may be utilized in an embodiment of a fluid cooling system with folded foil. Figure 2A: Certain products utilize integrated heat spreaders (IHS) as lids over the semiconductor dies (such as silicon dies). Figure 2A illustrates a "2-TIM" structure for the cooling of a die 210 coupled to a substrate or other material 215 in a particular package 200. As illustrated in Figure 2A, a heat plane 220 with a first thermal interface material (TIM1) is coupled with the die 210, over which an integrated heat spreader 225 with a second thermal interface material (TIM2) is provided.
Thermal solutions 230, such as passive heat sinks, heat sink/fan combinations, or fluid cooling solutions, may be implemented on the integrated heat spreader 225. Figure 2B: Figure 2B illustrates a "1-TIM" structure for the cooling of the die 210 on the substrate 215. The illustrated cooling structure includes a heat plane 220 with a first thermal interface material (TIM1), in which the cooling solution 230 is implemented on the heat plane 220. In contrast with the 2-TIM structure, the 1-TIM structure does not include an integrated heat spreader. Figure 2C: Figure 2C illustrates a "0-TIM" structure for the cooling of the die 210 on the substrate 215, in which the cooling solution 230 is implemented directly on the die (such as in an air cooled structure). The 2-TIM configuration, as illustrated in Figure 2A, provides additional cooling capacity through use of the integrated heat spreader. However, limitations of this configuration include a large stackup height and multiple thermal interfaces where thermal interface materials must be applied. With the thermal performance of common thermal interface materials being highly optimized to essentially a physical limit, a fundamental revision of the thermal stackup (such as elimination of a TIM layer by design) is required to improve the thermal management of high power microprocessors. In some embodiments, an alternative cooling solution utilizes fluid cooling. Conventional processes for the generation of materials for fluid cooling are generally expensive and difficult. For example, to generate channels in a metal, a conventional process involves the cutting (skiving) of channels. Figure 3 illustrates a conventional process for skiving of channels for liquid cooling. In this process, a material 300 such as copper is machined to generate microchannels 330 for fluid cooling. The microchannels 330 are commonly generated by use of a skiving tool 310 to cut the necessary channels.
However, skiving is an expensive and difficult process, which increases the overall cost of a cooling solution. In some embodiments, a fluid cooling solution utilizes folded foil microchannels. In some embodiments, the formation of folded foil microchannels provides an efficient and effective alternative to silicon microchannels and skived microchannels. The generation of folded foil microchannels is illustrated in Figures 5A and 5B. In some embodiments, an apparatus, system, and method applying folded foil provide a reduced cost process for creating microchannels in comparison with conventional micromachining and skiving. In some embodiments, the folded foil may be applied at any interface, and provides a lower risk to silicon health while allowing for higher throughput integration of microchannels on a silicon die. In some embodiments, microchannels are created by use of a folded foil copper/metal preform. In some embodiments, a folded foil microchannel preform may be implemented at any interface of a cooling solution, such as: bonded directly on a die backside with no modification of the silicon die (allowing for removal of a thermal interface or additional copper in comparison with skived microchannels); implemented within an integrated coldplate (iCP), wherein the folded foil microchannel cooling solution is applied as a 1-TIM solution; or within an enabled coldplate (eCP), wherein the folded foil microchannel cooling solution is used as a 2-TIM solution, with no machining required (thereby simplifying the fabrication of such a cooling structure). Figures 4A to 4D illustrate fabrication of a device utilizing folded foil according to an embodiment. Figure 4A: In some embodiments, a folded foil preform 405 is generated. The generation of the folded foil preform may be as illustrated in Figures 5A and 5B. Figure 4B: In some embodiments, the folded foil preform is integrated into a package 400 to create microchannels for fluid coolant flow.
As illustrated in Figure 5B, the fluid coolant flow is provided from the coolant inlet 415 to the coolant outlet 420. A flow control system (not illustrated in Figures 4A to 4D) may, for example, be a flow control system 150 as illustrated in Figure 1. Figure 4C: In some embodiments, the microchannels of the folded foil preform may be implemented inside an integrated coldplate (iCP) 415 to provide a 1-TIM solution or other similar cooling solution for cooling of the die 425 within the package 400. Figure 4D: In some embodiments, the folded foil microchannels may be fabricated under a flow manifold 430 to provide a 0-TIM solution or other similar cooling solution for cooling of the die. Figures 5A and 5B illustrate the formation of folded foil material according to an embodiment. Figure 5A: In some embodiments, a foil, such as copper, is folded in one of a plurality of ways. In this illustration, the folding of the foil may include, but is not limited to, a sawtooth pattern; a square pattern 520; or a serpentine pattern 530. Each particular folding pattern may require different processing to achieve a desired folded foil geometry. Figure 5B: In a particular example, a folded foil material 540 may include the illustrated serpentine folded foil, wherein the folding has produced microchannels 545. The folded foil material may vary in, for example, foil thickness, pitch between folds, and height of the folds. In some embodiments, the efficiency of the folded foil microchannels may be modulated via design of the folded foil. Combinations of different design options result in different embodiments of cooling solutions. In some embodiments, in contrast with the common conventional process of creating microchannels by skiving (such as illustrated in Figure 3), a process includes the use of the folded foil to create microchannels 545 for fluid cooling of an electronic device.
In operation, the resulting material can provide effective heat transfer coefficients using low or medium flow rates. In device fabrication, the cost of implementing fluid cooling with folded foil may be significantly lower than with skived microchannels. Skived microchannels fundamentally require a machining process in which each unit is skived individually. In some embodiments, folded foil is generated as a large sheet, which may then be clipped or singulated to a desired size and integrated into a 0-TIM, 1-TIM, or 2-TIM design or other similar cooling design using high volume manufacturing techniques. The amount of folded foil material in a cooling solution may be defined by a length in the fold direction (LFD) 560 and a length in the transverse direction (LTD) 570. In an embodiment of a package, fluid coolant is pumped through the microchannels of the folded foil material in the transverse direction. In some embodiments, a cooling solution utilizing folded foil may be implemented as, for example, an on-die backside installation (0-TIM solution or similar cooling solution); as folded foil MCs in an integrated coldplate (1-TIM solution or similar cooling solution); or as folded foil MCs in an enabled coldplate (2-TIM solution or similar cooling solution), as follows: (1) On die backside: Processes for assembly on the die backside may be implemented as provided in Figures 6A to 6D and Figures 7A and 7B. In a particular example, an assembly may be as illustrated for a full thickness die with BSM (backside metallization), but embodiments are not limited to this example. (2) Folded foil MCs in an integrated coldplate: In some embodiments, an integrated coldplate consists of three key components: a lid (or manifold) with a cavity for a folded foil preform; the folded foil material; and a baseplate that seals the folded foil material into the iCP. In some embodiments, processes for assembly of the iCP may be implemented as provided in Figures 8A to 8D.
In general, an integrated coldplate is a cooling solution that may replace a conventional integrated heat spreader. (3) Folded foil MCs in an enabled coldplate (2-TIM solution): In some embodiments, a process for integration of folded foil MCs into an eCP is similar to the process illustrated for an iCP in Figures 8A to 8D. In some embodiments, a coldplate including a block with a cavity for the folded foil preform, the folded foil material, and a baseplate is assembled. In some embodiments, the coldplate is utilized as a 2-TIM cooling solution. In general, an enabled coldplate is a cooling solution that may replace a cooling solution that sits on top of a conventional integrated heat spreader. Figures 6A to 6D illustrate fabrication of a fluid cooling solution on a die according to an embodiment. Figure 6A: In some embodiments, a die 610 is coupled with a substrate 600. Figure 6B: A fluid seal 620 is applied around the die 610. The fluid seal 620 acts to prevent leakage of the coolant fluid outside of the intended flow region. Figure 6C: A folded foil preform 630 may be bonded to the surface for, for example, high heat flux via high temperature solder, such as a folded foil preform integrated on a thin solder preform on top of the BSM (backside metallization) of the die. However, embodiments are not limited to any particular method of bonding the folded foil preform. In some embodiments, the folded foil preform 630 includes folded foil material as provided in Figures 5A and 5B. Figure 6D: A flow manifold 640 is assembled on top of the integrated folded foil preform, where the manifold includes a cavity into which the folded foil preform fits snugly.
In some embodiments, the manifold cavity is longer along the LTD direction to allow ease of coolant entry and exit and a uniform flow of coolant through the folded foil microchannels. Figures 7A and 7B further illustrate elements of a fluid cooling solution for a die according to an embodiment. Figure 7A: Folded foil material 730 is integrated on top of a bare die 710, with a close-up view of the folded foil 730 on the die 710 being provided. Figure 7B: In some embodiments, a manifold 740 is installed on the folded foil preform, the manifold 740 including a cavity for the folded foil preform. Figure 7B further provides a cutaway view of the manifold 740 installed on the folded foil, with a close-up of the folded foil below the manifold also being provided. Figures 8A to 8D illustrate fabrication of a fluid cooling solution in an integrated coldplate according to an embodiment. Figure 8A: In some embodiments, a thin solder preform is placed on top of a copper baseplate (BP) 810. However, embodiments are not limited to this particular bonding process. Figure 8B: A folded foil preform 830 is placed on top of the solder preform. Figure 8C: A flow manifold 840 is placed on top of the folded foil, the folded foil and baseplate combination being inserted in a cavity in a lid of the flow manifold 840. In some embodiments, a thin solder preform is placed on top of the folded foil and reflowed to couple the components, ensuring a strong mechanical join between the folded foil and the flow manifold (Figure 8C). Figure 8D: In some embodiments, the resulting completed folded foil iCP 850 is then ready for integration onto a package. In some embodiments, because the iCP assembly is completed ahead of integration onto the package, a high temperature solder may be recommended so that no additional reflow occurs within the iCP during iCP attachment onto the package. Figures 9A to 9D illustrate coolant flows through folded foil microchannels in an integrated coldplate according to an embodiment.
Figures 9A to 9D illustrate cross-sections of an iCP assembled on a package and the direction of coolant flow. Figure 9A: In some embodiments, a folded foil preform 905 is produced, such as illustrated in Figures 5A and 5B. Figure 9B: The folded foil preform is incorporated into an integrated coldplate 950, the structure including a coolant inlet 915 and a coolant outlet 920 for the flow of coolant through the microchannels of the folded foil. Figure 9C: As illustrated in the cutaway view provided in Figure 9C, the folded foil microchannels allow for coolant flow over the surface of the die 925 to provide an effective solution for removal of heat from the die 925. Figure 9D: As illustrated in Figure 9D, the coolant flow 930 is into the coolant inlet 915, through each of the parallel microchannels, and out of the coolant outlet 920.

Figure 10 is a flow chart to illustrate fabrication of a package including folded foil microchannels according to an embodiment. In some embodiments, a process for fabrication of a package 1000 includes, but is not limited to, the following:

1002: Fabricating folded foil from a copper foil or other heat-conductive foil, the resulting structure including multiple microchannels created by the folding of the material.

1004: Installing the folded foil into a cooling structure, wherein the installation may be in the form of one of the following:

1006: A 0-TIM solution or similar cooling solution installed on a die backside;

1008: A 1-TIM solution or similar cooling solution installed in an integrated coldplate; or

1010: A 2-TIM solution or similar cooling solution installed in an enabled coldplate.

1012: Installing a coolant control system onto the cooling solution to provide for the pumping of fluid coolant through the folded foil microchannels in the operation of the resulting package.

Figure 11 is an illustration of components of a computing system including a component utilizing fluid cooling through use of folded foil material.
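A first-order, textbook estimate suggests why coolant flow through small folded-foil channels removes heat effectively: for laminar flow the convective coefficient scales as h = Nu·k/D_h, so a small hydraulic diameter gives a large h, while many parallel channels give a large wetted area. The dimensions and Nusselt number below are assumptions for illustration, not values from this disclosure.

```python
# First-order estimate of folded-foil microchannel convective resistance.
# All numeric values are illustrative assumptions.

k_water = 0.6      # W/(m*K), thermal conductivity of water coolant
nu_laminar = 4.0   # assumed Nusselt number, fully developed laminar flow

w = 0.45e-3        # m, channel width
h_fin = 2.0e-3     # m, channel height
length = 20e-3     # m, channel length
n_channels = 40

# Hydraulic diameter of a rectangular channel: D_h = 4*A / P
area = w * h_fin
perimeter = 2 * (w + h_fin)
d_h = 4 * area / perimeter

h_conv = nu_laminar * k_water / d_h        # W/(m^2*K)
a_wetted = n_channels * perimeter * length # total wetted area, m^2
r_conv = 1.0 / (h_conv * a_wetted)         # K/W, convective resistance

print(f"D_h = {d_h*1e3:.2f} mm, h = {h_conv:.0f} W/m^2K, R = {r_conv*1e3:.1f} mK/W")
```

With these assumed numbers the convective resistance comes out on the order of 0.1 K/W, illustrating why microchannel arrays are attractive for high-power die.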
Elements shown as separate elements may be combined, including, for example, an SoC (System on Chip) combining multiple elements on a single chip.

In some embodiments, a computing system 1100, which may be, but is not limited to, a computer server, may include one or more processors 1110 coupled to one or more buses or interconnects, shown in general as bus 1165. The processors 1110 may comprise one or more physical processors and one or more logical processors. In some embodiments, the processors may include one or more general-purpose processors or special-purpose processors. In some embodiments, the processors include a memory controller. In some embodiments, one or more of the processors 1110 include a cooling solution utilizing fluid cooling through folded foil microchannels 1112. In some embodiments, a particular processor 1111 includes a cooling apparatus 1116 to provide cooling for at least one die 1114, wherein the cooling apparatus 1116 includes folded foil material 1118. In some embodiments, the cooling apparatus may vary in different implementations, such as a 2-TIM, 1-TIM, or 0-TIM structure or other cooling structure, such as illustrated in Figures 2A, 2B, and 2C. In some embodiments, the folded foil material 1118 may be generated as illustrated and described in Figures 5A and 5B. In some embodiments, the cooling apparatus 1116 may be fabricated as illustrated and described in Figures 4A to 4D and Figures 6A to 9D.

The bus 1165 is a communication means for transmission of data. The bus 1165 is illustrated as a single bus for simplicity, but may represent multiple different interconnects or buses, and the component connections to such interconnects or buses may vary.
The bus 1165 shown in Figure 11 is an abstraction that represents any one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters, or controllers.

In some embodiments, the computing system 1100 further comprises a random access memory (RAM) or other dynamic storage device or element as a main memory 1120 for storing information and instructions to be executed by the processors 1110. The computing system 1100 also may comprise a non-volatile memory 1125; a storage device such as a solid state drive (SSD) 1130; and a read only memory (ROM) 1135 or other static storage device for storing static information and instructions for the processors 1110.

In some embodiments, the computing system 1100 includes one or more transmitters or receivers 1140 coupled to the bus 1165. In some embodiments, the computing system 1100 may include one or more antennae 1144, such as dipole or monopole antennae, for the transmission and reception of data via wireless communication using a wireless transmitter, receiver, or both, and one or more ports 1142 for the transmission and reception of data via wired communications. Wireless communication includes, but is not limited to, Wi-Fi, Bluetooth™, near field communication, and other wireless communication standards.

In some embodiments, the computing system 1100 includes one or more input devices 1150 for the input of data, including hard and soft buttons, a joystick, a mouse or other pointing device, a keyboard, a voice command system, or a gesture recognition system.

In some embodiments, the computing system 1100 includes an output display 1155, where the display 1155 may include a liquid crystal display (LCD) or any other display technology, for displaying information or content to a user. In some embodiments, the display 1155 may include a touch screen that is also utilized as at least a part of an input device 1150.
Output display 1155 may further include audio output, including one or more speakers, audio output jacks, or other audio output, and other output to the user.

The computing system 1100 may also comprise a power source 1160, which may include a power transformer and related electronics, a battery, a solar cell, a fuel cell, a charged capacitor, near field inductive coupling, or another system or device for providing or generating power in the computing system 1100. The power provided by the power source 1160 may be distributed as required to elements of the computing system 1100.

Figure 12 is an illustration of integrated coldplate and enabled coldplate solutions according to an embodiment. As referred to herein, an integrated coldplate is a cooling solution that may be implemented to replace a conventional IHS (as in a 1-TIM solution), and an enabled coldplate is a cooling solution that may be implemented to replace a cooling solution on top of a conventional IHS (as in a 2-TIM solution).

In a simplified illustration, an integrated coldplate 1200 may include a manifold 1205 including a cavity 1210 to contain a folded foil preform 1215 (shown in an end-on view through the microchannels in this illustration), and a baseplate 1220 that operates to seal the folded foil material into the integrated coldplate. In some embodiments, the baseplate 1220 may then be attached to a die 1225 on a package substrate 1230, wherein the attachment of the baseplate 1220 to the die 1225 may include STIM (solder thermal interface material) or PTIM (polymer thermal interface material).
While not illustrated here, the integrated coldplate 1200 may include a more complex structure, including, for example, extended feet that attach to the package substrate 1230 using, for example, IHS sealant material.

An enabled coldplate 1250 may similarly include a manifold 1205 including a cavity 1210 to contain a folded foil preform 1215, and a baseplate 1220 that operates to seal the folded foil material into the enabled coldplate. In some embodiments, the baseplate 1220 may then be attached to an integrated heat spreader (IHS) 1260, wherein the IHS 1260 is coupled with the die 1225 on the package substrate 1230. In this instance, the enabled coldplate 1250 is attached to a traditional package with IHS 1260, wherein the attachment to the IHS 1260 may utilize common loading mechanisms such as screws.

In the description above, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent, however, to one skilled in the art that embodiments may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form. There may be intermediate structure between illustrated components. The components described or illustrated herein may have additional inputs or outputs that are not illustrated or described.

Various embodiments may include various processes.
These processes may be performed by hardware components or may be embodied in computer program or machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or logic circuits programmed with the instructions to perform the processes. Alternatively, the processes may be performed by a combination of hardware and software.

Portions of various embodiments may be provided as a computer program product, which may include a computer-readable medium having stored thereon computer program instructions, which may be used to program a computer (or other electronic devices) for execution by one or more processors to perform a process according to certain embodiments. The computer-readable medium may include, but is not limited to, magnetic disks, optical disks, compact disk read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or another type of computer-readable medium suitable for storing electronic instructions. Moreover, embodiments may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer.

Many of the methods are described in their most basic form, but processes can be added to or deleted from any of the methods, and information can be added to or subtracted from any of the described messages, without departing from the basic scope of the present embodiments. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the concept but to illustrate it.
The scope of the embodiments is not to be determined by the specific examples provided above but only by the claims below.

If it is said that an element "A" is coupled to or with element "B," element A may be directly coupled to element B or be indirectly coupled through, for example, element C. When the specification or claims state that a component, feature, structure, process, or characteristic A "causes" a component, feature, structure, process, or characteristic B, it means that "A" is at least a partial cause of "B" but that there may also be at least one other component, feature, structure, process, or characteristic that assists in causing "B." If the specification indicates that a component, feature, structure, process, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, process, or characteristic is not required to be included. If the specification or a claim refers to "a" or "an" element, this does not mean there is only one of the described elements.

An embodiment is an implementation or example. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. It should be appreciated that in the foregoing description of exemplary embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various novel aspects.
This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, novel aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims are hereby expressly incorporated into this description, with each claim standing on its own as a separate embodiment.

In some embodiments, an apparatus includes a semiconductor die; a substrate, the semiconductor die being coupled with the substrate; and a cooling apparatus for the semiconductor die, wherein the cooling apparatus includes a folded foil preform, the folded foil forming a plurality of microchannels, and a fluid coolant system to direct a fluid coolant through the microchannels of the folded foil.

In some embodiments, the cooling apparatus includes zero or more heat spreaders and heat planes.

In some embodiments, the folded foil preform is coupled with a backside of the semiconductor die.

In some embodiments, the folded foil preform is coupled with the backside of the semiconductor die using a solder preform.

In some embodiments, the folded foil preform is incorporated in an integrated coldplate, the integrated coldplate being coupled with the semiconductor die.

In some embodiments, the integrated coldplate includes a baseplate, the folded foil preform, and a lid, the lid including a cavity for insertion of the folded foil.

In some embodiments, the folded foil preform is incorporated in an enabled coldplate, the enabled coldplate being coupled with the semiconductor die and with an integrated heat spreader.

In some embodiments, the folded foil preform is formed from folding of a metal foil to generate a pattern.
In some embodiments, the microchannels are formed in the folds of the metal foil.

In some embodiments, the semiconductor die is a processor.

In some embodiments, a method includes generating a folded foil preform by folding a foil according to a pattern, the folding of the foil generating a plurality of microchannels; installing the folded foil preform in a cooling structure for a semiconductor die; and installing a flow control system for fluid cooling on the cooling structure, the flow control system to direct a fluid coolant through the microchannels of the folded foil.

In some embodiments, the method further includes coupling the folded foil preform with a backside of the semiconductor die.

In some embodiments, coupling the folded foil preform with a backside of the semiconductor die includes using a solder preform.

In some embodiments, the method further includes incorporating the folded foil preform into an integrated coldplate.

In some embodiments, the method further includes coupling the integrated coldplate with the semiconductor die.

In some embodiments, the integrated coldplate includes a baseplate, the folded foil preform, and a lid, the lid including a cavity for insertion of the folded foil preform.

In some embodiments, the method further includes incorporating the folded foil preform in an enabled coldplate.

In some embodiments, the method further includes coupling the enabled coldplate with the semiconductor die and with an integrated heat spreader.

In some embodiments, a computing system includes one or more processors for the processing of data; a dynamic random access memory for the storage of data for the one or more processors; and a cooling apparatus for at least a first processor of the one or more processors, wherein the cooling apparatus includes folded foil, the folded foil forming a plurality of microchannels, and a fluid coolant system to direct a fluid coolant through the microchannels of the folded foil.

In some embodiments,
the folded foil is coupled with a backside of the first processor.

In some embodiments, the folded foil is incorporated in an integrated coldplate, the integrated coldplate being coupled with the first processor.

In some embodiments, the folded foil is incorporated in an enabled coldplate, the enabled coldplate being coupled with the semiconductor die and with an integrated heat spreader.

In some embodiments, the folded foil is formed from folding of a metal foil to generate a pattern.

In some embodiments, an apparatus includes a semiconductor die; a substrate, the semiconductor die being coupled with the substrate; and a cooling apparatus for the semiconductor die, wherein the cooling apparatus includes folded foil material, the folded foil forming a plurality of microchannels, and a fluid coolant system to direct a fluid coolant through the microchannels of the folded foil material.

In some embodiments, the folded foil material includes a folded foil preform.

In some embodiments, a method includes fabricating a folded foil preform, the folded foil preform including foil that is folded according to a pattern, the folding of the foil generating a plurality of microchannels; installing the folded foil preform in a cooling structure for a semiconductor die; and installing a flow control system for fluid cooling on the cooling structure, the flow control system to direct a fluid coolant through the microchannels of the folded foil preform.
An integrated circuit includes clock deskew circuitry. The deskew circuitry includes a loop circuit to align an input clock signal with an output clock signal, and also aligns transmitted data with the output clock signal.
1. An integrated circuit comprising: a clock input pad for receiving an input clock signal; a clock output pad for transmitting an output clock signal; and a loop circuit for phase locking the input clock signal and the output clock signal.

2. The integrated circuit of claim 1 wherein said loop circuit includes a phase detector for comparing phases of said input clock signal and said output clock signal.

3. The integrated circuit of claim 2 further comprising a clock generator for generating a plurality of clock signals having different phases based on said input clock signal.

4. The integrated circuit of claim 3 wherein said loop circuit further comprises a first phase interpolator for generating said output clock signal based on said plurality of clock signals generated by said clock generator.

5. The integrated circuit of claim 4 wherein said loop circuit further comprises control logic for affecting operation of said first phase interpolator in response to said phase detector.

6. The integrated circuit of claim 5 further comprising a delay line responsive to said clock generator.

7. The integrated circuit of claim 6 further comprising at least one other phase interpolator coupled to said delay line for generating at least one clock signal to synchronize output data of said integrated circuit.

8. The integrated circuit of claim 7 wherein operation of said at least one other phase interpolator is affected by PI control logic.

9. The integrated circuit of claim 8 further comprising an output multiplexer having a plurality of control inputs for synchronizing output data of said integrated circuit in response to said at least one clock signal.

10. The integrated circuit of claim 9 wherein said loop circuit comprises a pseudo output multiplexer having substantially the same delay characteristics as said output multiplexer.

11. An integrated circuit comprising: a clock input pad for receiving an input clock; a clock output pad for transmitting an output clock; a loop circuit for aligning the output clock with the input clock, wherein the output clock is derived from the input clock; and a data output circuit for synchronizing data output from the integrated circuit using the output clock.

12. The integrated circuit of claim 11 wherein said loop circuit comprises a phase locked loop for generating said output clock.

13. The integrated circuit of claim 12 further comprising a clock routing circuit having a first delay characteristic coupled between said clock input pad and said phase locked loop.

14. The integrated circuit of claim 13 further comprising a clock routing circuit having a second delay characteristic coupled between said phase locked loop and said clock output pad.

15. The integrated circuit of claim 14 wherein said phase locked loop comprises a feedback path having both said first and second delay characteristics.

16. A method comprising: receiving an input clock signal; providing the input clock signal to a clock generator; interpolating between phases of a clock signal provided by the clock generator to generate an output clock signal; and phase locking the input clock signal to the output clock signal by modifying the interpolation.

17. The method of claim 16 further comprising interpolating between phases to generate at least one clock signal to synchronize data output from said integrated circuit.

18. The method of claim 17 wherein said at least one clock signal comprises two clock signals for synchronizing data output from said integrated circuit at a rate four times the output clock signal.

19. The method of claim 18 further comprising multiplexing four data signals using said two clock signals.

20. An electronic system comprising: an antenna; a radio frequency circuit coupled to the antenna; a memory device; and a controller coupled to the radio frequency circuit and the memory device, the controller including a clock input pad for receiving an input clock signal, a clock output pad for transmitting an output clock signal, and a loop circuit for phase locking the input clock signal and the output clock signal.

21. The electronic system of claim 20 wherein said loop circuit includes a phase detector for comparing phases of said input clock signal and said output clock signal.

22. The electronic system of claim 21 wherein said controller further comprises a clock generator for generating a plurality of clock signals having different phases based on said input clock signal.

23. The electronic system of claim 22 wherein said loop circuit further comprises a first phase interpolator for generating said output clock signal based on said plurality of clock signals generated by said clock generator.
Clock Deskew Method, Device, and System

Technical Field

The present invention relates generally to clock circuits and, more particularly, to clock circuits having a deskew function.

Background

Integrated circuits such as processors and memory devices typically use digital data signals and clock signals to communicate with one another. The clock signal and the data signal are typically "timed" or "phase aligned" with each other such that the clock signal can be used to latch the data.

Figure 1 shows a prior art circuit that aligns a transmitted data signal with a received clock signal. The circuit includes a clock buffer 102, a frequency divider 108, a phase comparator 114, a dummy clock buffer 118, delay lines 104 and 110, a shift register 116, an output buffer 106, and a dummy output buffer 112. The output data DQ is synchronized using the clock signal generated by the delay line 104, wherein the delay line 104 and the delay line 110 are controlled in parallel by the shift register 116. A delay locked loop (DLL) circuit is composed of the phase comparator 114, the shift register 116, the delay line 110, the dummy output buffer 112, and the dummy clock buffer 118. The delay of the dummy output buffer 112 matches the delay of the output buffer 106, and the delay of the dummy clock buffer 118 matches the delay of the clock buffer 102.
By using matched delay circuits in the DLL, the phase of the signal on node 117 approximately matches CLK, and the phase of DQ also approximately matches CLK.

Brief Description of the Drawings

Figure 1 shows an existing circuit for aligning a transmitted data signal with a received clock signal;
Figure 2 shows an integrated circuit with a clock deskew function;
Figure 3 shows a timing diagram;
Figure 4 shows an integrated circuit with a clock deskew function;
Figure 5 shows a timing diagram;
Figure 6 shows a flow chart in accordance with various embodiments of the present invention; and
Figures 7 and 8 show views of an electronic system in accordance with various embodiments of the present invention.

Detailed Description

In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that the various embodiments of the invention, although different, are not necessarily mutually exclusive. For example, specific features, structures, or characteristics described in connection with one embodiment may be implemented in other embodiments without departing from the spirit and scope of the invention. The following detailed description is, therefore, not to be taken as limiting the scope of the invention. In the figures, like numerals refer to the same or similar elements.

Figure 2 shows an integrated circuit with a clock deskew function. The integrated circuit 200 receives an input clock signal (RxCK) at a pad 202 and transmits an output clock signal (TxCK) at a pad 252. Integrated circuit 200 also transmits output data (TxDATA) at a pad 256. The integrated circuit 200 includes pads 202, 252, and 256, a receiver 204, a driver 254, a pseudo clock tree 230, a master clock generator 220, a phase interpolator (PI) 228, PI control logic 210, and a phase detector (PD) 232.
The integrated circuit 200 further includes a pseudo output multiplexer 262 and a data output circuit 270. The data output circuit 270 includes a clock tree 234, a slave delay line (DL) 222, phase interpolators 224 and 226, and an output multiplexer 260. In some embodiments, integrated circuit 200 includes a plurality of data output circuits 270. Representative embodiments are described more fully below.

In some embodiments, the transmitted data signal includes more than one data symbol for each cycle of the input clock signal. For example, in some embodiments, for each cycle of the input clock signal, the output data signal TxDATA can include four data symbols. Integrated circuit 200 can be used in high speed systems that use a forwarded multiphase clocking scheme in which one transition of the output clock signal is sent with each set of data. The remainder of the description relates to embodiments that include four data symbols for each transition on the input clock signal, but this is not a limitation of the invention.

In operation, the input clock (RxCK) is received by receiver 204 and provided to the master clock generator 220 and the slave delay line (DL) 222. As shown in Figure 2, the master clock generator 220 provides a control signal to the slave delay line 222. In some embodiments, integrated circuit 200 includes a single master clock generator and a plurality of slave delay lines distributed throughout the integrated circuit. In other embodiments, master clock generator 220 and slave delay line 222 are combined and operated as a single clock generator. In some embodiments, master clock generator 220 is implemented as a delay locked loop (DLL). In other embodiments, master clock generator 220 is implemented as a phase locked loop (PLL).

Delay line 222 produces a plurality of clocks having different phases.
For example, delay line 222 can generate two or more clock signals having a substantially fixed phase difference, such as a difference of 45 degrees between clock phases or a phase difference of 90 degrees between clock phases. Phase interpolators (PI) 224 and 226 receive the plurality of clock signals from delay line 222 and interpolate between their phases to generate local clock signals TxCK-0 and TxCK-90. Phase interpolators 224 and 226 provide interpolation in response to control information received from PI control logic 210. As shown in Figure 2, TxCK-0 and TxCK-90 are offset by 90 degrees in phase, and output multiplexer 260 is controlled to synchronize data transmitted from integrated circuit 200. In some embodiments, multiplexer 260 includes a latch circuit to latch the data; in other embodiments, multiplexer 260 does not include a latch circuit.

Data output circuit 270 can be placed anywhere on the integrated circuit die. Clock tree 234 represents the buffers and routes used to distribute the clock signals to the slave DL 222. In embodiments with multiple data output circuits 270, the clock trees 234 are equalized to have substantially equal delay characteristics. The pseudo clock tree 230 is also equalized to have substantially the same delay characteristics as the clock tree 234. By equalizing the clock tree delays, the clock (MCKIn) provided to the master clock generator is substantially matched to the clock (SCKIn) supplied to the slave DL.

The master clock generator 220 generates a plurality of clocks that match the phases of the plurality of clocks generated by the slave delay line 222. Phase interpolator 228 receives the plurality of clocks from master clock generator 220 and interpolates to generate MCKOut. MCKOut is used to control the pseudo output multiplexer 262, which then provides a clock signal to the driver 254.
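The interpolation performed by phase interpolators such as 224, 226, and 228 can be sketched behaviorally as a code-weighted blend between two delay-line taps. The code width and the numeric phases below are illustrative assumptions; a real interpolator blends the analog clock edges rather than computing phases numerically.

```python
# Behavioral sketch of a phase interpolator: a digital control code
# selects a phase between two delay-line taps. Code width is assumed.

def interpolate_phase(phi_a_deg, phi_b_deg, code, code_max=16):
    """Return a phase between phi_a and phi_b selected by a digital code."""
    w = code / code_max                      # interpolation weight, 0..1
    return phi_a_deg + w * (phi_b_deg - phi_a_deg)

# Interpolating between the 0- and 45-degree taps of the delay line:
print(interpolate_phase(0, 45, 0))    # 0.0  -> aligned with the first tap
print(interpolate_phase(0, 45, 8))    # 22.5 -> halfway between the taps
print(interpolate_phase(0, 45, 16))   # 45.0 -> aligned with the second tap
```

Stepping the code by one LSB moves the output phase by a fixed fraction of the tap spacing, which is what lets the PI control logic nudge TxCK-0 and TxCK-90 in fine increments.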
The delay characteristics of the pseudo output multiplexer 262 are matched to those of the output multiplexer 260 such that TxCK and TxDATA are aligned for use in a forwarded multiphase clock system.

In addition to the circuitry just described for aligning TxCK and TxDATA, integrated circuit 200 also includes loop circuitry for maintaining RxCK and TxCK in alignment. The loop circuit includes the phase detector 232, the PI control logic 210, the phase interpolator 228, the pseudo output multiplexer 262, and the driver 254. Phase detector 232 compares the phases of RxCK and TxCK and provides phase error information to PI control logic 210. The PI control logic 210 provides a phase control code to the phase interpolator 228, which then modifies the phase of MCKOut.

As described above, the data output circuit 270, or portions thereof, can be repeated multiple times in the integrated circuit 200. For example, there can be many different circuits to send output data. A clock signal is provided to each of these circuits by a clock tree 234, and each circuit may also have a slave delay line, a phase interpolator, and an output multiplexer. In some embodiments, some or all of the data output circuits may share some components. For example, adjacent output circuits can share all or part of a clock tree, and adjacent output circuits can share slave delay lines and phase interpolators. In embodiments having multiple data output circuits 270, master clock generator 220 can provide control signals to a plurality of delay lines 222, which can provide control signals to a plurality of phase interpolators 224 and 226.

Phase interpolators 224, 226, and 228 can be controlled in parallel using the phase interpolator control logic. The phase interpolator control logic operates in response to phase error information from phase detector 232 during operation.
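The loop formed by phase detector 232, PI control logic 210, and phase interpolator 228 can be modeled as a simple bang-bang feedback loop: each update, the detector reports the sign of the RxCK-to-TxCK error, and the control logic steps the interpolator code by one LSB. The interpolator resolution and the forward-path delay below are assumptions for illustration only.

```python
# Behavioral sketch of the deskew loop: step the PI code one LSB per
# update until TxCK aligns with RxCK. Step size and delay are assumed.

STEP_DEG = 360.0 / 64.0      # assumed phase resolution of the interpolator

def wrap(deg):
    """Wrap a phase difference into (-180, 180] degrees."""
    return (deg + 180.0) % 360.0 - 180.0

def deskew(path_delay_deg, max_updates=200):
    """Return the residual RxCK-to-TxCK skew after the loop settles."""
    code = 0
    for _ in range(max_updates):
        tx_phase = wrap(path_delay_deg + code * STEP_DEG)  # TxCK vs RxCK
        err = wrap(-tx_phase)            # phase detector: RxCK (0) - TxCK
        if abs(err) <= STEP_DEG / 2.0:   # within one LSB: considered locked
            break
        code += 1 if err > 0 else -1     # control logic bumps the PI code
    return wrap(path_delay_deg + code * STEP_DEG)

residual = deskew(97.0)
print(abs(residual) <= STEP_DEG / 2.0)   # True: skew driven below one LSB
```

A bang-bang loop of this kind converges to within one interpolator LSB of zero skew and then dithers around it, which matches the qualitative behavior described for the loop circuitry above.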
In response to the phase error information received from phase detector 232, PI control logic 210 affects the operation of each phase interpolator in parallel.

Integrated circuit 200 can be any type of integrated circuit. For example, integrated circuit 200 can be a memory device, controller, processor, or any other integrated circuit that can receive clock signals and transmit clock signals and data signals. Various functional blocks that are part of the integrated circuit are intentionally omitted from Figure 2 to make the description clearer. Although only one input clock signal, one output clock signal, and one output data signal are shown in Figure 2, this is not a limitation of the present invention. For example, multiple data signals can be deskewed relative to a single clock signal.

Figure 3 shows a timing diagram. The timing diagram of Figure 3 illustrates the operation of the circuit shown in Figure 2. The output of clock receiver 204, RxCKd, is the point at which the clock signal branches. Starting from RxCKd, one path goes to the slave DL 222 and the other path goes to the master clock generator 220. The input of the master clock generator 220 (MCKIn) is in phase with the input of the slave DL 222 (SCKIn) due to the routing match. Using the same delay line control voltage and PI control code, the output of the master clock generator 220 (MCKOut) and the output of the slave DL 222 (TxCK-0) are also in phase. Signal TxCK-0 is fed into the transmit data path; thus, after the clock-to-output delay (Tco), the output data (TxDATA) becomes active. At the same time, MCKOut passes through the pseudo output multiplexer 262 and the clock output driver with the same Tco delay and becomes the output clock (TxCK). TxCK is aligned with TxDATA due to delay matching. Through the PI control logic, the phase detector of the loop keeps TxCK aligned with RxCK, as shown in the timing diagram of Figure 3.

Figure 4 shows an integrated circuit with a clock deskew function, and Figure
5 shows a timing diagram illustrating the operation of the circuit of FIG. 4. Integrated circuit 400 includes an input clock pad 402, a receiver 404, and buffers and routings 406, 408, 410, and 412. The integrated circuit 400 also includes a phase/frequency detector (PFD) 414, a charge pump and loop filter 416, a voltage controlled oscillator (VCO) 418, and delay lines (DL) 420, 442, 444, and 446. The integrated circuit 400 also includes an equalized clock distribution network including buffers 430, 432, 434, 436, 438, and 440 and two pseudo buffers, as well as drivers 450, 454, and 458 and output pads 452, 456, and 460.

In operation, an input clock signal (RxCK) is received by the receiver 404 at the pad 402. Receiver 404 is marked "A" to show that it is intentionally matched to the other buffers/receivers marked "A". The output of receiver 404 is fanned out in integrated circuit 400 to provide a clock signal to each circuit that needs one. For example, buffer and routing 410 provides clocking to the remaining circuits shown in FIG. 4. Buffers and routings 406 and 408 are included in FIG. 4 to indicate that many other circuits within integrated circuit 400 can use the input clock.

The buffer and routing 410 provides a clock signal "pllref" to the PFD 414. PFD 414 is part of a loop circuit that causes pllref and pllfbk to be substantially locked in phase, as shown in FIG. 5. The loop circuit includes the PFD 414, the charge pump and loop filter 416, the VCO 418, buffers 430, 432, and 434, DL 420, buffers 422 and 424, and the buffer and routing 412. As shown in FIG. 4, the loop circuit operates as a phase locked loop (PLL). In some embodiments, the VCO 418 is replaced with a delay line and the loop circuit operates as a delay locked loop (DLL).

The loop circuit produces a clock signal that is driven into the clock distribution network by the buffers 430 and 432.
Thus, buffer 434 feeds back a version of the clock signal, and buffers 436, 438, and 440 provide versions of the clock signal used to directly drive outputs of the integrated circuit or to synchronize data being sent out of the integrated circuit. For example, buffer 454 can send an output clock signal (TxCK) out of the integrated circuit, and buffers 450 and 458 can send output data (TxDATA) out of the integrated circuit. Although only buffers are shown, a synchronization element such as a latch or flip-flop can be used to drive the output data out of the integrated circuit. Additionally, a multiplexer such as output multiplexer 260 (FIG. 2) can be used. The buffer 454 is marked "C" to show that it is intentionally matched to the other buffers marked "C".

In operation, RxCK and fbck are substantially phase matched: pllref and pllfbk match due to loop operation, and pllref and pllfbk are created from RxCK and fbck, respectively, by passing through matched circuits. The matching circuits include buffers/receivers 404 and 424 and buffers and routings 410 and 412. This timing is shown near the top of FIG. 5. In addition to fbck matching RxCK, fbck is also substantially phase matched to TxCK. Both fbck and TxCK are generated from VCO 418, and each passes through a substantially matched delay path. For example, TxCK passes through parallel buffers 430 and 432, buffer 438, DL 444, and buffer 454, while fbck passes through parallel buffers 430 and 432, buffer 434, DL 420, and buffer 422. Therefore, RxCK and TxCK are substantially phase matched. This timing is shown near the center of FIG. 5.

In addition to the phase matching of RxCK and TxCK, TxDATA is aligned with TxCK because TxCK is used to synchronize the TxDATA output of the integrated circuit. Although not specifically shown in FIG. 4, all of the delay lines can be controlled by a master DLL in the same manner as described with reference to FIG. 2.
Moreover, in some embodiments, the loop circuit shown in Figure 4 is replaced with a DLL, and the DLL provides a delay control word to each of the delay lines shown.

Integrated circuit 400 can be any type of integrated circuit. For example, integrated circuit 400 can be a memory device, controller, processor, or any other integrated circuit that can receive clock signals and transmit clock signals and data signals. The various functional blocks that are part of the integrated circuit are intentionally omitted from FIG. 4 to make the description clearer. Although only one input clock signal, one output clock signal, and one output data signal are shown in FIG. 4, this is not a limitation of the present invention. For example, multiple data signals can be de-skewed relative to a single clock signal.

Figure 6 shows a flow chart in accordance with various embodiments of the present invention. In some embodiments, method 600 can be used to perform clock de-skew. In some embodiments, method 600 or a portion thereof is performed using input/output (I/O) circuitry in an integrated circuit, embodiments of which are shown in the various figures. In other embodiments, method 600 is performed by a controller or memory device. Method 600 is not limited to a particular type of device for performing the method. The various actions in method 600 may be performed in the order presented or in a different order. Moreover, some of the actions listed in FIG. 6 may be omitted from method 600 in some embodiments.

Method 600 begins at 610 where an input clock signal is received. At 620, the input clock signal is provided to a clock generator such as a phase locked loop or a delay locked loop. For example, the actions of 610 and 620 may correspond to the integrated circuit 200 (FIG. 2) receiving RxCK and providing a clock to the master clock generator 220. At 630, phase interpolation is performed.
The clock generator provides a plurality of clock signals, and phase interpolation is performed between the plurality of clock signals to generate an output clock signal. Returning to Figure 2, the action of 630 may correspond to phase interpolator 228 generating an output clock signal. At 640, the input clock signal is phase locked to the output clock signal by modifying the interpolation performed at 630. For example, the loop containing phase detector 232 locks TxCK in phase with RxCK.

Figure 7 illustrates an electronic system in accordance with various embodiments of the present invention. The electronic system 700 includes a processor 710, a memory controller 720, a memory 730, an input/output (I/O) controller 740, a radio frequency (RF) circuit 750, and an antenna 760. In operation, system 700 transmits and receives signals using antenna 760, and these signals are processed by the various elements shown in FIG. 7. Antenna 760 can be a directional antenna or an omnidirectional antenna. As used herein, the term omnidirectional antenna refers to an antenna having a substantially uniform pattern in at least one plane. For example, in some embodiments, antenna 760 can be an omnidirectional antenna such as a dipole antenna or a quarter wave antenna. Also, for example, in some embodiments, antenna 760 can be a directional antenna such as a parabolic antenna, a patch antenna, or a Yagi antenna. In some embodiments, antenna 760 can include multiple physical antennas.

Radio frequency circuit 750 is in communication with antenna 760 and I/O controller 740. In some embodiments, RF circuit 750 includes a physical interface (PHY) corresponding to a communication protocol. For example, RF circuit 750 can include a modulator, a demodulator, a mixer, a frequency synthesizer, a low noise amplifier, a power amplifier, and the like. In some embodiments, RF circuit 750 can include a heterodyne receiver, and in other embodiments, RF circuit 750 can include a direct conversion receiver.
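The actions of method 600 (receive a clock, feed a clock generator, interpolate among its phases, lock by feedback) can be modeled as a coarse/fine search: the clock generator supplies a handful of coarse phases, the interpolator blends between two adjacent ones, and a feedback comparison walks the setting until the output locks to the input. The phase counts, step sizes, and names below are assumptions for illustration, not the circuit of FIG. 2:

```python
# Toy model of method 600. Phases are fractions of a period in [0, 1).
N_PHASES = 8        # coarse phases from the PLL/DLL (action 620)
BLEND_STEPS = 16    # fine interpolation steps between adjacent phases (630)

def output_phase(sel, blend):
    """Interpolate between coarse phase `sel` and coarse phase `sel + 1`."""
    lo = sel / N_PHASES
    hi = (sel + 1) / N_PHASES
    return (lo + (hi - lo) * blend / BLEND_STEPS) % 1.0

def lock(target_phase):
    """Action 640: walk the (sel, blend) setting until the output locks."""
    sel, blend = 0, 0
    for _ in range(N_PHASES * BLEND_STEPS):
        err = (target_phase - output_phase(sel, blend) + 0.5) % 1.0 - 0.5
        if abs(err) < 1.0 / (2 * N_PHASES * BLEND_STEPS):
            break  # within half a fine step: locked
        blend += 1 if err > 0 else -1
        # Carry fine-step overflow/underflow into the coarse selection.
        sel, blend = (sel + blend // BLEND_STEPS) % N_PHASES, blend % BLEND_STEPS
    return sel, blend

sel, blend = lock(target_phase=0.40)
print(sel, blend)  # (3, 3): phase 3/8 + 3/128 = 0.3984, the nearest setting
```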
In some embodiments, RF circuit 750 can include multiple receivers. For example, in an embodiment with multiple antennas 760, each antenna can be coupled to a respective receiver. In operation, RF circuit 750 receives communication signals from antenna 760 and provides analog or digital signals to I/O controller 740. Additionally, I/O controller 740 can provide signals to RF circuit 750, which operates on the signals and then transmits them to antenna 760.

Processor 710 can be any type of processing device. For example, processor 710 can be a microprocessor, microcontroller, or the like. Moreover, processor 710 can include any number of processing cores or can include any number of separate processors.

Memory controller 720 provides a communication path between processor 710 and the other devices shown in FIG. 7. In some embodiments, memory controller 720 is part of a hub device that also provides other functionality. As shown in FIG. 7, memory controller 720 is coupled to processor 710, I/O controller 740, and memory 730. Memory controller 720 can communicate with memory 730 using a forwarded clock on bus 722. For example, memory controller 720 can transmit a clock signal and a data signal to memory 730 using any of the clock de-skew embodiments described herein.

Memory 730 can include a plurality of memory devices, and each of the plurality of memory devices can include the circuitry described with respect to FIG. 2 or FIG. 4. Memory 730 can be any type of memory technology. For example, memory 730 can be random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), non-volatile memory such as flash memory, or any other type of memory. Memory 730 can represent a single memory device or multiple memory devices on one or more memory modules. Memory controller 720 provides data to memory 730 via bus 722 and receives data from memory 730 in response to a read request.
Commands and/or addresses may be provided to memory 730 via wires separate from bus 722 or via bus 722 itself. Memory controller 720 can receive data to be stored in memory 730 from processor 710 or from another source, and can provide data it receives from memory 730 to processor 710 or to another destination. Bus 722 can be a bidirectional bus or a unidirectional bus, and can include a plurality of parallel wires. The signals can be differential or single-ended. In some embodiments, bus 722 operates using a forwarded multiphase clocking scheme.

Memory controller 720 can also be coupled to I/O controller 740 and provide a communication path between processor 710 and I/O controller 740. I/O controller 740 includes circuitry for communicating with I/O devices, such as a serial port, a parallel port, a universal serial bus (USB) port, and the like. As shown in FIG. 7, I/O controller 740 provides a communication path to RF circuit 750. Memory controller 720 and I/O controller 740 can include any of the clock de-skew embodiments described herein. For example, memory controller 720 or I/O controller 740 can include the circuitry described with respect to FIG. 2 or FIG. 4.

Figure 8 illustrates an electronic system in accordance with various embodiments of the present invention. The electronic system 800 includes a memory 730, an I/O controller 740, an RF circuit 750, and an antenna 760, all of which are described above with reference to FIG. 7. Electronic system 800 also includes a processor 810 and a memory controller 820. As shown in FIG. 8, memory controller 820 is included in processor 810. Processor 810 can be any type of processor as described above with reference to processor 710 (FIG. 7). Processor 810 differs from processor 710 in that processor 810 includes a memory controller 820, whereas processor 710 does not.
Memory controller 820 can include any of the clock de-skew embodiments described herein.

Example systems represented by Figures 7 and 8 include desktop computers, laptops, cellular telephones, personal digital assistants, wireless local area network interfaces, and any other suitable systems. Many other systems can also implement clock de-skew. For example, the clock de-skew embodiments described herein can be used in a server computer, a bridge or router, or any other system with or without an antenna.

Although the present invention has been described in connection with specific embodiments thereof, it is understood that modifications and variations may be made without departing from the spirit and scope of the invention. Such modifications and variations are considered to be within the scope of the invention and the appended claims.
Methods and apparatus for performing matrix transforms within a memory fabric. Various embodiments of the present disclosure are directed to converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein. Exemplary embodiments described herein perform matrix transformations within a memory device that includes a matrix fabric and a matrix multiplication unit (MMU). In one exemplary embodiment, the matrix fabric uses a "crossbar" construction of resistive elements. Each resistive element stores a level of impedance that represents the corresponding matrix coefficient value. The crossbar connectivity can be driven with an electrical signal representing the input vector as an analog voltage. The resulting signals can be converted from analog voltages to digital values by an MMU to yield a vector-matrix product. In some cases, the MMU may additionally perform various other logical operations within the digital domain.
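As a rough software analogy of the crossbar just described: each column current is the Kirchhoff sum of input voltages weighted by per-cell conductances, which is exactly a vector-matrix product, and the MMU's analog-to-digital step can be modeled as quantization. The scaling scheme and 8-bit resolution below are assumptions for illustration:

```python
# Software model of a resistive crossbar computing a vector-matrix
# product, followed by a simple quantizing "ADC". The full-scale and
# 8-bit choices are illustrative assumptions.
import numpy as np

def crossbar_vmm(matrix, vector, adc_bits=8, full_scale=None):
    """Analog vector-matrix product followed by ADC quantization."""
    G = np.asarray(matrix, dtype=float)   # conductances = matrix coefficients
    v = np.asarray(vector, dtype=float)   # input voltages on the rows
    i_out = v @ G                         # column currents: Kirchhoff summation
    if full_scale is None:
        full_scale = np.abs(i_out).max() or 1.0
    levels = 2 ** (adc_bits - 1) - 1      # signed quantizer levels
    return np.round(i_out / full_scale * levels) / levels * full_scale

M = [[1.0, 0.0],
     [0.5, 2.0]]
print(crossbar_vmm(M, [2.0, 1.0]))  # close to [2.5, 2.0] after quantization
```

The analog fabric performs the multiply-accumulate "for free" in the physics; the digital precision of the result is set by the sense/conversion stage, which is why the disclosure configures the memory sense component per opcode.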
WHAT IS CLAIMED IS:

1. A method to perform matrix transformation operations, comprising: receiving a matrix transformation opcode; configuring an array of memory cells of a memory into a matrix structure, based on the matrix transformation opcode; configuring a memory sense component based on the matrix transformation opcode; and responsive to reading a matrix transformation operand into the matrix structure, writing a matrix transformation result from the memory sense component.

2. The method of Claim 1, wherein the configuring the array of memory cells comprises connecting a plurality of word lines and a plurality of bit lines corresponding to a row dimension and a column dimension associated with the matrix structure.

3. The method of Claim 2, further comprising determining the row dimension and the column dimension from the matrix transformation opcode.

4. The method of Claim 1, wherein the configuring the array of memory cells comprises setting one or more analog values of the matrix structure based on a look-up-table (LUT) data structure.

5. The method of Claim 4, further comprising identifying an entry from the LUT data structure based on the matrix transformation opcode.

6. The method of Claim 1, wherein the configuring the memory sense component enables matrix transformation results having a radix greater than two (2).

7.
A non-transitory computer readable medium, comprising: an array of memory cells, where each memory cell of the array of memory cells is configured to store a digital value as an analog value in an analog medium; a memory sense component, where the memory sense component is configured to read the analog value of a first memory cell as a first digital value; and logic configured to: receive a matrix transformation opcode; operate the array of memory cells as a matrix multiplication unit (MMU) based on the matrix transformation opcode; wherein each memory cell of the MMU modifies the analog value in the analog medium in accordance with the matrix transformation opcode and a matrix transformation operand; configure the memory sense component to convert the analog value of the first memory cell into a second digital value in accordance with the matrix transformation opcode and the matrix transformation operand; and responsive to reading the matrix transformation operand into the MMU, write a matrix transformation result based on the second digital value.

8. The non-transitory computer readable medium of Claim 7, wherein the matrix transformation opcode indicates a size of the MMU.

9. The non-transitory computer readable medium of Claim 8, wherein the matrix transformation opcode corresponds to a frequency domain transform operation.

10. The non-transitory computer readable medium of Claim 9, wherein the frequency domain transform operation spans at least one other MMU.

11. The non-transitory computer readable medium of Claim 7, wherein the matrix transformation opcode identifies one or more analog values corresponding to one or more memory cells.

12. The non-transitory computer readable medium of Claim 11, wherein the one or more analog values corresponding to the one or more memory cells are stored within a look-up-table (LUT) data structure.

13.
The non-transitory computer readable medium of Claim 7, wherein each memory cell of the MMU comprises resistive random access memory (ReRAM) cells; and wherein each memory cell of the MMU multiplies the analog value in the analog medium in accordance with the matrix transformation opcode and the matrix transformation operand.

14. The non-transitory computer readable medium of Claim 13, wherein each memory cell of the MMU further accumulates the analog value in the analog medium with a previous analog value.

15. The non-transitory computer readable medium of Claim 7, wherein the first digital value is characterized by a first radix of two (2); and wherein the second digital value is characterized by a second radix greater than two (2).

16. A device, comprising: a processor coupled to a non-transitory computer readable medium; wherein the non-transitory computer readable medium comprises one or more instructions which, when executed by the processor, cause the processor to: write a matrix transformation opcode and a matrix transformation operand to the non-transitory computer readable medium; wherein the matrix transformation opcode causes the non-transitory computer readable medium to operate an array of memory cells as a matrix structure; wherein the matrix transformation operand modifies one or more analog values of the matrix structure; and read a matrix transformation result from the matrix structure.

17. The device of Claim 16, wherein the non-transitory computer readable medium further comprises one or more instructions which, when executed by the processor, cause the processor to: capture image data comprising one or more captured color values; and wherein the matrix transformation operand comprises the one or more captured color values and the matrix transformation result comprises one or more shifted color values.

18.
The device of Claim 16, wherein the non-transitory computer readable medium further comprises one or more instructions which, when executed by the processor, cause the processor to: receive video data comprising one or more image blocks; wherein the matrix transformation operand comprises the one or more image blocks and the matrix transformation result comprises one or more frequency domain image coefficients; and wherein the one or more analog values of the matrix structure accumulate the one or more frequency domain image coefficients from the video data over time.

19. The device of Claim 16, wherein the matrix transformation opcode causes the non-transitory computer readable medium to operate another array of memory cells as another matrix structure; and wherein the matrix transformation result associated with the matrix structure and another matrix transformation result associated with the another matrix structure are logically combined.

20. The device of Claim 16, wherein the one or more analog values of the matrix structure are stored within a look-up-table (LUT) data structure.
METHODS AND APPARATUS FOR PERFORMING MATRIX TRANSFORMATIONS WITHIN A MEMORY ARRAY

This application claims priority to U.S. patent application Serial No. 16/403,245 entitled “METHODS AND APPARATUS FOR PERFORMING MATRIX TRANSFORMATIONS WITHIN A MEMORY ARRAY”, filed on May 3, 2019 and incorporated herein by reference in its entirety.

Background

1. Technological Field

The following relates generally to the field of data processing and device architectures. Specifically, a processor-memory architecture that converts a memory array into a matrix fabric for matrix transformations, and performs matrix operations therein, is disclosed.

2. Description of Related Technology

Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming different states of a memory device. For example, binary devices have two states, often denoted by a logical “1” or a logical “0.” To access the stored information, the memory device may read (or sense) the stored state; to store information, the memory device may write (or program) the state. So-called volatile memory devices may require power to maintain this stored information, while non-volatile memory devices may persistently store information even after the memory device itself has, for example, been power cycled. Different memory fabrication methods and constructions enable different capabilities. For example, dynamic random access memory (DRAM) offers high density volatile storage inexpensively. Incipient research is directed to resistive random access memory (ReRAM), which promises non-volatile performance similar to DRAM.

Processor devices are commonly used in conjunction with memory devices to perform a myriad of different tasks and functionality.
During operation, a processor executes computer readable instructions (commonly referred to as “software”) from memory. The computer readable instructions define basic arithmetic, logic, controlling, and input/output (I/O) operations, etc. As is well known in the computing arts, relatively basic computer readable instructions can perform a variety of complex behaviors when sequentially combined. Processors tend to emphasize circuit constructions and fabrication technologies that differ from memory devices. For example, processing performance is generally related to clock rates, thus most processor fabrication methods and constructions emphasize very high rate transistor switching structures, etc.

Over time, both processors and memory have increased in speed and power consumption. Typically, these improvements are a result of shrinking device sizes, because electrical signaling is physically limited by the dielectric of the transmission medium and distance. As previously alluded to, most processors and memories are manufactured with different fabrication materials and techniques. Consequently, even though processors and memory continue to improve, the physical interface between them remains a “bottleneck” to overall system performance. More directly, no matter how fast a processor or memory can work in isolation, the combined system of processor and memory is performance limited to the rate of transfer allowed by the interface. This phenomenon has several common names, e.g., the “processor-memory wall”, the “von Neumann Bottleneck Effect”, etc.

Summary

The present disclosure provides, inter alia, methods and apparatus for converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein.

In one aspect of the present disclosure, a non-transitory computer readable medium is disclosed.
In one exemplary embodiment, the non-transitory computer readable medium includes: an array of memory cells, where each memory cell of the array of memory cells is configured to store a digital value as an analog value in an analog medium; a memory sense component, where the memory sense component is configured to read the analog value of a first memory cell as a first digital value; and logic. In one exemplary embodiment, the logic is configured to: receive a matrix transformation opcode; operate the array of memory cells as a matrix multiplication unit (MMU) based on the matrix transformation opcode; wherein each memory cell of the MMU modifies the analog value in the analog medium in accordance with the matrix transformation opcode and a matrix transformation operand; configure the memory sense component to convert the analog value of the first memory cell into a second digital value in accordance with the matrix transformation opcode and the matrix transformation operand; and responsive to reading the matrix transformation operand into the MMU, write a matrix transformation result based on the second digital value.

In one variant, the matrix transformation opcode indicates a size of the MMU. In one such variant, the matrix transformation opcode corresponds to a frequency domain transform operation. In one exemplary variant, the frequency domain transform operation spans at least one other MMU.

In one variant, the matrix transformation opcode identifies one or more analog values corresponding to one or more memory cells.
In one such variant, the one or more analog values corresponding to the one or more memory cells are stored within a look-up-table (LUT) data structure.

In one variant, each memory cell of the MMU comprises resistive random access memory (ReRAM) cells; and each memory cell of the MMU multiplies the analog value in the analog medium in accordance with the matrix transformation opcode and the matrix transformation operand.

In one variant, each memory cell of the MMU further accumulates the analog value in the analog medium with a previous analog value.

In one variant, the first digital value is characterized by a first radix of two (2); and the second digital value is characterized by a second radix greater than two (2).

In one aspect of the present disclosure, a device is disclosed. In one embodiment, the device includes a processor coupled to a non-transitory computer readable medium; where the non-transitory computer readable medium includes one or more instructions which, when executed by the processor, cause the processor to: write a matrix transformation opcode and a matrix transformation operand to the non-transitory computer readable medium; wherein the matrix transformation opcode causes the non-transitory computer readable medium to operate an array of memory cells as a matrix structure; wherein the matrix transformation operand modifies one or more analog values of the matrix structure; and read a matrix transformation result from the matrix structure.
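The device flow just described (the processor writes an opcode and operand, the medium configures an array of cells as a matrix structure, and the processor reads back the result) can be sketched in software. The opcode names, LUT entries, and float-valued cells below are invented placeholders; a real device would program analog impedances from the LUT rather than store floats:

```python
# Software sketch of opcode-driven matrix-fabric configuration. The LUT
# maps a hypothetical opcode to a matrix dimension and coefficient set.
import numpy as np

LUT = {
    "IDENT2": ((2, 2), np.eye(2)),
    "SWAP2":  ((2, 2), np.array([[0.0, 1.0], [1.0, 0.0]])),
}

class MatrixFabric:
    def configure(self, opcode):
        (rows, cols), values = LUT[opcode]        # dimensions + values from the LUT
        self.matrix = values.reshape(rows, cols)  # "program" the cell values
        return self

    def transform(self, operand):
        # Reading the operand into the structure yields the result at the
        # sense component, modeled here as a matrix-vector product.
        return np.asarray(operand) @ self.matrix

fabric = MatrixFabric().configure("SWAP2")
print(fabric.transform([3.0, 7.0]))  # [7. 3.]
```

The key point the sketch captures is that the opcode alone carries the configuration: the processor never ships the coefficient matrix across the bus, which is the bottleneck the disclosure is attacking.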
In one variant, the non-transitory computer readable medium further comprises one or more instructions which, when executed by the processor, cause the processor to: capture image data comprising one or more captured color values; and wherein the matrix transformation operand comprises the one or more captured color values and the matrix transformation result comprises one or more shifted color values.

In one variant, the non-transitory computer readable medium further comprises one or more instructions which, when executed by the processor, cause the processor to: receive video data comprising one or more image blocks; wherein the matrix transformation operand comprises the one or more image blocks and the matrix transformation result comprises one or more frequency domain image coefficients; and wherein the one or more analog values of the matrix structure accumulate the one or more frequency domain image coefficients from the video data over time.

In one variant, the matrix transformation opcode causes the non-transitory computer readable medium to operate another array of memory cells as another matrix structure; and the matrix transformation result associated with the matrix structure and another matrix transformation result associated with the another matrix structure are logically combined.

In one variant, the one or more analog values of the matrix structure are stored within a look-up-table (LUT) data structure.

In one aspect of the present disclosure, a method to perform matrix transformation operations is disclosed.
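The video variant above (image blocks in, frequency-domain coefficients out) is a classic matrix transformation: the 2-D DCT-II of a block B is D @ B @ D.T for the standard orthonormal DCT basis matrix D, which is exactly the kind of product a matrix fabric could carry out. The 4x4 block size and flat test block below are arbitrary illustrative choices:

```python
# Standard 2-D DCT-II expressed as plain matrix products, the operation
# a matrix fabric would perform for the video variant.
import math
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    D = np.zeros((n, n))
    for k in range(n):
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        for i in range(n):
            D[k, i] = scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
    return D

D = dct_matrix(4)
block = np.full((4, 4), 10.0)   # a flat (constant) image block
coeffs = D @ block @ D.T        # frequency-domain coefficients
print(round(coeffs[0, 0], 6))   # 40.0: all energy lands in the DC term
```

A flat block concentrates all its energy in the DC coefficient, with every other coefficient essentially zero, which is why frequency-domain representations compress smooth image regions so well.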
In one embodiment, the method includes: receiving a matrix transformation opcode; configuring an array of memory cells of a memory into a matrix structure, based on the matrix transformation opcode; configuring a memory sense component based on the matrix transformation opcode; and responsive to reading a matrix transformation operand into the matrix structure, writing a matrix transformation result from the memory sense component.

In one variant, configuring the array of memory cells includes connecting a plurality of word lines and a plurality of bit lines corresponding to a row dimension and a column dimension associated with the matrix structure. In one variant, the method also includes determining the row dimension and the column dimension from the matrix transformation opcode.

In one variant, configuring the array of memory cells includes setting one or more analog values of the matrix structure based on a look-up-table (LUT) data structure. In one variant, the method includes identifying an entry from the LUT data structure based on the matrix transformation opcode.

In one variant, configuring the memory sense component enables matrix transformation results having a radix greater than two (2).

In one aspect, an apparatus configured to configure a memory device into a matrix fabric is disclosed. In one embodiment, the apparatus includes: a memory; a processor configured to access the memory; and pre-processor logic configured to allocate one or more memory portions for use as a matrix fabric.

In another aspect of the disclosure, a computerized image processing device apparatus configured to dynamically configure a memory into a matrix fabric is disclosed.
In one embodiment, the computerized image processing device includes: a camera interface; digital processor apparatus in data communication with the camera interface; and a memory in data communication with the digital processor apparatus and including at least one computer program.

In another aspect of the disclosure, a computerized video processing device apparatus configured to dynamically configure a memory into a matrix fabric is disclosed. In one embodiment, the computerized video processing device includes: a camera interface; digital processor apparatus in data communication with the camera interface; and a memory in data communication with the digital processor apparatus and including at least one computer program.

In another aspect of the disclosure, a computerized wireless access node apparatus configured to dynamically configure a memory into a matrix fabric is disclosed. In one embodiment, the computerized wireless access node includes: a wireless interface configured to transmit and receive RF waveforms in a spectrum portion; digital processor apparatus in data communication with the wireless interface; and a memory in data communication with the digital processor apparatus and including at least one computer program.

In an additional aspect of the disclosure, a computer readable apparatus is described. In one embodiment, the apparatus includes a storage medium configured to store one or more computer programs within or in conjunction with characterized memory. In one embodiment, the apparatus includes a program memory, HDD, or SSD on a computerized controller device. In another embodiment, the apparatus includes a program memory, HDD, or SSD on a computerized access node.

These and other aspects shall become apparent when considered in light of the disclosure provided herein.

Brief Description of the Drawings

FIG. 1A is a diagram of a processor-memory architecture and a graphical depiction of an associated matrix operation.

FIG.
1B is a diagram of a processor-PIM architecture and a graphical depiction of an associated matrix operation.

FIG. 2 is a logical block diagram of one exemplary implementation of a memory device in accordance with various principles of the present disclosure.

FIG. 3 is an exemplary side-by-side illustration of a first memory device configuration and a second memory device configuration.

FIG. 4 is a graphical depiction of a matrix operation performed in accordance with the principles of the present disclosure.

FIG. 5A is a logical block diagram of one exemplary implementation of a processor-memory architecture.

FIG. 5B is a logical flow diagram of one exemplary set of matrix operations, performed in accordance with the principles of the present disclosure.

FIG. 5C is an alternate logical flow diagram of one exemplary set of matrix operations, performed in accordance with the principles of the present disclosure.

FIG. 6 is a block diagram of one exemplary method of converting a memory array into a matrix fabric and performing matrix operations therein.

All figures © Copyright 2019-2020 Micron Technology, Inc. All rights reserved.

Detailed Description

Reference is now made to the drawings wherein like numerals refer to like parts throughout.

As used herein, the term “application” (or “app”) refers generally and without limitation to a unit of executable software that implements a certain functionality or theme. The themes of applications vary broadly across any number of disciplines and functions (such as on-demand content management, e-commerce transactions, brokerage transactions, home entertainment, calculators, etc.), and one application may have more than one theme.
The unit of executable software generally runs in a predetermined environment; for example, the unit could include a downloadable application that runs within an operating system environment.

As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.), Register Transfer Language (RTL), VHSIC (Very High Speed Integrated Circuit) Hardware Description Language (VHDL), Verilog, and the like.

As used herein, the term “decentralized” or “distributed” refers without limitation to a configuration or network architecture involving multiple computerized devices that are able to perform data communication with one another, rather than requiring a given device to communicate through a designated (e.g., central) network entity, such as a server device. For example, a decentralized network enables direct peer-to-peer data communication among multiple UEs (e.g., wireless user devices) making up the network.

As used herein, the term “distributed unit” (DU) refers without limitation to a distributed logical node within a wireless network infrastructure. For example, a DU might be embodied as a next-generation Node B (gNB) DU (gNB-DU) that is controlled by a gNB CU described above. One gNB-DU may support one or multiple cells; a given cell is supported by only one gNB-DU.

As used herein, the terms “Internet” and “internet” are used interchangeably to refer to inter-networks including, without limitation, the Internet.
Other common examples include but are not limited to: a network of external servers, “cloud” entities (such as memory or storage not local to a device, storage generally accessible at any time via a network connection, and the like), service nodes, access points, controller devices, client devices, etc. 5G-servicing core networks and network components (e.g., DU, CU, gNB, small cells or femto cells, 5G-capable external nodes) residing in the backhaul, fronthaul, crosshaul, or an “edge” thereof proximate to residences, businesses and other occupied areas may be included in “the Internet.”

As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, random access memory (RAM), pseudostatic RAM (PSRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM) including double data rate (DDR) class memory and graphics DDR (GDDR) and variants thereof, ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (ReRAM), read-only memory (ROM), programmable ROM (PROM), electrically erasable PROM (EEPROM or E2PROM), DDR/2 SDRAM, EDO/FPMS, reduced-latency DRAM (RLDRAM), static RAM (SRAM), “flash” memory (e.g., NAND/NOR), phase change memory (PCM), 3-dimensional cross-point memory (3D Xpoint), and magnetoresistive RAM (MRAM), such as spin torque transfer RAM (STT RAM).

As used herein, the terms “microprocessor” and “processor” or “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose processors (GPP), microprocessors, gate arrays (e.g., FPGAs), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs).
Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.

As used herein, the term “server” refers to any computerized component, system or entity regardless of form which is adapted to provide data, files, applications, content, or other services to one or more other devices or entities on a computer network.

As used herein, the term “storage” refers without limitation to computer hard drives (e.g., hard disk drives (HDD), solid state drives (SSD)), Flash drives, DVR devices, memory, RAID devices or arrays, optical media (e.g., CD-ROMs, Laserdiscs, Blu-Ray, etc.), or any other devices or media capable of storing content or other information, including semiconductor devices (e.g., those described herein as memory) capable of maintaining data in the absence of a power source. Common examples of memory devices that are used for storage include, without limitation: ReRAM, DRAM (e.g., SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, DDR4 SDRAM, GDDR, RLDRAM, LPDRAM, etc.), DRAM modules (e.g., RDIMM, VLP RDIMM, UDIMM, VLP UDIMM, SODIMM, SORDIMM, Mini-DIMM, VLP Mini-DIMM, LRDIMM, NVDIMM, etc.), managed NAND, NAND Flash (e.g., SLC NAND, MLC NAND, TLC NAND, Serial NAND, 3D NAND, etc.), NOR Flash (e.g., Parallel NOR, Serial NOR, etc.), multichip packages, hybrid memory cube, memory cards, solid state storage (SSS), and any number of other memory devices.

As used herein, the term “Wi-Fi” refers to, without limitation and as applicable, any of the variants of IEEE Std.
802.11 or related standards including 802.11 a/b/g/n/s/v/ac or 802.11-2012/2013, 802.11-2016, as well as Wi-Fi Direct (including inter alia, the “Wi-Fi Peer-to-Peer (P2P) Specification”, incorporated herein by reference in its entirety).

As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth/BLE, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CBRS, CDMA (e.g., IS-95A, WCDMA, etc.), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, Zigbee®, Z-wave, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/LTE-U/LTE-LAA, analog cellular, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, and infrared (i.e., IrDA).

Overview

The aforementioned “processor-memory wall” performance limitations can be egregious where a processor-memory architecture repeats similar operations over a large data set. Under such circumstances, the processor-memory architecture has to individually transfer, manipulate, and store each element of the data set, iteratively. For example, a matrix multiplication of 4x4 (sixteen (16) elements) takes four (4) times as long as a matrix multiplication of 2x2 (four (4) elements). In other words, the cost of matrix operations scales with the total number of matrix elements, i.e., with the square of the matrix dimension.

Various embodiments of the present disclosure are directed to converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein. Matrix transformations are commonly used in many different applications and can take a disproportionate amount of processing and/or memory bandwidth. For example, many image signal processing (ISP) techniques commonly use matrix transformations for e.g., color interpolation, white balance, color correction, color conversion, etc. Video compression uses e.g., the discrete cosine transform (DCT) to identify video image data that can be removed with minimum fidelity loss.
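As a concrete illustration of the per-pixel matrix work mentioned above, color conversion applies a fixed 3x3 matrix to every pixel. The sketch below uses approximate BT.601-style RGB-to-YCbCr coefficients purely for illustration; the disclosure does not prescribe any particular color matrix, and the function names are the author's own.

```python
import numpy as np

# Color conversion as a vector-matrix product: each RGB pixel is
# multiplied by a 3x3 matrix (approximate BT.601 RGB -> YCbCr
# coefficients, shown here for illustration only).
RGB_TO_YCBCR = np.array([
    [0.299,  -0.168736,  0.5],
    [0.587,  -0.331264, -0.418688],
    [0.114,   0.5,      -0.081312],
])

def convert_pixel(rgb):
    # One vector-matrix product per pixel; a frame requires millions of
    # such products, which is why offloading them to a matrix fabric
    # is attractive.
    return rgb @ RGB_TO_YCBCR

# A white pixel maps to full luma and zero chroma offsets.
y, cb, cr = convert_pixel(np.array([255.0, 255.0, 255.0]))
```

Because the matrix is fixed for a given standard, its coefficients are a natural candidate for the "structurally defined" treatment described later in this disclosure.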
Many communication technologies employ fast Fourier transforms (FFTs) and matrix multiplication for beamforming and/or massive multiple input multiple output (MIMO) channel processing.

Exemplary embodiments described herein perform matrix transformations within a memory device that includes a matrix fabric and matrix multiplication unit (MMU). In one exemplary embodiment, the matrix fabric uses a “crossbar” construction of resistive elements. Each resistive element stores a level of impedance that represents the corresponding matrix coefficient value. The crossbar connectivity can be driven with an electrical signal representing the input vector as an analog voltage. The resulting signals can be converted from analog voltages to digital values by an MMU to yield a vector-matrix product. In some cases, the MMU may additionally perform various other logical operations within the digital domain.

Unlike existing solutions that iterate through each element of the matrix to calculate the element value, the crossbar matrix fabric described hereinafter computes multiple elements of the matrix “atomically”, i.e., in a single processing cycle. For example, at least a portion of a vector-matrix product may be calculated in parallel. The “atomicity” of matrix fabric based computations yields significant processing improvements over iterative alternatives. In particular, while the cost of iterative techniques grows as a function of matrix size, atomic matrix fabric computations are independent of matrix dimensions. In other words, an NxN vector-matrix product can be completed in a single atomic instruction.

Various embodiments of the present disclosure internally derive and/or use matrix coefficient values to further minimize interface transactions.
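The crossbar computation described above can be modeled numerically. In this sketch (a behavioral model, not the analog circuit), the programmed conductances play the role of matrix coefficients, the row drive voltages play the role of the input vector, and each column's summed current corresponds to one output element; the names are illustrative.

```python
import numpy as np

# Behavioral model of the crossbar: a fabric stores matrix coefficients
# as conductances G[i][j]; driving row i with voltage v[i] superimposes
# currents on column j, so each column reads out sum_i v[i] * G[i][j].
# Every column is sensed concurrently -- the entire vector-matrix
# product completes in a single access cycle.

def crossbar_product(voltages, conductances):
    # All columns "compute" at once; there is no per-element iteration.
    return voltages @ conductances

G = np.array([[0.1, 0.2],
              [0.3, 0.4]])      # programmed impedance states
v = np.array([1.0, 2.0])        # DAC-driven row voltages
b = crossbar_product(v, G)      # analog summation, digitized by the MMU
```

Note that the single matrix-multiply expression above stands in for the analog summation; in the physical fabric there is no loop at all, which is what makes the operation atomic.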
As described in greater detail herein, many useful matrix transformations may be characterized by “structurally defined dimensions” and performed with “structurally defined coefficients.” Structural definition refers to those aspects of a matrix computation that are defined for a specific matrix structure (e.g., the rank and/or size of the matrix); in other words, the matrix coefficients can be inferred from the matrix structure and need not be explicitly provided via the processor-memory interface. For example, as described in greater detail hereinafter, the various coefficients for mathematical transforms (such as the “twiddle factors” for the fast Fourier transform (FFT)) are a function of the matrix size. Similarly, ISP filtering and/or massive MIMO channel coding techniques may use e.g., predefined matrixes and/or codebooks of matrixes having known structures and weighting.

As a brief aside, practical limitations on component manufacture limit the capabilities of each element within an individual memory device. For example, most memory arrays are only designed to discern between two (2) states (logical “1”, logical “0”). While existing memory sense components may be extended to discern higher levels of precision (e.g., four (4) states, eight (8) states, etc.), increasing the precision of memory sense components may be impractical to support the precision required for large transforms typically used in e.g., video compression, mathematical transforms, etc.

To these ends, various embodiments of the present disclosure logically combine one or more matrix fabrics and/or MMUs to provide greater degrees of precision and/or processing sophistication than would otherwise be possible. In one such embodiment, a first matrix fabric and/or MMU may be used to calculate a positive vector-matrix product and a second matrix fabric and/or MMU may be used to calculate a negative vector-matrix product.
The positive and negative vector-matrix products can be summed to determine the net vector-matrix product. In another such embodiment, multiple simple matrix transformations can be used to implement a larger matrix transformation. For example, the first stage of FFT processing for an FFT of size N may be decomposed into M FFTs of size N/M. Thus, the first stage of a 64-point FFT can be decomposed into thirty-two (32) 2-point FFTs, sixteen (16) 4-point FFTs, and/or eight (8) 8-point FFTs, depending on a variety of factors (e.g., precision, speed, cost, power consumption, etc.). Handling FFT butterfly transformations in matrix fabric can also be further sequenced or parallelized in accordance with any number of other design considerations. Other examples of logical matrix operations can be substituted with equivalent success (e.g., decomposition, common matrix multiplication, etc.) given the contents of the present disclosure.

Certain applications can save a significant amount of power by turning off system components when not in use. For example, video compression may benefit from “sleep” during video blanking intervals (when no video data is active), etc. However, the sleep procedure often requires a processor and/or memory to shuttle data from operational volatile memory to non-volatile storage memory such that the data is not lost while powered down. Wake-up procedures are also needed to retrieve the stored information from the non-volatile storage memory. Shuttling data back and forth between memories is an inefficient use of processor-memory bandwidth. Consequently, various embodiments disclosed herein leverage the “non-volatile” nature of the matrix fabric. In such embodiments, the matrix fabric can retain its matrix coefficient values even when the memory has no power.
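The positive/negative decomposition described above can be sketched as follows. Because physical conductances are non-negative, a signed matrix M may be split into two non-negative matrices programmed into separate fabrics, with the two partial products subtracted in the digital domain. This is an illustrative model only; the function names are the author's own and do not reflect any device interface.

```python
import numpy as np

def split_signed_matrix(M):
    # Decompose a signed matrix into two non-negative parts, each of
    # which could be programmed into its own (non-negative) matrix fabric.
    M_pos = np.where(M > 0, M, 0.0)
    M_neg = np.where(M < 0, -M, 0.0)
    return M_pos, M_neg

M = np.array([[ 1.0, -2.0],
              [-3.0,  4.0]])
a = np.array([1.0, 1.0])

M_pos, M_neg = split_signed_matrix(M)
# Two fabric/MMU passes; the digital domain sums the partial products.
b = a @ M_pos - a @ M_neg
assert np.allclose(b, a @ M)   # net product equals the signed product
```

The same logical-combination idea extends to precision: multiple low-precision fabrics can be weighted and summed digitally to emulate a higher-precision multiply.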
More directly, the non-volatile nature of the matrix fabric enables a processor and memory to transition into sleep/low power modes or to perform other tasks without shuffling data from volatile memory to non-volatile memory and vice versa.

Various other combinations and/or variants of the foregoing will be readily appreciated by artisans of ordinary skill, given the contents of the present disclosure.

Detailed Description of Exemplary Embodiments

Exemplary embodiments of the apparatus and methods of the present disclosure are now described in detail. While these exemplary embodiments are described in the context of the previously described specific processor and/or memory configurations, the general principles and advantages of the disclosure may be extended to other types of processor and/or memory technologies, the following therefore being merely exemplary in nature.

It will also be appreciated that while described generally in the context of a consumer device (within a camera device, video codec, cellular phone, and/or network base station), the present disclosure may be readily adapted to other types of devices including, e.g., server devices, Internet of Things (IoT) devices, and/or for personal, corporate, or even governmental uses, such as those outside the proscribed “incumbent” users such as U.S. DoD and the like. Yet other applications are possible.

Other features and advantages of the present disclosure will immediately be recognized by persons of ordinary skill in the art with reference to the attached drawings and detailed description of exemplary embodiments as given below.

Processor Memory Architectures -

FIG. 1A illustrates one common processor-memory architecture 100 useful for illustrating matrix operations. As shown in FIG. 1A, a processor 102 is connected to a memory 104 via an interface 106. In the illustrative example, the processor multiplies the elements of an input vector a against a matrix M to calculate the vector-matrix product b.
Mathematically, the input vector a is treated as a single column matrix having a number of elements equivalent to the number of rows in the matrix M. In order to calculate the first element of the vector-matrix product b0, the processor must iterate through each permutation of input vector a elements for each element within a row of the matrix M. During the first iteration, the first element of the input vector a0 is read, the current value of the vector-matrix product b0 is read, and the corresponding matrix coefficient value M0,0 is read. The three (3) read values are used in a multiply-accumulate operation to generate an “intermediary” vector-matrix product b0. Specifically, the multiply-accumulate operation calculates: (a0 · M0,0) + b0 and writes the result value back to b0. Notably, b0 is an “intermediary value”; after the first iteration but before the second iteration, the intermediary value of b0 may not correspond to the final value of the vector-matrix product b0.

During the second iteration, the second element of the input vector a1 is read, the previously calculated intermediary value b0 is retrieved, and a second matrix coefficient value M1,0 is read. The three (3) read values are used in a multiply-accumulate operation to generate the first element of the vector-matrix product b0. The second iteration completes the computation of b0.

While not expressly shown, the iterative process described above is also performed to generate the second element of the vector-matrix product b1. Additionally, while the foregoing example is a 2x2 vector-matrix product, the techniques described therein are commonly extended to support vector-matrix computations of any size. For example, a 3x3 vector-matrix product calculation iterates over an input vector of three (3) elements for each of the three (3) rows of the matrix; thus requiring nine (9) iterations.
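The iterative process described above can be sketched in Python (a sketch of the access pattern only, not of any particular processor):

```python
def iterative_vector_matrix_product(a, M):
    """Per-element multiply-accumulate, as in FIG. 1A: every iteration
    performs three reads (a[i], b[j], M[i][j]) and one write (b[j])
    across the processor-memory interface."""
    rows, cols = len(M), len(M[0])
    b = [0.0] * cols
    for j in range(cols):              # each output element b[j]...
        for i in range(rows):          # ...iterates over the input vector
            b[j] = b[j] + a[i] * M[i][j]   # multiply-accumulate
    return b

# A 2x2 product takes four (4) iterations; the element-count scaling is
# what makes large products expensive across the interface.
a = [1.0, 2.0]
M = [[1.0, 2.0],
     [3.0, 4.0]]
assert iterative_vector_matrix_product(a, M) == [7.0, 10.0]
```

Each pass of the inner loop corresponds to one full read-modify-write transaction over interface 106, which is precisely the traffic the matrix fabric eliminates.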
A matrix operation of 1024x1024 (which is not uncommon for many applications) would require more than one million iterations. More directly, the aforementioned iterative process scales with the square of the matrix dimension.

Even though the foregoing discussion is presented in the context of a vector-matrix product, artisans of ordinary skill will readily appreciate that a matrix-matrix product can be performed as a series of vector-matrix products. For example, a first vector-matrix product corresponding to the first single column matrix of the input vector is calculated, a second vector-matrix product corresponding to the second single column matrix of the input vector is calculated, etc. Thus, a 2x2 matrix-matrix product would require two (2) vector-matrix calculations (i.e., 2 x 4 = 8 total), and a 3x3 matrix-matrix product would require three (3) vector-matrix calculations (i.e., 3 x 9 = 27 total).

Artisans of ordinary skill in the related arts will readily appreciate that each iteration of the process described in FIG. 1A is bottlenecked by the bandwidth limitations of interface 106 (the “processor-memory wall”). Even though the processor and the memory may have internal buses with very high bandwidths, the processor-memory system can only communicate as fast as the interface 106 can support electrical signaling (based on the dielectric properties of the materials used in the interface 106 (typically copper) and the transmission distance (~1-2 centimeters)). Moreover, the interface 106 may also include a variety of additional signal conditioning, amplification, noise correction, error correction, parity computations, and/or other interface-based logic that further reduces transaction times.

One common approach to improve the performance of matrix operations is to perform the matrix operation within the local processor cache.
Unfortunately, the local processor cache takes processor die space, and has a much higher cost-per-bit to manufacture than e.g., comparable memory devices. As a result, the processor’s local cache size is usually much smaller (e.g., a few megabytes) than its memory (which can be many gigabytes). From a practical aspect, the smaller local cache is a hard limitation on the maximum amount of matrix operations that can be performed locally within the processor. As another drawback, large matrix operations result in poor cache utilization since only one row and one column are being accessed at a time (e.g., for a 1024x1024 vector-matrix product, only 1/1024 of the cache is in active use during a single iteration). Consequently, while processor cache implementations may be acceptable for small matrixes, this technique becomes increasingly less desirable as matrix operations grow in complexity.

Another common approach is a so-called processor-in-memory (PIM). FIG. 1B illustrates one such processor-PIM architecture 150. As shown therein, a processor 152 is connected to a memory 154 via an interface 156. The memory 154 further includes a PIM 162 and a memory array 164; the PIM 162 is tightly coupled to the memory array 164 via an internal interface 166.

Similar to the process described in FIG. 1A supra, the processor-PIM architecture 150 of FIG. 1B multiplies the elements of an input vector a against a matrix M to calculate the vector-matrix product b. However, the PIM 162 reads, multiply-accumulates, and writes to the memory 164 internally via the internal interface 166.
The internal interface 166 is much shorter than the external interface 156; additionally, the internal interface 166 can operate natively without e.g., signal conditioning, amplification, noise correction, error correction, parity computations, etc.

While the processor-PIM architecture 150 yields substantial improvements in performance over e.g., the processor-memory architecture 100, the processor-PIM architecture 150 may have other drawbacks. For example, the fabrication techniques (“silicon process”) are substantially different between processor and memory devices because each silicon process is optimized for different design criteria. For example, the processor silicon process may use thinner transistor structures than memory silicon processes; thinner transistor structures offer faster switching (which improves performance) but suffer greater leakage (which is undesirable for memory retention). As a result, manufacturing a PIM 162 and memory array 164 in the same wafer results in at least one of them being implemented in a sub-optimal silicon process. Alternatively, the PIM 162 and memory array 164 may be implemented within separate dies and joined together; die-to-die communication typically increases manufacturing costs and complexity and may suffer from various other detriments (e.g., those introduced by process discontinuities, etc.).

Moreover, artisans of ordinary skill in the related arts will readily appreciate that the PIM 162 and the memory array 164 are “hardened” components; a PIM 162 cannot store data, nor can the memory 164 perform computations. As a practical matter, once the memory 154 is manufactured, it cannot be altered to e.g., store more data and/or increase/decrease PIM performance/power consumption. Such memory devices are often tailored specifically for their application; this is costly to design and modify, and in many cases such devices are “proprietary” and/or customer/manufacturer specific.
Moreover, since technology changes at a very rapid pace, these devices are quickly obsoleted.

For a variety of reasons, improved solutions for matrix operations within processors and/or memory are needed. Ideally, such solutions would enable matrix operations within a memory device in a manner that minimizes performance bottlenecks of the processor-memory wall. Furthermore, such solutions should flexibly accommodate a variety of different matrix operations and/or matrix sizes.

Exemplary Memory Device -

FIG. 2 is a logical block diagram of one exemplary implementation of a memory device 200 manufactured in accordance with the various principles of the present disclosure. The memory device 200 may include a plurality of partitioned memory cell arrays 220. In some implementations, each of the partitioned memory cell arrays 220 may be partitioned at the time of device manufacture. In other implementations, the partitioned memory cell arrays 220 may be partitioned dynamically (i.e., subsequent to the time of device manufacture). The memory cell arrays 220 may each include a plurality of banks, each bank including a plurality of word lines, a plurality of bit lines, and a plurality of memory cells arranged at, for example, intersections of the plurality of word lines and the plurality of bit lines. The selection of the word line may be performed by a row decoder 216 and the selection of the bit line may be performed by a column decoder 218.

The plurality of external terminals included in the memory device 200 may include address terminals 260, command terminals 262, clock terminals 264, data terminals 240 and power supply terminals 250. The address terminals 260 may be supplied with an address signal and a bank address signal. The address signal and the bank address signal supplied to the address terminals 260 are transferred via an address input circuit 202 to an address decoder 204.
The address decoder 204 receives, for example, the address signal and supplies a decoded row address signal to the row decoder 216, and a decoded column address signal to the column decoder 218. The address decoder 204 may also receive the bank address signal and supply the bank address signal to the row decoder 216 and the column decoder 218.

The command terminals 262 are supplied with a command signal, which is provided to a command input circuit 206. The command terminals 262 may include one or more separate signals such as e.g., row address strobe (RAS), column address strobe (CAS), and read/write (R/W). The command signal input to the command terminals 262 is provided to the command decoder 208 via the command input circuit 206. The command decoder 208 may decode the command signal 262 to generate various control signals. For example, the RAS can be asserted to specify the row where data is to be read/written, and the CAS can be asserted to specify the column where data is to be read/written. In some variants, the R/W command signal determines whether or not the contents of the data terminal 240 are written to memory cells 220, or read therefrom.

During a read operation, the read data may be output externally from the data terminals 240 via a read/write amplifier 222 and an input/output circuit 224. Similarly, when the write command is issued and a row address and a column address are timely supplied with the write command, write data may be supplied to the data terminals 240. The write data may be supplied via the input/output circuit 224 and the read/write amplifier 222 to a given memory cell array 220 and written in the memory cell designated by the row address and the column address. The input/output circuit 224 may include input buffers, in accordance with some implementations.

The clock terminals 264 may be supplied with external clock signals for synchronous operation.
In one variant, the clock signal is a single-ended signal; in other variants, the external clock signals may be complementary (differential signaling) to one another and are supplied to a clock input circuit 210. The clock input circuit 210 receives the external clock signals and conditions the clock signal to ensure that the resulting internal clock signal has sufficient amplitude and/or frequency for subsequent locked loop operation. The conditioned internal clock signal is supplied to a feedback mechanism (internal clock generator 212) to provide a stable clock for internal memory logic. Common examples of internal clock generation logic 212 include, without limitation: digital or analog phase locked loop (PLL), delay locked loop (DLL), and/or frequency locked loop (FLL) operation.

In alternative variants (not shown), the memory device 200 may rely on external clocking (i.e., with no internal clock of its own). For example, a phase controlled clock signal may be externally supplied to the input/output (IO) circuit 224. This external clock can be used to clock in written data, and clock out data reads. In such variants, IO circuit 224 provides a clock signal to each of the corresponding logical blocks (e.g., address input circuit 202, address decoder 204, command input circuit 206, command decoder 208, etc.).

The power supply terminals 250 may be supplied with power supply potentials. In some variants (not shown), these power supply potentials may be supplied via the input/output (I/O) circuit 224. In some embodiments, the power supply potentials may be isolated from the I/O circuit 224 so that power supply noise generated by the IO circuit 224 does not propagate to the other circuit blocks. These power supply potentials are conditioned via an internal power supply circuit 230.
For example, the internal power supply circuit 230 may generate various internal potentials that e.g., remove noise and/or spurious activity, as well as boost or buck potentials, provided from the power supply potentials. The internal potentials may be used in e.g., the address circuitry (202, 204), the command circuitry (206, 208), the row and column decoders (216, 218), the RW amplifier 222, and/or any various other circuit blocks.

A power-on-reset circuit (PON) 228 provides a power-on signal when the internal power supply circuit 230 can sufficiently supply internal voltages for a power-on sequence. A temperature sensor 226 may sense a temperature of the memory device 200 and provide a temperature signal; the temperature of the memory device 200 may affect some memory operations.

In one exemplary embodiment, the memory arrays 220 may be controlled via one or more configuration registers. The use of these configuration registers selectively configures one or more memory arrays 220 into one or more matrix fabrics and/or matrix multiplication units (MMUs) described in greater detail herein. In other words, the configuration registers may enable the memory cell architectures within the memory arrays to dynamically change e.g., their structure, operation, and functionality. These and other variations would be readily apparent to one of ordinary skill given the contents of the present disclosure.

FIG. 3 provides a more detailed side-by-side illustration of the memory array and matrix fabric circuitry configurations. The memory array and matrix fabric circuitry configurations of FIG. 3 both use the same array of memory cells, where each memory cell is composed of a resistive element 302 that is coupled to a word-line 304 and a bit-line 306. In the first configuration 300, the memory array circuitry is configured to operate as a row decoder 316, a column decoder 318, and an array of memory cells 320.
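The register-based selection between the two FIG. 3 configurations might be sketched as follows. The field names, widths, and bit positions below are entirely hypothetical, chosen only to illustrate how a single register write could switch an array between memory and matrix-fabric operation; the disclosure does not define a specific register layout.

```python
# Hypothetical configuration-register sketch; every field name and
# width here is illustrative -- the text does not define a layout.
MODE_MEMORY = 0      # array behaves as ordinary storage (first configuration)
MODE_MATRIX = 1      # array behaves as matrix fabric + MMU (second configuration)

def encode_config(mode, array_id, rows, cols):
    # Pack mode (1 bit), array select (4 bits), and fabric dimensions
    # (8 bits each) into a single register word.
    assert mode in (MODE_MEMORY, MODE_MATRIX)
    return (mode & 0x1) | ((array_id & 0xF) << 1) | \
           ((rows & 0xFF) << 5) | ((cols & 0xFF) << 13)

# Configure (hypothetical) array 2 as a 2x4 matrix fabric.
word = encode_config(MODE_MATRIX, array_id=2, rows=2, cols=4)
```

The point of the sketch is only that a small register write, rather than a remanufactured die, is what distinguishes this approach from the "hardened" PIM devices discussed earlier.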
In the second configuration 350, the matrix fabric circuitry is configured to operate as a row driver 317, a matrix multiplication unit (MMU) 319, and an analog crossbar fabric (matrix fabric) 321. In one exemplary embodiment, a look-up-table (LUT) and associated logic 315 can be used to store and configure different matrix multiplication unit coefficient values.

In one exemplary embodiment of the present disclosure, the memory array 320 is composed of a resistive random access memory (ReRAM). ReRAM is a non-volatile memory that changes the resistance of memory cells across a dielectric solid-state material, sometimes referred to as a “memristor.” Current ReRAM technology may be implemented within a two-dimensional (2D) layer or a three-dimensional (3D) stack of layers; however, higher order dimensions may be used in future iterations. The complementary metal oxide semiconductor (CMOS) compatibility of the crossbar ReRAM technology may enable both logic (data processing) and memory (storage) to be integrated within a single chip. A crossbar ReRAM array may be formed in a one transistor/one resistor (1T1R) configuration and/or in a configuration with one transistor driving n resistive memory cells (1TnR), among other possible configurations.

Multiple inorganic and organic material systems may enable thermal and/or ionic resistive switching.
Such systems may, in a number of embodiments, include: phase change chalcogenides (e.g., Ge2Sb2Te5, AgInSbTe, among others); binary transition metal oxides (e.g., NiO, TiO2, among others); perovskites (e.g., Sr(Zr)TiO3, PCMO, among others); solid state electrolytes (e.g., GeS, GeSe, SiO, Cu2S, among others); organic charge transfer complexes (e.g., Cu tetracyanoquinodimethane (TCNQ), among others); organic charge acceptor systems (e.g., Al amino-dicyanoimidazole (AlDCN), among others); and/or 2D (layered) insulating materials (e.g., hexagonal BN, among others); among other possible systems for resistive switching.

In the illustrated embodiment, the resistive element 302 is a non-linear passive two-terminal electrical component that can change its electrical resistance based on a history (e.g., hysteresis or memory) of current application. In at least one exemplary embodiment, the resistive element 302 may form or destroy a conductive filament responsive to the application of different polarities of currents to the first terminal (connected to the word-line 304) and the second terminal (connected to the bit-line 306). The presence or absence of the conductive filament between the two terminals changes the conductance between the terminals. While the present operation is presented within the context of a resistive element, artisans of ordinary skill in the related arts will readily appreciate that the principles described herein may be implemented within any circuitry that is characterized by a variable impedance (e.g., resistance and/or reactance). Variable impedance may be effectuated by a variety of linear and/or non-linear elements (e.g., resistors, capacitors, inductors, diodes, transistors, thyristors, etc.).

For illustrative purposes, the operation of the memory array 320 in the first configuration 300 is briefly summarized.
During operation in the first configuration, a memory “write” may be effectuated by application of a current to the memory cell corresponding to the row and column of the memory array. The row decoder 316 can selectively drive various ones of the row terminals so as to select a specific row of the memory array circuitry 320. The column decoder 318 can selectively sense/drive various ones of the column terminals so as to “read” and/or “write” to the corresponding memory cell that is uniquely identified by the selected row and column (as emphasized in FIG. 3 by the heavier line width and blackened cell element). As noted above, the application of current results in the formation (or destruction) of a conductive filament within the dielectric solid-state material. In one such case, a low resistance state (ON-state) is used to represent the logical “1” and a high resistance state (OFF-state) is used to represent a logical “0”. In order to switch a ReRAM cell, a first current with specific polarity, magnitude, and duration is applied to the dielectric solid-state material. Subsequently thereafter, a memory “read” may be effectuated by application of a second current to the resistive element and sensing whether the resistive element is in the ON-state or the OFF-state based on the corresponding impedance. Memory reads may or may not be destructive (e.g., the second current may or may not be sufficient to form or destroy the conductive filament).

Artisans of ordinary skill in the related arts will readily appreciate that the foregoing discussion of memory array 320 in the first configuration 300 is consistent with existing memory operation in accordance with e.g., ReRAM memory technologies. In contrast, the second configuration 350 uses the memory cells as an analog crossbar fabric (matrix fabric) 321 to perform matrix multiplication operations. While the exemplary implementation of FIG.
3 corresponds to a 2x4 matrix multiplication unit (MMU), other variants may be substituted with equivalent success. For example, a matrix of arbitrarily large size (e.g., 3x3, 4x4, 8x8, etc.) may be implemented (subject to the precision enabled by the digital-to-analog conversion (DAC) 308 and analog-to-digital conversion (ADC) 310 components).

In analog crossbar fabric (matrix fabric) 321 operation, each of the row terminals is concurrently driven by an analog input signal, and each of the column terminals is concurrently sensed for the analog output (which is an analog summation of the voltage potentials across the corresponding resistive elements for each row/column combination). Notably, in the second configuration 350, all of the row and column terminals associated with a matrix multiplication are active (as emphasized in FIG. 3 by the heavier line widths and blackened cell elements). In other words, the ReRAM crossbar fabric (matrix fabric) 321 uses the matrix fabric structure to perform an “analog computation” that calculates a vector-matrix product (or scalar-matrix product, matrix-matrix product, etc.).

Notably, the concurrent vector-matrix product calculation within the crossbar fabric is atomic. Specifically, the analog computation of vector-matrix products can complete in a single access cycle. As previously mentioned, an atomic operation is immune to data race conditions. Moreover, the vector-matrix product calculation performs calculations on all rows and all columns of the matrix operation concurrently; in other words, the vector-matrix product calculation does not scale in complexity as a function of matrix dimension. While fabrication constraints (e.g., ADC/DAC granularity, manufacturing tolerance, etc.)
may limit the amount of precision and complexity that a single matrix fabric can produce, multiple matrix operations may be mathematically combined together to provide much higher precisions and complexities.

For example, in one exemplary embodiment of the present disclosure, inputs are converted to the analog domain by the DAC 308 for analog computation, but may also be converted back to the digital domain by the ADC 310 for subsequent digital and/or logical manipulation. In other words, the arithmetic logic unit 312 can enable sophisticated numeric manipulation of matrix fabric 321 output. Such capabilities may be used where the analog domain cannot implement the required computation due to practical implementation limitations (e.g., manufacturing cost, etc.).

Consider the illustrative example of FIG. 4, where a simple “FFT butterfly” calculation 400 can be performed via a 2x4 matrix fabric. While conductance can be increased or decreased, conductance cannot be made “negative.” As a result, subtraction may need to be performed within the digital domain. The FFT butterfly operation is described in the following matrix multiplication (EQN. 1):

EQN. 1: [b0; b1] = [1 1; 1 -1] [a0; a1] = [a0 + a1; a0 - a1]

This simple FFT butterfly 400 of EQN. 1 can be decomposed into two distinct matrices representing the positive and negative coefficients (EQN. 2 and EQN. 3):

EQN. 2: [1 1; 1 0] [a0; a1] = [a0 + a1; a0]

EQN. 3: [0 0; 0 1] [a0; a1] = [0; a1]

EQN. 2 and EQN. 3 can be implemented as analog computations with the matrix fabric circuitry. Once calculated, the resulting analog values may be converted back to the digital domain via the aforementioned ADC. Existing ALU operations may be used to perform subtraction in the digital domain (EQN. 4):

EQN. 4: [a0 + a1; a0] - [0; a1] = [a0 + a1; a0 - a1]

In other words, as illustrated in FIG. 4, a 2x2 matrix can be further subdivided into a 2x2 positive matrix and a 2x2 negative matrix. The ALU can add/subtract the results of the 2x2 positive matrix and a 2x2 negative matrix to generate a single 2x2 matrix.
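The positive/negative decomposition above can be sketched numerically: two non-negative matrices are each applied as a vector-matrix product (the operation the fabric performs in the analog domain), and the results are recombined by digital subtraction. This is an arithmetic illustration only, not a model of the circuit; the input values are arbitrary:

```python
import numpy as np

a = np.array([3.0, 5.0])          # input vector [a0, a1]

# Conductance cannot be negative, so M = [[1, 1], [1, -1]] is split into
# a positive part and a negative part (EQN. 2 and EQN. 3); each part is
# realizable as a 2x2 array of non-negative coefficients.
M_pos = np.array([[1.0, 1.0],
                  [1.0, 0.0]])
M_neg = np.array([[0.0, 0.0],
                  [0.0, 1.0]])

b_pos = M_pos @ a                 # analog computation #1
b_neg = M_neg @ a                 # analog computation #2
b = b_pos - b_neg                 # digital subtraction in the ALU (EQN. 4)

assert np.allclose(b, [a[0] + a[1], a[0] - a[1]])   # the butterfly of EQN. 1
```

Side by side, the two 2x2 parts form the 2x4 fabric of FIG. 4.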
Artisans of ordinary skill in the related arts will readily appreciate the wide variety and/or capabilities enabled by ALUs. For example, ALUs may provide arithmetic operations (e.g., add, subtract, add with carry, subtract with borrow, negate, increment, decrement, pass through, etc.), bit-wise operations (e.g., AND, OR, XOR, complement), and bit-shift operations (e.g., arithmetic shift, logical shift, rotate, rotate through carry, etc.) to enable e.g., multiple-precision arithmetic, complex number operations, and/or to extend MMU capabilities to any degree of precision, size, and/or complexity.

As used herein, the terms “digital” and/or “logical” within the context of computation refer to processing logic that uses quantized values (e.g., “0” and “1”) to represent symbolic values (e.g., “ON-state”, “OFF-state”). In contrast, the term “analog” within the context of computation refers to processing logic that uses the continuously changeable aspects of physical signaling phenomena such as electrical, chemical, and/or mechanical quantities to perform a computation. Various embodiments of the present disclosure may represent analog input and/or output signals as a continuous electrical signal. For example, a voltage potential may have different possible values (e.g., any value between a minimum voltage (0V) and a maximum voltage (1.8V), etc.). Combining analog computing with digital components may be performed with digital-to-analog converters (DACs), analog-to-digital converters (ADCs), arithmetic logic units (ALUs), and/or variable gain amplification/attenuation.

Referring back to FIG. 3, in order to configure the memory cells into the crossbar fabric (matrix fabric) 321 of the second configuration 350, each of the resistive elements may be written with a corresponding matrix coefficient value.
Unlike the first configuration 300, the second configuration 350 may write varying degrees of impedance (representing a coefficient value) into each ReRAM cell using an amount of current having a polarity, magnitude, and duration selected to set a specific conductance. In other words, by forming/destroying conductive filaments of varying conductivity, a plurality of different conductivity states can be established. For example, applying a first magnitude may result in a first conductance, applying a second magnitude may result in a second conductance, applying the first magnitude for a longer duration may result in a third conductance, etc. Any permutation of the foregoing writing parameters may be substituted with equivalent success. More directly, rather than using two (2) resistance states (ON-state, OFF-state) to represent two (2) digital states (logic “1”, logic “0”), the varying conductance can use a multiplicity of states (e.g., three (3), four (4), eight (8), etc.) to represent a continuous range of values and/or ranges of values (e.g., [0, 0.33, 0.66, 1], [0, 0.25, 0.50, 0.75, 1], [0, 0.125, 0.250, ..., 1], etc.).

In one embodiment of the present disclosure, the matrix coefficient values are stored ahead of time within a look-up-table (LUT) and configured by associated control logic 315. During an initial configuration phase, the matrix fabric 321 is written with matrix coefficient values from the LUT via control logic 315. Artisans of ordinary skill in the related arts will readily appreciate that certain memory technologies may also enable write-once-use-many operation. For example, even though forming (or destroying) a conductive filament for a ReRAM cell may require a specific duration, magnitude, polarity, and/or direction of current, subsequent usage of the memory cell can be repeated many times (so long as the conductive filament is not substantially formed nor destroyed over the usage lifetime).
In other words, subsequent usages of the same matrix fabric 321 configuration can be used to defray initial configuration times.

Furthermore, certain memory technologies (such as ReRAM) are non-volatile. Thus, once matrix fabric circuitry is programmed, it may enter a low power state (or even be powered off) to save power when not in use. In some cases, the non-volatility of the matrix fabric may be leveraged to further improve power consumption. Specifically, unlike existing techniques which may re-load matrix coefficient values from non-volatile memory for subsequent processing, the exemplary matrix fabric can store the matrix coefficient values even when the memory device is powered off. On subsequent wake-up, the matrix fabric can be directly used.

In one exemplary embodiment, the matrix coefficient values may be derived according to the nature of the matrix operation. For example, the coefficients for certain matrix operations can be derived ahead of time based on the “size” (or other structurally defined parameter) and stored within the LUT. As but two such examples, the fast Fourier transform (EQN. 5) and the discrete cosine transform (DCT) (EQN. 6) are reproduced infra:

EQN. 5: X[k] = Σ_{n=0}^{N-1} x[n] e^{-i2πkn/N}

EQN. 6: X[k] = Σ_{n=0}^{N-1} x[n] cos[(π/N)(n + 1/2)k]

As can be mathematically determined from the foregoing equations, the matrix coefficient values (also referred to as the “twiddle factors”) are determined according to the size of the transform. For example, the coefficients for an 8-point FFT are e^{-i2πk/8} (where k is 0, 1, 2, 3 ... 7). In other words, once the size of the FFT is known, the values for the twiddle factors can be set a priori. In fact, the coefficients for larger FFTs include the coefficients for smaller FFTs. For example, a 64-point FFT has 64 coefficient values, which include all 32 coefficients used in a 32-point FFT, and all 16 coefficients for a 16-point FFT, etc.
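The nesting of coefficients just described can be verified numerically; a short sketch using the standard DFT twiddle definition of EQN. 5:

```python
import numpy as np

def twiddle_factors(n):
    """Twiddle factors e^(-i*2*pi*k/n) for an n-point FFT, k = 0 .. n-1."""
    return np.exp(-2j * np.pi * np.arange(n) / n)

lut64 = twiddle_factors(64)

# Every 32-point coefficient is also a 64-point coefficient (k/32 == 2k/64),
# so a single 64-entry LUT can serve the 32-point and 16-point sizes too.
assert np.allclose(twiddle_factors(32), lut64[::2])
assert np.allclose(twiddle_factors(16), lut64[::4])
```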
More directly, a single LUT may contain all the coefficients to support any number of different transforms.

In another exemplary embodiment, the matrix coefficient values may be stored ahead of time. For example, the coefficients for certain matrix multiplication operations may be known or otherwise defined by e.g., an application or user. For example, image processing computations, such as are described in co-owned and co-pending U.S. Patent Application Serial No. 16/002,644 filed June 7, 2018 and entitled “AN IMAGE PROCESSOR FORMED IN AN ARRAY OF MEMORY CELLS”, previously incorporated supra, may define a variety of different matrix coefficient values so as to effect e.g., defect correction, color interpolation, white balance, color adjustment, gamma lightness, contrast adjustment, color conversion, down-sampling, and/or other image signal processing operations.

In another example, the coefficients for certain matrix multiplication operations may be determined or otherwise defined by e.g., user considerations, environmental considerations, other devices, and/or other network entities. For example, wireless devices often experience different multipath effects that can interfere with operation. Various embodiments of the present disclosure determine multipath effects and correct for them with matrix multiplication. In some cases, the wireless device may calculate each of the independent different channel effects based on degradation of known signaling. The differences between an expected and an actual reference channel signal can be used to determine the noise effects that the signal experienced (e.g., attenuation over specific frequency ranges, reflections, scattering, and/or other noise effects). In other embodiments, a wireless device may be instructed to use a predetermined “codebook” of beamforming configurations.
The codebook of beamforming coefficients may be less accurate but may be preferable for other reasons (e.g., speed, simplicity, etc.).

As previously alluded to, the matrix coefficient values are stored ahead of time within a look-up-table (LUT) and configured by associated control logic 315. In one exemplary embodiment, the matrix fabric may be configured via dedicated hardware logic. Such internal hardware logic may not be limited by processor word size; thus matrix coefficient values of any dimension may be concurrently configurable (e.g., 4x4, 8x8, 16x16, etc.). While the present disclosure is presented in the context of internal control logic 315, external implementations may be substituted with equivalent success. For example, in other embodiments, the logic includes an internal processor-in-memory (PIM) that can set the matrix coefficient values based on LUT values in a series of reads and writes. In still other examples, an external processor can perform the LUT and/or logic functionality.

FIG. 5A is a logical block diagram of one exemplary implementation of a processor-memory architecture 500 in accordance with the various principles described herein. As shown in FIG. 5A, a processor 502 is coupled to a memory 504; the memory includes a look-up-table (LUT) 506, control logic 508, a matrix fabric and corresponding matrix multiplication unit (MMU) 510, and a memory array 512.

In one embodiment, the LUT 506 stores a plurality of matrix value coefficients, dimensions, and/or other parameters associated with different matrix operations. In one exemplary embodiment, the LUT 506 stores a plurality of fast Fourier transform (FFT) “twiddle factors”, where various subsets of the twiddle factors are associated with different FFT dimensions. For example, a LUT 506 that stores the twiddle factors for a 64-point FFT has 64 coefficient values, which include all 32 coefficients used in a 32-point FFT, and all 16 coefficients for a 16-point FFT, etc.
In another exemplary embodiment, the LUT 506 stores a plurality of discrete cosine transform (DCT) “twiddle factors” associated with different DCT dimensions. In other embodiments, the LUT 506 stores a plurality of different matrix coefficient values for image signal processing (ISP), e.g., defect correction, color interpolation, white balance, color adjustment, gamma lightness, contrast adjustment, color conversion, down-sampling, and/or other image signal processing operations. In yet another embodiment of the LUT 506, the LUT 506 may include various channel matrix codebooks that may be predefined and/or empirically determined based on radio channel measurements.

In one embodiment, the control logic 508 controls operation of the matrix fabric and MMU 510 based on instructions received from the processor 502. In one exemplary embodiment, the control logic 508 can form/destroy conductive filaments of varying conductivity within each of the memory cells of a matrix fabric in accordance with the aforementioned matrix dimensions and/or matrix value coefficients provided by the LUT 506. Additionally, the control logic 508 can configure a corresponding MMU to perform any additional arithmetic and/or logical manipulations of the matrix fabric. Furthermore, the control logic 508 may select one or more digital vectors to drive the matrix fabric, and one or more digital vectors to store the logical outputs of the MMU.

In the processing arts, an “instruction” generally includes different types of “instruction syllables”: e.g., opcodes, operands, and/or other associated data structures (e.g., registers, scalars, vectors).

As used herein, the term “opcode” (operation code) refers to an instruction that can be interpreted by processor logic, memory logic, or other logical circuitry to effectuate an operation. More directly, the opcode identifies an operation to be performed on one or more operands (inputs) to generate one or more results (outputs).
Both operands and results may be embodied as data structures. Common examples of data structures include without limitation: scalars, vectors, arrays, lists, records, unions, objects, graphs, trees, and/or any number of other forms of data. Some data structures may include, in whole or in part, referential data (data that “points” to other data). Common examples of referential data structures include e.g., pointers, indexes, and/or descriptors.

In one exemplary embodiment, the opcode may identify one or more of: a matrix operation, the dimensions of the matrix operation, and/or the row and/or column of the memory cells. In one such variant, an operand is a coded identifier that specifies the one or more digital vectors that are to be operated upon. For example, an instruction to process a 64-point FFT on an input digital vector, and store the results in an output digital vector, might include the opcode and operands: FFT64($input, $output), where FFT64 identifies the size and nature of the 64-point FFT operation, $input identifies an input digital vector base address, and $output identifies an output digital vector base address. In another such example, the 64-point FFT may be split into two distinct atomic operations, e.g., FFT64($address) that converts the memory array at the $address into a 64-point matrix fabric, and MULT($address, $input, $output) that stores the vector-matrix product of the $input and the matrix fabric at $address to $output.

FIG. 5A illustrates an instruction interface that is functionally separate and distinct from the input/output (I/O) memory interface. In one such embodiment, the instruction interface may be physically distinct (e.g., having different pins and/or connectivity). In other embodiments, the instruction interface may be multiplexed with the I/O memory interface (e.g., sharing the same control signaling, and address and/or data bus, but in a distinct communication mode).
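A toy parser illustrates how a mnemonic of the form shown above might split into its opcode and operand syllables. The textual `OPCODE($operand, ...)` layout is assumed for illustration only; a real controller would receive binary-encoded fields rather than text:

```python
import re

def parse_instruction(text):
    """Split a mnemonic of the hypothetical form OPCODE($op1, $op2, ...)
    into an opcode string and a list of operand identifiers."""
    m = re.fullmatch(r"\s*([A-Z0-9]+)\s*\(([^)]*)\)\s*", text)
    if m is None:
        raise ValueError(f"malformed instruction: {text!r}")
    opcode = m.group(1)
    operands = [op.strip() for op in m.group(2).split(",") if op.strip()]
    return opcode, operands

# the single fused operation
assert parse_instruction("FFT64($input, $output)") == ("FFT64", ["$input", "$output"])
# the same transform split into two atomic operations
assert parse_instruction("FFT64($address)") == ("FFT64", ["$address"])
assert parse_instruction("MULT($address, $input, $output)") == ("MULT", ["$address", "$input", "$output"])
```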
In still other embodiments, the instruction interface may be virtually accessible via the I/O memory interface (e.g., as registers located within address space that is addressable via the I/O interface). Still other variants may be substituted by artisans of ordinary skill, given the contents of the present disclosure.

In one embodiment, the matrix fabric and MMU 510 are tightly coupled to a memory array 512 to read and write digital vectors (operands). In one exemplary embodiment, the operands are identified for dedicated data transfer hardware (e.g., a direct memory access (DMA) engine) into and out of the matrix fabric and MMU 510. In one exemplary variant, the digital vectors of data may be of any dimension, and are not limited by processor word size. For example, an operand may specify an operand of N bits (e.g., 2, 4, 8, 16, etc.). In other embodiments, the DMA logic can read/write to the matrix fabric 510 using the existing memory row/column bus interfaces. In still other embodiments, the DMA logic can read/write to the matrix fabric 510 using the existing address/data and read/write control signaling within an internal memory interface.

FIG. 5B provides a logical flow diagram of one exemplary set of matrix operations 550 within the context of the exemplary embodiment 500 described in FIG. 5A. As shown therein, the processor 502 writes an instruction to the memory 504 through interface 507 that specifies an opcode (e.g., characterized by a matrix Mx,y) and the operands (e.g., digital vectors a, b).

The control logic 508 determines whether or not the matrix fabric and/or matrix multiplication unit (MMU) should be configured/reconfigured. For example, a section of the memory array is converted into one or more matrix fabrics and weighted with the associated matrix coefficient values defined by the matrix Mx,y.
Digital-to-analog (DAC) row drivers and analog-to-digital (ADC) sense amps associated with the matrix fabric may need to be adjusted for dynamic range and/or amplification. Additionally, one or more MMU ALU components may be coupled to the one or more matrix fabrics.

When the matrix fabric and/or matrix multiplication unit (MMU) are appropriately configured, the input operand a is read by the digital-to-analog converter (DAC) and applied to the matrix fabric Mx,y for analog computation. The analog result may additionally be converted with analog-to-digital (ADC) conversion for subsequent logical manipulation by the MMU ALUs. The output is written into the output operand b.

FIG. 5C provides an alternative logical flow diagram of one exemplary set of matrix operations 560 within the context of the exemplary embodiment 500 described in FIG. 5A. In contrast to the flow diagram of FIG. 5B, the system of FIG. 5C uses an explicit instruction to convert the memory array into a matrix fabric. Providing further degrees of atomicity in instruction behaviors can enable a variety of related benefits including, for example, pipeline design and/or reduced instruction set complexity.

More directly, when the matrix fabric contains the appropriate matrix value coefficients Mx,y, matrix operations may be efficiently repeated. For example, image processing computations, such as are described in co-owned and co-pending U.S. Patent Application Serial No. 16/002,644 filed June 7, 2018 and entitled “AN IMAGE PROCESSOR FORMED IN AN ARRAY OF MEMORY CELLS”, previously incorporated supra, may configure a number of matrix fabric and MMU processing elements so as to pipeline e.g., defect correction, color interpolation, white balance, color adjustment, gamma lightness, contrast adjustment, color conversion, down-sampling, and/or other image signal processing operations. Each one of the pipeline stages may be configured once, and repeatedly used for each pixel (or group of pixels) of the image.
For example, the white balance pipeline stage may operate on each pixel of data using the same matrix fabric with the matrix coefficient values set for white balance; the color adjustment pipeline stage may operate on each pixel of data using the same matrix fabric with the matrix coefficient values set for color adjustment, etc. In another such example, the first stage of a 64-point FFT can be handled in thirty two (32) atomic MMU computations (thirty two (32) 2-point FFTs) using the same FFT “twiddle factors” (described supra).

Moreover, artisans of ordinary skill in the related arts will further appreciate that some matrix fabrics may have additional versatilities and/or uses beyond their initial configuration. For example, as previously noted, a 64-point FFT has 64 coefficient values, which include all 32 coefficients used in a 32-point FFT. Thus, a matrix fabric that is configured for 64-point operation could be reused for 32-point operation with the appropriate application of the 32-point input operand a on the appropriate rows of the 64-point FFT matrix fabric. Similarly, FFT twiddle factors are a superset of discrete cosine transform (DCT) twiddle factors; thus, an FFT matrix fabric could also be used (with appropriate application of input operand a) to calculate DCT results.

Still other permutations and/or variants of the foregoing example will be made clear to those of ordinary skill in the related arts, given the content of the present disclosure.

Methods -

Referring now to FIG. 6, a logical flow diagram of one exemplary method 600 for converting a memory array into a matrix fabric for matrix transformations and performing matrix operations therein is presented.

At step 602 of the method 600, a memory device receives one or more instructions. In one embodiment, the memory device receives the instruction from a processor. In one such variant, the processor is an application processor (AP) commonly used in consumer electronics.
In other such variants, the processor is a baseband processor (BB) commonly used in wireless devices.

As a brief aside, so-called “application processors” are processors that are configured to execute an operating system (OS) and one or more applications, firmware, and/or software. The term “operating system” refers to software that controls and manages access to hardware. An OS commonly supports processing functions such as e.g., task scheduling, application execution, input and output management, memory management, security, and peripheral access.

A so-called “baseband processor” is a processor that is configured to communicate with a wireless network via a communication protocol stack. The term “communication protocol stack” refers to the software and hardware components that control and manage access to the wireless network resources. A communication protocol stack commonly includes without limitation: physical layer protocols, data link layer protocols, medium access control protocols, network and/or transport protocols, etc.

Other peripheral and/or co-processor configurations may similarly be substituted with equivalent success. For example, server devices often include multiple processors sharing a common memory resource. Similarly, many common device architectures pair a general purpose processor with a special purpose co-processor and a shared memory resource (such as a graphics engine, or digital signal processor (DSP)). Common examples of such processors include without limitation: graphics processing units (GPUs), video processing units (VPUs), tensor processing units (TPUs), neural network processing units (NPUs), digital signal processors (DSPs), and image signal processors (ISPs).
In other embodiments, the memory device receives the instruction from an application specific integrated circuit (ASIC) or other forms of processing logic, e.g., field programmable gate arrays (FPGAs), programmable logic devices (PLDs), camera sensors, audio/video processors, and/or media codecs (e.g., image, video, audio, and/or any combination thereof).

In one exemplary embodiment, the memory device is a resistive random access memory (ReRAM) arranged in a “crossbar” row-column configuration. While the various embodiments described herein assume a specific memory technology and specific memory structure, artisans of ordinary skill in the related arts given the contents of the present disclosure will readily appreciate that the principles described herein may be broadly extended to other technologies and/or structures. For example, certain programmable logic structures (e.g., commonly used in field programmable gate arrays (FPGAs) and programmable logic devices (PLDs)) may have similar characteristics to memory with regard to capabilities and topology. Similarly, certain processor and/or other memory technologies may vary resistance, capacitance, and/or inductance; in such cases, varying impedance properties may be used to perform analog computations. Additionally, while the “crossbar” based construction provides a physical structure that is well adapted to two-dimensional (2D) matrix structures, other topologies may be well adapted to higher order mathematical operations (e.g., matrix-matrix products via three-dimensional (3D) memory stacking, etc.).

In one exemplary embodiment, the memory device further includes a controller. The controller receives the one or more instructions and parses each instruction into one or more instruction components (also commonly referred to as “instruction syllables”). In one exemplary embodiment, the instruction syllables include at least one opcode and one or more operands.
For example, an instruction may be parsed into an opcode, a first source operand, and a destination operand. Other common examples of instruction components may include without limitation: a second source operand (for binary operations), a shift amount, an absolute/relative address, a register (or other reference to a data structure), an immediate data structure (i.e., a data structure provided within the instruction itself), a subordinate function, and/or branch/link values (e.g., to be executed depending on whether an instruction completes or fails).

In one embodiment, each received instruction corresponds to an atomic memory controller operation. As used herein, an “atomic” instruction is an instruction that completes within a single access cycle. In contrast, a “non-atomic” instruction is an instruction that may or may not complete within a single access cycle. Even though non-atomic instructions might complete in a single cycle, they must be treated as non-atomic to prevent data race conditions. A race condition occurs where data that is being accessed by a processor instruction (either a read or write) may be accessed by another processor instruction before the first processor instruction has a chance to complete; the race condition may unpredictably result in data read/write errors. In other words, an atomic instruction guarantees that the data cannot be observed in an incomplete state.

In one exemplary embodiment, an atomic instruction may identify a portion of the memory array to be converted to a matrix fabric. In some cases, the atomic instruction may identify characteristic properties of the matrix fabric. For example, the atomic instruction may identify the portion of the memory array on the basis of e.g., location within the memory array (e.g., via offset, row, column), size (number of rows, number of columns, and/or other dimensional parameters), and/or granularity (e.g., the precision and/or sensitivity).
Notably, atomic instructions may offer very fine grained control over memory device operation; this may be desirable where the memory device operation can be optimized in view of various application specific considerations.

In other embodiments, a non-atomic instruction may specify portions of the memory array that are to be converted into a matrix fabric. For example, the non-atomic instruction may specify various requirements and/or constraints for the matrix fabric. The memory controller may internally allocate resources so as to accommodate the requirements and/or constraints. In some cases, the memory controller may additionally prioritize and/or de-prioritize instructions based on the current memory usage, memory resources, controller bandwidth, and/or other considerations. Such implementations may be particularly useful where memory device management is unnecessary and would otherwise burden the processor.

In one embodiment, the instruction specifies a matrix operation. In one such variant, the matrix operation may be a vector-matrix product. In another variant, the matrix operation may be a matrix-matrix product. Still other variants may be substituted by artisans of ordinary skill in the related arts, given the contents of the present disclosure. Such variants may include e.g., scalar-matrix products, higher order matrix products, and/or other transformations including e.g., linear shifts, rotations, reflections, and translations.

As used herein, the terms “transformation”, “transform”, etc. refer to a mathematical operation that converts an input from a first domain into a second domain.
Transformations can be “injective” (distinct elements of the first domain map to distinct elements of the second domain), “surjective” (every element of the second domain is the image of at least one element of the first domain), or “bijective” (both injective and surjective; a unique one-to-one mapping of elements from the first domain to the second domain).

More complex mathematically defined transformations that are regularly used in the computing arts include Fourier transforms (and derivatives thereof, such as the discrete cosine transform (DCT)), Hilbert transforms, Laplace transforms, and Legendre transforms. In one exemplary embodiment of the present disclosure, matrix coefficient values for mathematically defined transformations can be calculated ahead of time and stored within a look-up-table (LUT) or other data structure. For example, twiddle factors for the fast Fourier transform (FFT) and/or DCT can be calculated and stored within a LUT. In other embodiments, matrix coefficient values for mathematically defined transformations can be calculated by the memory controller during (or in preparation for) the matrix fabric conversion process.

Other transformations may not be based on a mathematical definition per se, but may instead be defined based on e.g., an application, another device, and/or a network entity. Such transformations may be commonly used in encryption, decryption, geometric modeling, mathematical modeling, neural networks, network management, and/or other graph theory based applications. For example, wireless networks may use a codebook of predetermined antenna weighting matrixes so as to signal the most commonly used beamforming configurations. In other examples, certain types of encryption may agree upon and/or negotiate between different encryption matrices.
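For finite maps, the injective/surjective/bijective distinction defined above can be checked mechanically; a small illustrative sketch:

```python
def classify_map(mapping, codomain):
    """Classify a finite map (dict) as (injective, surjective, bijective)."""
    image = list(mapping.values())
    injective = len(set(image)) == len(image)   # no two inputs collide
    surjective = set(image) == set(codomain)    # every output value is hit
    return injective, surjective, injective and surjective

assert classify_map({1: "a", 2: "b"}, {"a", "b", "c"}) == (True, False, False)
assert classify_map({1: "a", 2: "a", 3: "b"}, {"a", "b"}) == (False, True, False)
assert classify_map({1: "a", 2: "b"}, {"a", "b"}) == (True, True, True)
```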
In such embodiments, the codebook or matrix coefficient values may be agreed ahead of time, exchanged in an out-of-band manner, exchanged in-band, or even arbitrarily determined or negotiated.

Empirically determined transformations may also be substituted with equivalent success given the contents of the present disclosure. For example, empirically derived transformations that are regularly used in the computing arts include radio channel coding, image signal processing, and/or other mathematically modeled environmental effects. For example, a multi-path radio environment can be characterized by measuring channel effects on e.g., reference signals. The resulting channel matrix can be used to constructively interfere with signal reception (e.g., improving signal strength) while simultaneously destructively interfering with interference (e.g., reducing noise). Similarly, an image that has a skewed hue can be assessed for overall color balance, and mathematically corrected. In some cases, an image may be intentionally skewed based on e.g., user input, so as to impart an aesthetic “warmth” to an image.

Various embodiments of the present disclosure may implement “unary” operations within a memory device. Other embodiments may implement “binary”, or even higher-order “N-ary” matrix operations. As used herein, the terms “unary”, “binary”, and “N-ary” refer to operations that take one, two, or N input data structures, respectively. In some embodiments, binary and/or N-ary operations may be subdivided into one or more unary matrix in-place operators. As used herein, an “in-place” operator refers to a matrix operation that stores or translates its result into its own state (e.g., its own matrix coefficient values). For example, a binary operation may be decomposed into two (2) unary operations; a first in-place unary operation is executed (the result is stored “in-place”).
Thereafter, a second unary operation can be performed on the matrix fabric to yield the binary result (for example, a multiply-accumulate operation).

Still other embodiments may serialize and/or parallelize matrix operations based on a variety of considerations. For example, sequentially related operations may be performed in a “serial” pipeline. For example, image processing computations, such as are described in co-owned and co-pending U.S. Patent Application Serial No. 16/002,644 filed June 7, 2018 and entitled “AN IMAGE PROCESSOR FORMED IN AN ARRAY OF MEMORY CELLS”, previously incorporated supra, configure a number of matrix fabric and MMU processing elements to pipeline e.g., defect correction, color interpolation, white balance, color adjustment, gamma/lightness, contrast adjustment, color conversion, down-sampling, etc. Pipelined processing can often produce very high throughput data with minimal matrix fabric resources. In contrast, unrelated operations may be performed in “parallel” with separate resources. For example, the first stage of a 64-point FFT can be handled with thirty-two (32) separate matrix fabrics configured as 2-point FFTs. Highly parallelized operation can greatly reduce latency; however, the overall memory fabric resource utilization may be very high.

In one exemplary embodiment, the instruction is received from a processor via a dedicated interface. Dedicated interfaces may be particularly useful where the matrix computation fabric is treated akin to a co-processor or a hardware accelerator. Notably, dedicated interfaces do not require arbitration, and can be operated at very high speeds (in some cases, at the native processor speed). In other embodiments, the instruction is received via a shared interface. The shared interface may be multiplexed in time, resource (e.g., lanes, channels, etc.), or other manner with other concurrently active memory interface functionality.
Common examples of other memory interface functionality include without limitation: data input/output, memory configuration, processor-in-memory (PIM) communication, direct memory access, and/or any other form of blocking memory access. In some variants, the shared interface may include one or more queuing and/or pipelining mechanisms. For example, some memory technologies may implement a pipelined interface so as to maximize memory throughput.

In some embodiments, the instructions may be received from any entity having access to the memory interface. For example, a camera co-processor (image signal processor (ISP)) may be able to directly communicate with the memory device to e.g., write captured data. In certain implementations, the camera co-processor may be able to offload its processing tasks to a matrix fabric of the memory device. For example, the ISP may accelerate/offload/parallelize e.g., color interpolation, white balance, color correction, color conversion, etc. In other examples, a baseband co-processor (BB) may be able to directly communicate with the memory device to e.g., read/write data for transactions over a network interface. The BB processor may be able to offload e.g., FFT/IFFT, channel estimation, beamforming calculations, and/or any number of other networking tasks to a matrix fabric of a memory device. Similarly, video and/or audio codecs often utilize DCT/IDCT transformations, and would benefit from matrix fabric operations. Still other variants of the foregoing will be readily appreciated by artisans of ordinary skill in the related arts, given the contents of the present disclosure.

Various implementations of the present disclosure may support a queue of multiple instructions. In one exemplary embodiment, matrix operations may be queued together. For example, multiple vector-matrix multiplications may be queued together in order to effectuate a matrix multiplication.
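The queuing scheme described above can be sketched in software. The following Python model (the function names are illustrative assumptions, not part of any disclosed apparatus) shows how a matrix-matrix product reduces to a queue of vector-matrix products, one per row of the first operand:

```python
import numpy as np

def vector_matrix_product(v, M):
    # One queued "instruction": a single vector-matrix product,
    # as would be executed against the matrix fabric.
    return v @ M

def queued_matrix_multiply(A, B):
    # Queue one vector-matrix product per row of A; the collected
    # results, stacked row-wise, form the matrix-matrix product A @ B.
    queue = [row for row in A]
    results = [vector_matrix_product(v, B) for v in queue]
    return np.stack(results)

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert np.array_equal(queued_matrix_multiply(A, B), A @ B)
```

Each queued entry is independent of the others, which is why such operations can also be dispatched to parallel fabrics rather than executed serially.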
Similarly, as previously noted, a higher-order transform (e.g., FFT1024) may be achieved by queuing multiple iterations of a lower-order constituent transform (e.g., FFT512, etc.). In yet another example, ISP processing for an image may include multiple iterations over the iteration space (each iteration may be queued in advance). Still other queuing schemes may be readily substituted by artisans of ordinary skill in the related arts with equal success, given the contents of the present disclosure.

In some cases, matrix operations may be cascaded together to achieve matrix operations of a higher rank. For example, a higher-order FFT (e.g., 1024x1024) can be decomposed into multiple iterations of lower-rank FFTs (e.g., four (4) iterations of 512x512 FFTs, sixteen (16) iterations of 256x256 FFTs, etc.). In other examples, arbitrarily sized N-point DFTs (e.g., where N is not a power of 2) can be implemented by cascading DFTs of other sizes. Still other examples of cascaded and/or chained matrix transformations may be substituted with equivalent success, the foregoing being purely illustrative.

As previously alluded to, the ReRAM’s non-volatile nature retains memory contents even when the ReRAM is unpowered. Thus, certain variants of the processor-memory architecture may enable one or more processors to independently power the memory. In some cases, the processor may power the memory when the processor is inactive (e.g., keeping the memory active while the processor is in low power). Independent power management of the memory may be particularly useful for e.g., performing matrix operations in memory, even while the processor is asleep. For example, the memory may receive a plurality of instructions to execute; the processor can transition into a sleep mode until the plurality of instructions have been completed.
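To make the cascading concrete, the following Python sketch (a behavioral software model only; the recursion stands in for queued lower-order fabric operations) builds an N-point FFT from two N/2-point FFTs combined with structurally defined twiddle factors:

```python
import numpy as np

def fft_from_halves(x):
    # One Cooley-Tukey cascade step: an N-point FFT is composed of two
    # N/2-point FFTs (even- and odd-indexed samples) plus a
    # twiddle-factor combination stage. N must be a power of 2.
    N = len(x)
    if N == 1:
        return x.astype(complex)
    even = fft_from_halves(x[0::2])
    odd = fft_from_halves(x[1::2])
    # Twiddle factors are determined by N alone (points on the unit circle).
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.rand(16)
assert np.allclose(fft_from_halves(x), np.fft.fft(x))
```

The same recursive structure underlies decompositions such as an FFT1024 built from queued FFT512 operations.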
Still other implementations may use the non-volatile nature of ReRAM to hold memory contents while the memory is powered off; for example, certain video and/or image processing computations may be held within ReRAM during inactivity.

At step 604 of the method 600, a memory array (or portion thereof) may be converted into a matrix fabric based on the instruction. As used herein, the term “matrix fabric” refers to a plurality of memory cells having a configurable impedance that, when driven with an input vector, yield an output vector and/or matrix. In one embodiment, the matrix fabric may be associated with a portion of the memory map. In some such variants, the portion is configurable in terms of its size and/or location. For example, a configurable memory register may determine whether a bank is configured as a memory or as a matrix fabric. In other variants, the matrix fabric may reuse and/or even block memory interface operation. For example, the memory device may allow the memory interface to be GPIO-based (e.g., in one configuration, the pins of the memory interface may selectively operate as ADDR/DATA during normal operation, or as e.g., FFT16, etc. during matrix operation).

In one embodiment, the instruction identifies a matrix fabric characterized by structurally defined coefficients. In one exemplary embodiment, a matrix fabric contains the coefficients for a structurally defined matrix operation. For example, a matrix fabric for an 8x8 FFT is an 8x8 matrix fabric that has been pre-populated with structurally defined coefficients for an FFT.
In some variants, the matrix fabric may be pre-populated with coefficients of a particular sign (positive, negative) or of a particular radix (the most significant bits, least significant bits, or intermediary bits).

As used herein, the term “structurally defined coefficients” refers to the fact that the coefficients of the matrix multiplication are defined by the matrix structure (e.g., the size of the matrix), not the nature of the operation (e.g., multiplying operands). For example, a structurally defined matrix operation may be identified by e.g., a row and column designation (e.g., 8x8, 16x16, 32x32, 64x64, 128x128, 256x256, etc.). While the foregoing discussions are presented in the context of full-rank matrix operations, deficient matrix operators may be substituted with equivalent success. For example, a matrix operation may have asymmetric columns and/or rows (e.g., 8x16, 16x8, etc.). In fact, many vector-based operations may be treated as a row with a single column, or a column with a single row (e.g., 8x1, 1x8).

In some hybrid hardware/software embodiments, controlling logic (e.g., a memory controller, processor, PIM, etc.) may determine whether resources exist to provide the matrix fabric. In one such embodiment, a matrix operation may be evaluated by a pre-processor to determine whether or not it should be handled within software or within dedicated matrix fabric. For example, if the existing memory and/or matrix fabric usage consumes all of the memory device resources, then the matrix operation may need to be handled within software rather than via the matrix fabric. Under such circumstances, the instruction may be returned incomplete (resulting in traditional matrix operations via processor instructions).
In another such example, configuring a temporary matrix fabric to handle a simple matrix operation may yield so little return that the matrix operation should be handled within software.

Various considerations may be used in determining whether a matrix fabric should be used. For example, memory management may allocate portions of the memory array for memory and/or matrix fabric. In some implementations, portions of the memory array may be statically allocated. Static allocations may be preferable to reduce memory management overhead and/or simplify operational overhead (wear leveling, etc.). In other implementations, portions of the memory array may be dynamically allocated. For example, wear-leveling may be needed to ensure that a memory uniformly degrades in performance (rather than wearing out high-usage areas). Still other variants may statically and/or dynamically allocate different portions; for example, a subset of the memory and/or matrix fabric portions may be dynamically and/or statically allocated.

As a brief aside, wear leveling memory cells can be performed in any discrete amount of memory (e.g., a bank of memory, a chunk of memory, etc.). Wear leveling matrix fabric may use similar techniques; e.g., in one variant, wear leveling matrix fabric portions may require that the entire matrix fabric is moved in aggregate (the crossbar structure cannot be moved in pieces). Alternatively, wear leveling matrix fabric portions may be performed by first decomposing the matrix fabric into constituent matrix computations and dispersing the constituent matrix computations to other locations. More directly, matrix fabric wear leveling may indirectly benefit from the “logical” matrix manipulations that are used in other matrix operations (e.g., decomposition, cascading, parallelization, etc.).
In particular, decomposing a matrix fabric into its constituent matrix fabrics may enable better wear leveling management with only marginally more complex operation (e.g., the additional step of logical combination via MMU).

In one exemplary embodiment, conversion includes reconfiguring the row decoder to operate as a matrix fabric driver that variably drives multiple rows of the memory array. In one variant, the row driver converts a digital value to an analog signal. In one variant, digital-to-analog conversion includes varying a conductance associated with a memory cell in accordance with a matrix coefficient value. Additionally, conversion may include reconfiguring the column decoder to perform analog decoding. In one variant, the column decoder is reconfigured to sense analog signals corresponding to a column of varying conductance cells that are driven by corresponding rows of varying signaling. The column decoder converts an analog signal to a digital value. While the foregoing construction is presented in one particular row-column configuration, other implementations may be substituted with equal success. For example, a column driver may convert a digital value to an analog signal, and a row decoder may convert an analog signal to a digital value. In another such example, a three-dimensional (3D) row-column-depth memory may implement 2D matrices in any permutation (e.g., row-driver/column-decoder, row-driver/depth-decoder, column-driver/depth-decoder, etc.) and/or 3D matrix permutations (e.g., row-driver/column-decoder-driver/depth-decoder).

In one exemplary embodiment, the matrix coefficient values correspond to a structurally determined value. Structurally determined values may be based on the nature of the operation. For example, a fast Fourier transform (FFT) on a vector of length N (where N is a power of 2) can be performed with FFT butterfly operations (of 2x2) or some higher order of butterfly (e.g., 4x4, 8x8, 16x16, etc.).
Notably, the intermediate constituent FFT butterfly operation weighting is defined as a function of the unit circle (e.g., e^(-2πikn/N)), where both n and k are determined from the FFT vector length N; in other words, the FFT butterfly weighting operations are structurally defined according to the length N of the vector. As a practical matter, a variety of different transformations are similar in this regard. For example, the discrete Fourier transform (DFT) and discrete cosine transform (DCT) both use structurally defined coefficients.

In one exemplary embodiment, the matrix fabric itself has structurally determined dimensions. Structurally determined dimensions may be based on the nature of the operation; for example, ISP white balance processing may use a 3x3 matrix (corresponding to different values of Red (R), Green (G), Blue (B), Luminance (Y), Chrominance Red (Cr), Chrominance Blue (Cb), etc.). In another such example, channel matrix estimations and/or beamforming codebooks are often defined in terms of the number of multiple-input-multiple-output (MIMO) paths. For example, a 2x2 MIMO channel has a corresponding 2x2 channel matrix and a corresponding 2x2 beamforming weighting. Various other structurally defined values and/or dimensions useful for matrix operations may be substituted by artisans of ordinary skill in the related arts, given the contents of the present disclosure.

Certain variants may additionally subdivide matrix coefficient values so as to handle manipulations that may be impractical to handle otherwise. Under such circumstances, a matrix fabric may include only a portion of the matrix coefficient values (to perform only a portion of the matrix operation). For example, performing signed operation and/or higher-level radix computations may require levels of manufacturing tolerance that are prohibitively expensive.
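The notion of structurally defined coefficients can be illustrated with a brief Python sketch (a software model only; the `dft_matrix` helper is an assumption for illustration). The entire coefficient matrix of an N-point DFT is determined by the dimension N alone, so it can be computed once, ahead of time, and stored in a LUT:

```python
import numpy as np

def dft_matrix(N):
    # Coefficients depend only on the dimension N (the "structure"),
    # not on any operand: entry (k, n) is e^(-2*pi*i*k*n/N).
    k = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    return np.exp(-2j * np.pi * k * n / N)

# Driving a vector through the structurally defined matrix yields its DFT.
x = np.random.rand(8)
assert np.allclose(dft_matrix(8) @ x, np.fft.fft(x))
```

In a matrix fabric, these pre-computed values would be programmed as cell conductances when the FFT8-style instruction is decoded.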
Signed matrix operation may be split into positive and negative matrix operations (which are later summed by a matrix multiplication unit (MMU) described elsewhere herein). Similarly, high-radix matrix operation may be split into e.g., a most significant bit (MSB) portion, a least significant bit (LSB) portion, and/or any intermediary bits (which may be bit-shifted and summed by the aforementioned MMU). Still other variants would be readily appreciated by artisans of ordinary skill, given the contents of the present disclosure.

In one exemplary embodiment, the matrix coefficient values are determined ahead of time and stored in a look-up-table for later reference. For example, a matrix operation that has both structurally determined dimensions and structurally determined values may be stored ahead of time. As but one such example, an FFT of eight (8) elements has structurally determined dimensions (8x8) and structurally determined values (e.g., e^(-2πikn/8) for the various values of k and n). An FFT8 instruction may result in the configuration of an 8x8 matrix fabric that is pre-populated with the corresponding FFT8 structurally determined values. As another such example, antenna beamforming coefficients are often defined ahead of time within a codebook; a wireless network may identify a corresponding index within the codebook to configure antenna beamforming. For example, a MIMO codebook may identify the possible configurations for a 4x4 MIMO system; during operation, the selected configuration can be retrieved from the codebook based on an index thereto.

While the foregoing examples are presented in the context of structurally defined dimensions and/or values, other embodiments may use dimensions and/or values that are defined based on one or more other system parameters. For example, less granularity may be required for low-power operation. Similarly, as previously alluded to, various processing considerations may weigh in favor of (or against) performing matrix operations within a matrix fabric.
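The sign and radix splitting described above may be modeled in software. In the following Python sketch (illustrative only; it assumes non-negative integer coefficients for the radix split and an assumed 4-bit field width), the MMU's role is played by the final subtraction or shift-and-sum:

```python
import numpy as np

def signed_split_matmul(v, M):
    # A fabric may only store one sign of coefficient, so split M into
    # its positive and negative parts; the MMU subtracts the two
    # partial products to recover the signed result.
    pos = np.maximum(M, 0)
    neg = np.maximum(-M, 0)
    return v @ pos - v @ neg

def radix_split_matmul(v, M, bits=4):
    # Split each (non-negative integer) coefficient into a high and
    # low bit-field; the MMU bit-shifts and sums the partial products.
    lo = M & ((1 << bits) - 1)
    hi = M >> bits
    return (v @ hi << bits) + v @ lo

v = np.array([5, 7])
M_signed = np.array([[3, -2], [-1, 4]])
assert np.array_equal(signed_split_matmul(v, M_signed), v @ M_signed)

M_radix = np.array([[0x3A, 0x1F], [0x22, 0x07]])
assert np.array_equal(radix_split_matmul(v, M_radix), v @ M_radix)
```

Each partial product could be produced by a separate, coarser-tolerance fabric, with only the cheap digital recombination performed in the MMU.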
Additionally, matrix operation may affect other memory considerations including without limitation: wear leveling, memory bandwidth, process-in-memory bandwidth, power consumption, row, column, and/or depth decoding complexity, etc. Artisans of ordinary skill in the related arts given the contents of the present disclosure may substitute a variety of other considerations, the foregoing being purely illustrative.

At step 606 of the method 600, one or more matrix multiplication units may be configured on the basis of the instruction. As previously alluded to, certain matrix fabrics may implement logical operations (mathematical identities) to handle a single stage of a matrix operation; however, multiple stages of matrix fabrics may be cascaded together to achieve more complex matrix operations. In one exemplary embodiment, a first matrix is used to calculate positive products of a matrix operation and a second matrix is used to calculate the negative products of a matrix operation. The resulting positive and negative products can be compiled within an MMU to provide a signed matrix multiplication. In one exemplary embodiment, a first matrix is used to calculate a first radix portion of a matrix operation and a second matrix is used to calculate a second radix portion of a matrix operation. The resulting radix portions can be bit-shifted and/or summed within an MMU to provide a larger-radix product.

As a brief aside, logical matrix operations are distinguished from analog matrix operations. The exemplary matrix fabric converts analog voltages or currents into digital values that are read by the matrix multiplication unit (MMU). Logical operations can manipulate digital values via mathematical properties (e.g., via matrix decomposition, etc.); analog voltages or currents cannot be manipulated in this manner. More generally, different logical manipulations can be performed with groups of matrices.
For example, a matrix can be decomposed or factorized into one or more constituent matrices. Similarly, multiple constituent matrices can be aggregated or combined into a single matrix. Additionally, matrices may be expanded in row and/or column to create a deficient matrix of larger dimension (but identical rank). Such logic may be used to implement many higher-order matrix operations. For example, multiplying two matrices together may be decomposed as a number of vector-matrix multiplications. These vector-matrix multiplications may be further implemented as multiply-accumulate logic within a matrix multiplication unit (MMU). In other words, even non-unary operations may be handled as a series of piece-wise unary matrix operations. More generally, artisans of ordinary skill in the related arts will readily appreciate that any matrix operation which can be expressed in whole, or in part, as a unary operation may greatly benefit from the various principles described herein.

Various embodiments of the present disclosure use matrix multiplication units (MMUs) as glue logic between multiple constituent matrix fabrics. Additionally, MMU operation may be selectively switched for connectivity to various rows and/or columns. Not all matrix fabrics may be used concurrently; thus, depending on the current processing and/or memory usage, matrix fabrics may be selectively connected to MMUs. For example, a single MMU may be dynamically connected to different matrix fabrics.

In some embodiments, controlling logic (e.g., a memory controller, processor, PIM, etc.) may determine whether resources exist to provide the MMU manipulations within e.g., the column decoder or elsewhere. For example, the current MMU load may be evaluated by a pre-processor to determine whether or not an MMU may be heavily loaded. Notably, the MMU is primarily used for logical manipulations, thus any processing entity with equivalent logical functionality may assist with the MMU’s tasks.
For example, a processor-in-memory (PIM) may offload MMU manipulations. Similarly, matrix fabric results may be directly provided to the host processor (which can perform logical manipulations in software).

More generally, various embodiments of the present disclosure contemplate sharing MMU logic among multiple different matrix fabrics. The sharing may be based on e.g., a time-sharing scheme. For example, the MMU may be assigned to a first matrix fabric during one time slot, and a second matrix fabric during another time slot. In other words, unlike the physical structure of the matrix fabric (which is statically allocated for the duration of the matrix operation), the MMU performs logical operations that can be scheduled, subdivided, allocated, reserved, and/or partitioned in any number of ways. More generally, various embodiments of the matrix fabric are memory-based and non-volatile. As a result, the matrix fabric may be configured in advance, and read from when needed; the non-volatile nature ensures that the matrix fabric retains contents without requiring processing overhead even if e.g., the memory device is powered off.

If both matrix fabrics and corresponding matrix multiplication units (MMUs) are successfully converted and configured, then at step 608 of the method 600, the matrix fabric is driven based on the instruction, and a logical result is calculated with the one or more matrix multiplication units at step 610. In one embodiment, one or more operands are converted into an electrical signal for analog computation via the matrix fabric. The analog computation results from driving an electrical signal through the matrix fabric elements; for example, the voltage drop is a function of a coefficient of the matrix fabric. The analog computation result is sensed and converted back to a digital domain signal.
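The analog computation of steps 608 and 610 can be summarized with a simple numerical model. The Python sketch below (a behavioral model under the assumption of ideal Ohm's-law cells; it is not a circuit simulation) sums the per-cell current contributions V[i]·G[i][j] on each column, which is precisely a vector-matrix product; the column decoder would digitize these sums for subsequent logical combination in the MMU:

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    # Each row is driven with an input voltage; by Ohm's and
    # Kirchhoff's laws, the current sensed on column j is the sum
    # over rows i of V[i] * G[i, j] -- an analog vector-matrix product.
    rows, cols = conductances.shape
    currents = np.zeros(cols)
    for j in range(cols):
        for i in range(rows):
            currents[j] += voltages[i] * conductances[i, j]
    return currents  # the column decoder digitizes these sensed sums

G = np.array([[0.5, 1.0], [0.25, 0.75], [1.0, 0.0]])  # cell conductances
V = np.array([1.0, 2.0, 4.0])                          # row drive voltages
assert np.allclose(crossbar_vmm(V, G), V @ G)
```

The explicit per-cell loop mirrors the physical picture: every cell contributes one multiply, and the column wire performs the accumulation "for free."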
Thereafter, the one or more digital domain values are manipulated with the one or more matrix multiplication units (MMUs) to create a logical result.

It will be recognized that while certain aspects of the disclosure are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed embodiments, or the order of performance of two or more steps permuted. Furthermore, features from two or more of the methods may be combined. All such variations are considered to be encompassed within the disclosure described and claimed herein.

The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.

Information and signals described herein may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims.

It will be further appreciated that while certain steps and aspects of the various methods and apparatus described herein may be performed by a human being, the disclosed aspects and individual methods and apparatus are generally computerized/computer-implemented. Computerized apparatus and methods are necessary to fully implement these aspects for any number of reasons including, without limitation, commercial viability, practicality, and even feasibility (i.e., certain steps/processes simply cannot be performed by a human being in any viable fashion).

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable apparatus (e.g., storage medium). Computer-readable media include both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.
A processor includes circuitry to decode at least one instruction and an execution unit. The decoded instruction may compute a floating point result. The execution unit includes circuitry to execute the instruction to determine the floating point result, compute the amount of precision lost in a mantissa of the floating point result, compare the amount of precision lost to a numeric accumulation error precision threshold, determine whether a numeric accumulation error occurred based on the comparison, and write a value to a flag. The amount of precision lost corresponds to a plurality of bits lost in the mantissa of the floating point result. The value to be written to the flag may be based on the determination that the numeric accumulation error occurred. The flag may be for notification that the numeric accumulation error occurred.
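As a rough behavioral sketch of the comparison described above (in Python, using a simplified integer-mantissa model; the helper names and the truncation-based loss model are assumptions for illustration, not the processor's actual circuitry), counting lost mantissa bits and testing them against a threshold could look like this:

```python
def precision_loss_bits(exact_mantissa, kept_bits, total_bits):
    # Count the significant bits discarded when an exact mantissa of
    # total_bits is truncated so that only kept_bits are stored.
    dropped = exact_mantissa & ((1 << (total_bits - kept_bits)) - 1)
    return dropped.bit_length()

def accumulation_error_flag(exact_mantissa, kept_bits, total_bits, threshold):
    # The flag is set when the number of lost bits meets or exceeds
    # the numeric accumulation error precision threshold.
    lost = precision_loss_bits(exact_mantissa, kept_bits, total_bits)
    return lost >= threshold

# 12-bit exact mantissa 0b101101110111 stored with only 8 kept bits:
# the low 4 bits (0b0111) are discarded -> 3 significant bits of loss.
assert precision_loss_bits(0b101101110111, 8, 12) == 3
assert accumulation_error_flag(0b101101110111, 8, 12, threshold=3) is True
assert accumulation_error_flag(0b101101110111, 8, 12, threshold=4) is False
```

A hardware implementation would derive the threshold from the instruction or a floating point control register and write the comparison outcome into a status flag, optionally raising an exception when unmasked.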
CLAIMS:1. A processor, comprising:circuitry to decode at least one instruction, the instruction to compute a floating point result;an execution unit including circuitry to:execute the instruction to determine the floating point result;compute an amount of precision lost in a mantissa of the floating point result, the amount of precision lost corresponding to a plurality of bits lost in the mantissa of the floating point result;compare the amount of precision lost in the mantissa of the floating point result to a numeric accumulation error precision threshold;determine whether a numeric accumulation error occurred based on the comparison between the amount of precision lost in the mantissa of the floating point result and the numeric accumulation error precision threshold; andwrite a value to a flag for notification that the numeric accumulation error occurred, the value based on the determination that the numeric accumulation error occurred.2. The processor of claim 1, wherein:the execution unit further includes circuitry to determine the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register; andthe comparison between the amount of precision lost and the numeric accumulation error precision threshold is based on the numeric accumulation error precision threshold that is determined.3. The processor of claim 1, wherein the execution unit further includes circuitry to: determine whether a mask bit is set to prevent signaling of the numeric accumulation error; andsignal an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred.4. 
The processor of claim 1, wherein:the execution unit further includes circuitry to: round the floating point result;store the rounded floating point result; andthe amount of precision lost in the mantissa of the floating point result represents a percentage of bits in the mantissa of the floating point result that is lost, the percentage of bits that is lost includes:a percentage of bits that is lost when the floating point result is rounded; ora percentage of bits that is lost when the rounded floating point result is stored.5. The processor of claim 1, the execution unit further comprising circuitry to:determine a numeric accumulation error non-zero precision flag based on the instruction, a previous instruction, or a floating point control register; andcontrol, using the numeric accumulation error non-zero precision flag, whether to ignore at least one trailing bit of the mantissa of the floating point result with a value of zero, wherein the computation of the amount of precision lost in the mantissa of the floating point result is based on the numeric accumulation error non-zero precision flag.6. The processor of claim 1, the execution unit further includes circuitry to:determine a numeric accumulation error non-zero precision flag based on the instruction, a previous instruction, or a floating point control register; andcontrol, using the numeric accumulation error non-zero precision flag, whether to ignore bits of the mantissa of the floating point result with a value of zero, wherein the computation of the amount of precision lost in the mantissa of the floating point result is based on the numeric accumulation error non-zero precision flag.7. 
The processor of claim 1, wherein:
the floating point result is computed from source values;
the instruction is a fused multiply-add instruction; and
the execution of the instruction includes circuitry to:
compute a sum based on the source values; and
compute the floating point result based on the sum and at least one of the source values.

8. A method for detecting numeric accumulation error, comprising:
decoding at least one instruction, the instruction for computing a floating point result;
executing the instruction to determine the floating point result;
computing the amount of precision lost in a mantissa of the floating point result, the amount of precision lost corresponding to a plurality of bits lost in the mantissa of the floating point result;
comparing the amount of precision lost in the mantissa of the floating point result to a numeric accumulation error precision threshold;
determining whether a numeric accumulation error occurred based on the comparison between the amount of precision lost in the mantissa of the floating point result and the numeric accumulation error precision threshold; and
writing a value to a flag for notification that the numeric accumulation error occurred, the value based on the determination that the numeric accumulation error occurred.

9. The method of claim 8, further comprising:
determining the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register; and
performing the comparison between the amount of precision lost and the numeric accumulation error precision threshold using the numeric accumulation error precision threshold that is determined.

10. The method of claim 8, further comprising:
determining whether a mask bit is set to prevent signaling of the numeric accumulation error; and
signaling an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred.

11.
The method of claim 8, further comprising:
rounding the floating point result;
storing the rounded floating point result; and
the amount of precision lost in the mantissa of the floating point result representing a percentage of bits in the mantissa of the floating point result that is lost, the percentage of bits that is lost including:
a percentage of bits that is lost when rounding the floating point result; or
a percentage of bits that is lost when storing the rounded floating point result.

12. The method of claim 8, further comprising:
determining a numeric accumulation error non-zero precision flag based on the instruction, a previous instruction, or a floating point control register; and
controlling, using the numeric accumulation error non-zero precision flag, whether to ignore at least one trailing bit of the mantissa of the floating point result with a value of zero, wherein the computation of the amount of precision lost in the mantissa of the floating point result is based on the numeric accumulation error non-zero precision flag.

13. The method of claim 8, further comprising:
determining a numeric accumulation error non-zero precision flag based on the instruction, a previous instruction, or a floating point control register; and
controlling, using the numeric accumulation error non-zero precision flag, whether to ignore bits of the mantissa of the floating point result with a value of zero, wherein the computation of the amount of precision lost in the mantissa of the floating point result is based on the numeric accumulation error non-zero precision flag.

14. The method of claim 8, wherein:
the floating point result is computed from source values;
the instruction is a fused multiply-add instruction; and
the execution of the instruction includes:
computing a sum based on the source values; and
computing the floating point result based on the sum and at least one of the source values.

15.
An execution unit, comprising circuitry to:
execute at least one instruction to determine a floating point result;
compute an amount of precision lost in a mantissa of the floating point result, the amount of precision lost corresponding to a plurality of bits lost in the mantissa of the floating point result;
compare the amount of precision lost in the mantissa of the floating point result to a numeric accumulation error precision threshold;
determine whether a numeric accumulation error occurred based on the comparison between the amount of precision lost in the mantissa of the floating point result and the numeric accumulation error precision threshold; and
write a value to a flag for notification that the numeric accumulation error occurred, the value based on the determination that the numeric accumulation error occurred.

16. The execution unit of claim 15, the execution unit further comprising circuitry to determine the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register, wherein the comparison between the amount of precision lost and the numeric accumulation error precision threshold is based on the numeric accumulation error precision threshold that is determined.

17. The execution unit of claim 15, further comprising circuitry to:
determine whether a mask bit is set to prevent signaling of the numeric accumulation error; and
signal an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred.

18.
The execution unit of claim 15, further comprising circuitry to round the floating point result and store the rounded floating point result, wherein the amount of precision lost in the mantissa of the floating point result represents a percentage of bits in the mantissa of the floating point result that is lost, the percentage of bits that is lost includes:
a percentage of bits that is lost when the floating point result is rounded; or
a percentage of bits that is lost when the rounded floating point result is stored.

19. The execution unit of claim 15, further comprising circuitry to:
determine a numeric accumulation error non-zero precision flag based on the instruction, a previous instruction, or a floating point control register; and
control, using the numeric accumulation error non-zero precision flag, whether to ignore at least one trailing bit of the mantissa of the floating point result with a value of zero, wherein the computation of the amount of precision lost in the mantissa of the floating point result is based on the numeric accumulation error non-zero precision flag.

20. The execution unit of claim 15, further comprising circuitry to:
determine a numeric accumulation error non-zero precision flag based on the instruction, a previous instruction, or a floating point control register; and
control, using the numeric accumulation error non-zero precision flag, whether to ignore bits of the mantissa of the floating point result with a value of zero, wherein the computation of the amount of precision lost in the mantissa of the floating point result is based on the numeric accumulation error non-zero precision flag.

21. The execution unit of claim 15, wherein:
the floating point result is computed from source values;
the instruction is a fused multiply-add instruction; and
the execution of the instruction includes circuitry to:
compute a sum based on the source values; and
compute the floating point result based on the sum and at least one of the source values.

22.
An apparatus for detecting numeric accumulation error, comprising means for performing any of the methods of Claims 8 to 14.
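The detection flow recited in claims 1, 8, and 15 can be sketched in software. The model below is a hypothetical illustration only, not the claimed hardware: the function names, the default 8-bit threshold, and the use of exact rational arithmetic as a reference are all assumptions made for the sketch. It counts how many low-order bits of a stored double-precision result's 52-bit fraction are unreliable relative to an exact reference value, compares that count to a precision threshold, and returns a flag.

```python
import math
from fractions import Fraction

def mantissa_bits_lost(exact, stored):
    """Count the low-order fraction bits of `stored` (a double) that are
    unreliable relative to the exact value. Illustrative model only."""
    err = abs(Fraction(exact) - Fraction(stored))
    if err == 0:
        return 0
    _, e = math.frexp(stored)        # stored = m * 2**e with 0.5 <= |m| < 1
    ulp = Fraction(2) ** (e - 53)    # spacing of doubles near `stored`
    # A correctly rounded result is off by at most half an ulp; any error
    # beyond that contaminates progressively higher mantissa bits.
    return max(0, math.floor(math.log2(err / ulp)) + 1)

def accumulation_error_flag(exact, stored, threshold_bits=8):
    """Model of the claimed comparison: the flag is set when more mantissa
    bits were lost than the (hypothetical) precision threshold permits."""
    return mantissa_bits_lost(exact, stored) > threshold_bits
```

For example, repeatedly adding 0.1 in double precision drifts away from the exact rational sum, so after a million additions a sizable run of low-order mantissa bits of the running total is no longer trustworthy and the flag above is raised, while a single correctly rounded operation loses at most the rounding bit and leaves the flag clear.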
INSTRUCTION AND LOGIC FOR DETECTING NUMERIC ACCUMULATION ERROR

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority to U.S. Nonprovisional Patent Application No. 15/280,564, filed 29 September 2016, entitled "INSTRUCTION AND LOGIC FOR DETECTING NUMERIC ACCUMULATION ERROR", which is incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

[0002] The present disclosure pertains to the field of processing logic, microprocessors, and associated instruction set architecture that, when executed by the processor or other processing logic, perform logical, mathematical, or other functional operations.

DESCRIPTION OF RELATED ART

[0003] Multiprocessor systems are becoming more and more common. Applications of multiprocessor systems include dynamic domain partitioning all the way down to desktop computing. In order to take advantage of multiprocessor systems, code to be executed may be separated into multiple threads for execution by various processing entities. Each thread may be executed in parallel with one another. Instructions as they are received on a processor may be decoded into terms or instruction words that are native, or more native, for execution on the processor. Processors may be implemented in a system on a chip. Floating point numbers may be added, subtracted, or multiplied.
Such floating point operations may be used in deep learning mathematical simulations.

DESCRIPTION OF THE FIGURES

[0004] Embodiments are illustrated by way of example and not limitation in the Figures of the accompanying drawings:

[0005] FIGURE 1A is a block diagram of an exemplary computer system formed with a processor that may include execution units to execute an instruction, in accordance with embodiments of the present disclosure;

[0006] FIGURE 1B illustrates a data processing system, in accordance with embodiments of the present disclosure;

[0007] FIGURE 1C illustrates other embodiments of a data processing system for performing text string comparison operations;

[0008] FIGURE 2 is a block diagram of the micro-architecture for a processor that may include logic circuits to perform instructions, in accordance with embodiments of the present disclosure;

[0009] FIGURE 3A illustrates various packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure;

[0010] FIGURE 3B illustrates possible in-register data storage formats, in accordance with embodiments of the present disclosure;

[0011] FIGURE 3C illustrates various signed and unsigned packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure;

[0012] FIGURE 3D illustrates an embodiment of an operation encoding format;

[0013] FIGURE 3E illustrates another possible operation encoding format having forty or more bits, in accordance with embodiments of the present disclosure;

[0014] FIGURE 3F illustrates yet another possible operation encoding format, in accordance with embodiments of the present disclosure;

[0015] FIGURE 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline, in accordance with embodiments of the present disclosure;

[0016] FIGURE 4B is a block diagram illustrating an in-order architecture core and a register renaming logic,
out-of-order issue/execution logic to be included in a processor, in accordance with embodiments of the present disclosure;

[0017] FIGURE 5A is a block diagram of a processor, in accordance with embodiments of the present disclosure;

[0018] FIGURE 5B is a block diagram of an example implementation of a core, in accordance with embodiments of the present disclosure;

[0019] FIGURE 6 is a block diagram of a system, in accordance with embodiments of the present disclosure;

[0020] FIGURE 7 is a block diagram of a second system, in accordance with embodiments of the present disclosure;

[0021] FIGURE 8 is a block diagram of a third system, in accordance with embodiments of the present disclosure;

[0022] FIGURE 9 is a block diagram of a system-on-a-chip, in accordance with embodiments of the present disclosure;

[0023] FIGURE 10 illustrates a processor containing a central processing unit and a graphics processing unit which may perform at least one instruction, in accordance with embodiments of the present disclosure;

[0024] FIGURE 11 is a block diagram illustrating the development of IP cores, in accordance with embodiments of the present disclosure;

[0025] FIGURE 12 illustrates how an instruction of a first type may be emulated by a processor of a different type, in accordance with embodiments of the present disclosure;

[0026] FIGURE 13 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the present disclosure;

[0027] FIGURE 14 is a block diagram of an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;

[0028] FIGURE 15 is a more detailed block diagram of an instruction set architecture of a processor, in accordance with embodiments of the present disclosure;

[0029] FIGURE 16 is a block diagram of an execution pipeline for an instruction set architecture of a
processor, in accordance with embodiments of the present disclosure;

[0030] FIGURE 17 is a block diagram of an electronic device for utilizing a processor, in accordance with embodiments of the present disclosure;

[0031] FIGURE 18 is a block diagram of a system for detecting numeric accumulation error, in accordance with embodiments of the present disclosure;

[0032] FIGURE 19A is a block diagram of a floating point unit for accumulating floating point numbers, in accordance with embodiments of the present disclosure;

[0033] FIGURE 19B is a block diagram of an execution unit for detecting numeric accumulation error in floating point numbers with aligned exponents, in accordance with embodiments of the present disclosure; and

[0034] FIGURE 20 is a diagram of operation of a method for detecting numeric accumulation error, in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION

[0035] The following description describes an instruction and processing logic for detecting numeric accumulation error. The instruction and processing logic may be implemented on an out-of-order processor. In the following description, numerous specific details such as processing logic, processor types, micro-architectural conditions, events, enablement mechanisms, and the like are set forth in order to provide a more thorough understanding of embodiments of the present disclosure. It will be appreciated, however, by one skilled in the art that the embodiments may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring embodiments of the present disclosure.

[0036] Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices.
Similar techniques and teachings of embodiments of the present disclosure may be applied to other types of circuits or semiconductor devices that may benefit from higher pipeline throughput and improved performance. The teachings of embodiments of the present disclosure are applicable to any processor or machine that stores data to memory. However, the embodiments are not limited to processors or machines that perform 512-bit, 256-bit, 128-bit, 64-bit, 32-bit, or 16-bit data operations and may be applied to any processor and machine in which manipulation or management of data may be performed. In addition, the following description provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense, as they are merely intended to provide examples of embodiments of the present disclosure rather than to provide an exhaustive list of all possible implementations of embodiments of the present disclosure.

[0037] Although the below examples describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present disclosure may be accomplished by way of data or instructions stored on a machine-readable, tangible medium which, when performed by a machine, cause the machine to perform functions consistent with at least one embodiment of the disclosure. In one embodiment, functions associated with embodiments of the present disclosure are embodied in machine-executable instructions. The instructions may be used to cause a general-purpose or special-purpose processor that may be programmed with the instructions to perform the steps of the present disclosure.
Embodiments of the present disclosure may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present disclosure. Furthermore, steps of embodiments of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the steps, or by any combination of programmed computer components and fixed-function hardware components. Throughout this disclosure, unless explicitly stated otherwise, a compound form of a reference numeral refers to the element generically or collectively. Thus, for example, widget 101A or 101-1 refers to an instance of a widget class, which may be referred to collectively as widgets 101 and any one of which may be referred to generically as widget 101.

[0038] Instructions used to program circuitry to perform embodiments of the present disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions may be distributed via a network or by way of other computer-readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
Accordingly, the computer-readable medium may include any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

[0039] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as may be useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, designs, at some stage, may reach a level of data representing the physical placement of various devices in the hardware model. In cases wherein some semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage such as a disc may be the machine-readable medium to store information transmitted via optical or electrical wave, modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or retransmission of the electrical signal is performed, a new copy may be made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

[0040] In modern processors, a number of different execution units may be used to process and execute a variety of code and instructions.
Some instructions may be quicker to complete while others may take a number of clock cycles to complete. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible. However, there may be certain instructions that have greater complexity and require more in terms of execution time and processor resources, such as floating point instructions, load/store operations, data moves, etc.

[0041] As more computer systems are used in internet, text, and multimedia applications, additional processor support has been introduced over time. In one embodiment, an instruction set may be associated with one or more computer architectures, including data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O).

[0042] In one embodiment, the instruction set architecture (ISA) may be implemented by one or more micro-architectures, which may include processor logic and circuits used to implement one or more instruction sets. Accordingly, processors with different micro-architectures may share at least a portion of a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. Similarly, processors designed by other processor development companies, such as ARM Holdings, Ltd., MIPS, or their licensees or adopters, may share at least a portion of a common instruction set, but may include different processor designs.
For example, the same register architecture of the ISA may be implemented in different ways in different micro-architectures using new or well-known techniques, including dedicated physical registers and one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB), and a retirement register file). In one embodiment, registers may include one or more registers, register architectures, register files, or other register sets that may or may not be addressable by a software programmer.

[0043] An instruction may include one or more instruction formats. In one embodiment, an instruction format may indicate various fields (number of bits, location of bits, etc.) to specify, among other things, the operation to be performed and the operands on which that operation will be performed. In a further embodiment, some instruction formats may be further defined by instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields and/or defined to have a given field interpreted differently. In one embodiment, an instruction may be expressed using an instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies or indicates the operation and the operands upon which the operation will operate.

[0044] Scientific, financial, auto-vectorized general purpose, RMS (recognition, mining, and synthesis), and visual and multimedia applications (e.g., 2D/3D graphics, image processing, video compression/decompression, voice recognition algorithms, and audio manipulation) may require the same operation to be performed on a large number of data items. In one embodiment, Single Instruction Multiple Data (SIMD) refers to a type of instruction that causes a processor to perform an operation on multiple data elements.
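The packed-data organization behind SIMD can be sketched in software. The model below is illustrative only: the function names are hypothetical, and real SIMD hardware operates on all lanes in a single instruction rather than a loop. It treats one 64-bit word as four independent 16-bit lanes and adds two such words lane by lane with 16-bit wraparound.

```python
MASK16 = 0xFFFF

def pack4x16(a, b, c, d):
    """Pack four 16-bit values into one 64-bit word (lane 0 in the low bits)."""
    return (d & MASK16) << 48 | (c & MASK16) << 32 | (b & MASK16) << 16 | (a & MASK16)

def unpack4x16(word):
    """Return the four 16-bit lanes of a 64-bit word, low lane first."""
    return [(word >> (16 * i)) & MASK16 for i in range(4)]

def simd_add16(x, y):
    """Lane-wise 16-bit add with wraparound, as a single packed-add would do."""
    lanes = [(xi + yi) & MASK16 for xi, yi in zip(unpack4x16(x), unpack4x16(y))]
    return pack4x16(*lanes)
```

Note that an overflow in one lane (e.g., 0xFFFF + 1) wraps within that lane and never carries into its neighbor, which is precisely what distinguishes a packed add from an ordinary 64-bit integer add.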
SIMD technology may be used in processors that may logically divide the bits in a register into a number of fixed-sized or variable-sized data elements, each of which represents a separate value. For example, in one embodiment, the bits in a 64-bit register may be organized as a source operand containing four separate 16-bit data elements, each of which represents a separate 16-bit value. This type of data may be referred to as a 'packed' data type or 'vector' data type, and operands of this data type may be referred to as packed data operands or vector operands. In one embodiment, a packed data item or vector may be a sequence of packed data elements stored within a single register, and a packed data operand or a vector operand may be a source or destination operand of a SIMD instruction (or 'packed data instruction' or a 'vector instruction'). In one embodiment, a SIMD instruction specifies a single vector operation to be performed on two source vector operands to generate a destination vector operand (also referred to as a result vector operand) of the same or different size, with the same or different number of data elements, and in the same or different data element order.

[0045] SIMD technology, such as that employed by the Intel® Core™ processors having an instruction set including x86, MMX™, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSE4.1, and SSE4.2 instructions, ARM processors, such as the ARM Cortex® family of processors having an instruction set including the Vector Floating Point (VFP) and/or NEON instructions, and MIPS processors, such as the Loongson family of processors developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, has enabled a significant improvement in application performance (Core™ and MMX™ are registered trademarks or trademarks of Intel Corporation of Santa Clara, Calif.).

[0046] In one embodiment, destination and source registers/data may be generic terms to represent the source and destination of
the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having other names or functions than those depicted. For example, in one embodiment, "DEST1" may be a temporary storage register or other storage area, whereas "SRC1" and "SRC2" may be first and second source storage registers or other storage areas, and so forth. In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register). In one embodiment, one of the source registers may also act as a destination register by, for example, writing back the result of an operation performed on the first and second source data to one of the two source registers serving as a destination register.

[0047] FIGURE 1A is a block diagram of an exemplary computer system formed with a processor that may include execution units to execute an instruction, in accordance with embodiments of the present disclosure. System 100 may include a component, such as a processor 102, to employ execution units including circuits with logic to perform algorithms to process data, in accordance with the present disclosure, such as in the embodiment described herein. System 100 may be representative of processing systems based on the PENTIUM® III, PENTIUM® 4, Xeon™, Itanium®, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux, for example), embedded software, and/or graphical user interfaces may also be used.
Thus, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry, programmable circuitry, and software.

[0048] Embodiments are not limited to computer systems. Embodiments of the present disclosure may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications may include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that may perform one or more instructions in accordance with at least one embodiment.

[0049] Computer system 100 may include a processor 102 that may include one or more execution units 108 to perform an algorithm to perform at least one instruction in accordance with one embodiment of the present disclosure. One embodiment may be described in the context of a single processor desktop or server system, but other embodiments may be included in a multiprocessor system. System 100 may be an example of a 'hub' system architecture. System 100 may include a processor 102 for processing data signals. Processor 102 may include a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In one embodiment, processor 102 may be coupled to a processor bus 110 that may transmit data signals between processor 102 and other components in system 100. The elements of system 100 may perform conventional functions that are well known to those familiar with the art.

[0050] In one embodiment, processor 102 may include a Level 1 (L1) internal cache memory 104.
Depending on the architecture, the processor 102 may have a single internal cache or multiple levels of internal cache. In another embodiment, the cache memory may reside external to processor 102. Other embodiments may also include a combination of both internal and external caches depending on the particular implementation and needs. Register file 106 may store different types of data in various registers including integer registers, floating point registers, status registers, and an instruction pointer register.

[0051] Execution unit 108, including circuits with logic to perform integer and floating point operations, also resides in processor 102. Processor 102 may also include a microcode (ucode) ROM that stores microcode for certain macroinstructions. In one embodiment, execution unit 108 may include circuits with logic to handle a packed instruction set 109. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications may be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This may eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time.

[0052] Embodiments of an execution unit 108 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 may include a memory 120. Memory 120 may be implemented as a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or another memory device.
Memory 120 may store instructions and/or data represented by data signals that may be executed by processor 102.

[0053] A system logic chip 116 may be coupled to processor bus 110 and memory 120. System logic chip 116 may include a memory controller hub (MCH). Processor 102 may communicate with MCH 116 via a processor bus 110. MCH 116 may provide a high bandwidth memory path 118 to memory 120 for instruction and data storage and for storage of graphics commands, data and textures. MCH 116 may direct data signals between processor 102, memory 120, and other components in system 100 and may bridge the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 may provide a graphics port for coupling to a graphics controller 112. MCH 116 may be coupled to memory 120 through a memory interface 118. Graphics card 112 may be coupled to MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114.

[0054] System 100 may use a proprietary hub interface bus 122 to couple MCH 116 to I/O controller hub (ICH) 130. In one embodiment, ICH 130 may provide direct connections to some I/O devices via a local I/O bus. The local I/O bus may include a high-speed I/O bus for connecting peripherals to memory 120, chipset, and processor 102. Examples may include the audio controller, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller 134. Data storage device 124 may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.

[0055] For another embodiment of a system, an instruction in accordance with one embodiment may be used with a system on a chip. One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system may include a flash memory.
The flash memory may be located on the same die as the processor and other system components. Additionally, other logic blocks such as a memory controller or graphics controller may also be located on a system on a chip.

[0056] FIGURE 1B illustrates a data processing system 140 which implements the principles of embodiments of the present disclosure. It will be readily appreciated by one of skill in the art that the embodiments described herein may operate with alternative processing systems without departure from the scope of embodiments of the disclosure.

[0057] Computer system 140 comprises a processing core 159 for performing at least one instruction in accordance with one embodiment. In one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and, by being represented on a machine-readable media in sufficient detail, may be suitable to facilitate said manufacture.

[0058] Processing core 159 comprises an execution unit 142, a set of register files 145, and a decoder 144. Processing core 159 may also include additional circuitry (not shown) which may be unnecessary to the understanding of embodiments of the present disclosure. Execution unit 142 may execute instructions received by processing core 159. In addition to performing typical processor instructions, execution unit 142 may perform instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 may include instructions for performing embodiments of the disclosure and other packed instructions. Execution unit 142 may be coupled to register file 145 by an internal bus. Register file 145 may represent a storage area on processing core 159 for storing information, including data.
As previously mentioned, it is understood that the storage area used for storing the packed data might not be critical. Execution unit 142 may be coupled to decoder 144. Decoder 144 may decode instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. In one embodiment, the decoder may interpret the opcode of the instruction, which will indicate what operation should be performed on the corresponding data indicated within the instruction.

[0059] Processing core 159 may be coupled with bus 141 for communicating with various other system devices, which may include but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152. In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153. Such I/O devices may include but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth wireless UART 157 and I/O expansion interface 158.

[0060] One embodiment of data processing system 140 provides for mobile, network and/or wireless communications and a processing core 159 that may perform SIMD operations including a text string comparison operation.
Processing core 159 may be programmed with various audio, video, imaging and communications algorithms including discrete transformations such as a Walsh-Hadamard transform, a fast Fourier transform (FFT), a discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM).

[0061] FIGURE 1C illustrates other embodiments of a data processing system that performs SIMD text string comparison operations. In one embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. Input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 may perform operations including instructions in accordance with one embodiment. In one embodiment, processing core 170 may be suitable for manufacture in one or more process technologies and, by being represented on a machine-readable media in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160 including processing core 170.

[0062] In one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register files 164. One embodiment of main processor 166 comprises a decoder 165A to recognize instructions of instruction set 163 including instructions in accordance with one embodiment for execution by execution unit 162. In other embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165A (shown as 165B) to decode instructions of instruction set 163.
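The SIMD text string comparison operation referred to above can be illustrated, in purely software terms, as comparing all byte elements of two packed operands in a single conceptual step. The following Python sketch is only an illustration of that element-wise idea; it does not model any particular instruction's encoding, flags, or aggregation modes:

```python
def packed_byte_compare(a: bytes, b: bytes) -> int:
    """Compare two 16-byte packed operands element-wise.

    Returns a 16-bit mask in which bit i is set when byte element i
    of the two operands is equal -- a simplified software model of a
    SIMD packed-compare step (illustrative only).
    """
    assert len(a) == len(b) == 16, "operands model 128-bit XMM registers"
    mask = 0
    for i, (x, y) in enumerate(zip(a, b)):
        if x == y:
            mask |= 1 << i
    return mask

# All sixteen byte elements are compared in one conceptual operation;
# the mask records which positions of the two strings match.
equal_mask = packed_byte_compare(b"hello, simd str!", b"hello, SIMD str!")
```

In hardware, the sixteen comparisons happen in parallel rather than in a loop; the loop here only stands in for the sixteen parallel byte lanes.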
Processing core 170 may also include additional circuitry (not shown) which may be unnecessary to the understanding of embodiments of the present disclosure.

[0063] In operation, main processor 166 executes a stream of data processing instructions that control data processing operations of a general type including interactions with cache memory 167 and input/output system 168. Embedded within the stream of data processing instructions may be SIMD coprocessor instructions. Decoder 165A of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 171. From coprocessor bus 171, these instructions may be received by any attached SIMD coprocessors. In this case, SIMD coprocessor 161 may accept and execute any received SIMD coprocessor instructions intended for it.

[0064] Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communications. For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames.
In one embodiment of processing core 170, main processor 166 and SIMD coprocessor 161 may be integrated into a single processing core 170 comprising an execution unit 162, a set of register files 164, and a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment.

[0065] FIGURE 2 is a block diagram of the micro-architecture for a processor 200 that may include logic circuits to perform instructions, in accordance with embodiments of the present disclosure. In some embodiments, an instruction in accordance with one embodiment may be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment, in-order front end 201 may implement a part of processor 200 that may fetch instructions to be executed and prepares the instructions to be used later in the processor pipeline. Front end 201 may include several units. In one embodiment, instruction prefetcher 226 fetches instructions from memory and feeds the instructions to an instruction decoder 228 which in turn decodes or interprets the instructions. For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro-ops or uops) that the machine may execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that may be used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, trace cache 230 may assemble decoded uops into program ordered sequences or traces in uop queue 234 for execution.
When trace cache 230 encounters a complex instruction, microcode ROM 232 provides the uops needed to complete the operation.

[0066] Some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, decoder 228 may access microcode ROM 232 to perform the instruction. In one embodiment, an instruction may be decoded into a small number of micro-ops for processing at instruction decoder 228. In another embodiment, an instruction may be stored within microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. Trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from micro-code ROM 232. After microcode ROM 232 finishes sequencing micro-ops for an instruction, front end 201 of the machine may resume fetching micro-ops from trace cache 230.

[0067] Out-of-order execution engine 203 may prepare instructions for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic 215 allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic 215 renames logic registers onto entries in a register file. The allocator 215 also allocates an entry for each uop in one of the two uop queues, one for memory operations 207 and one for non-memory operations 205, in front of the instruction schedulers: memory scheduler 209, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206.
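The decode-path choice described in paragraph [0066] can be summarized as a simple threshold: instructions needing at most four micro-ops decode directly at the decoder, while longer flows are sequenced from microcode ROM. The sketch below models only that decision; the instruction names and micro-op counts are hypothetical, chosen for illustration:

```python
# Hypothetical micro-op counts for a few instructions (illustrative only).
UOP_COUNT = {"add": 1, "load_add": 2, "push": 2, "rep_movs": 30}

DIRECT_DECODE_LIMIT = 4  # more than four micro-ops -> microcode ROM

def decode_path(instruction: str) -> str:
    """Pick the decode path as described: simple instructions decode
    directly; complex ones are sequenced from microcode ROM."""
    uops = UOP_COUNT[instruction]
    return "decoder" if uops <= DIRECT_DECODE_LIMIT else "microcode ROM"
```

A string-move flow with dozens of micro-ops would therefore come from the microcode ROM, while an ordinary add decodes directly.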
Uop schedulers 202, 204, 206 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. Fast scheduler 202 of one embodiment may schedule on each half of the main clock cycle while the other schedulers may only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

[0068] Register files 208, 210 may be arranged between schedulers 202, 204, 206 and execution units 212, 214, 216, 218, 220, 222, 224 in execution block 211. Register files 208 and 210 serve integer and floating point operations, respectively. Each register file 208, 210 may include a bypass network that may bypass or forward just-completed results that have not yet been written into the register file to new dependent uops. Integer register file 208 and floating point register file 210 may communicate data with each other. In one embodiment, integer register file 208 may be split into two separate register files, one register file for the low-order thirty-two bits of data and a second register file for the high-order thirty-two bits of data. Floating point register file 210 may include 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.

[0069] Execution block 211 may contain execution units 212, 214, 216, 218, 220, 222, 224. Execution units 212, 214, 216, 218, 220, 222, 224 may execute the instructions. Execution block 211 may include register files 208, 210 that store the integer and floating point data operand values that the micro-instructions need to execute. In one embodiment, processor 200 may comprise a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, and floating point move unit 224.
In another embodiment, floating point execution blocks 222, 224 may execute floating point, MMX, SIMD, and SSE, or other operations. In yet another embodiment, floating point ALU 222 may include a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. In various embodiments, instructions involving a floating point value may be handled with the floating point hardware. In one embodiment, ALU operations may be passed to high-speed ALU execution units 216, 218. High-speed ALUs 216, 218 may execute fast operations with an effective latency of half a clock cycle. In one embodiment, most complex integer operations go to slow ALU 220, as slow ALU 220 may include integer execution hardware for long-latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations may be executed by AGUs 212, 214. In one embodiment, integer ALUs 216, 218, 220 may perform integer operations on 64-bit data operands. In other embodiments, ALUs 216, 218, 220 may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. Similarly, floating point units 222, 224 may be implemented to support a range of operands having bits of various widths. In one embodiment, floating point units 222, 224 may operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.

[0070] In one embodiment, uop schedulers 202, 204, 206 dispatch dependent operations before the parent load has finished executing. As uops may be speculatively scheduled and executed in processor 200, processor 200 may also include circuits with logic to handle memory misses. If a data load misses in the data cache, there may be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data.
Only the dependent operations might need to be replayed and the independent ones may be allowed to complete. The schedulers and replay mechanism of one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations.

[0071] The term "registers" may refer to the on-board processor storage locations that may be used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, in some embodiments registers might not be limited to a particular type of circuit. Rather, a register may store data, provide data, and perform the functions described herein. The registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store 32-bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers may be understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, may operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology may hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types.
In one embodiment, integer and floating point data may be contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.

[0072] In the examples of the following figures, a number of data operands may be described. FIGURE 3A illustrates various packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure. FIGURE 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128-bit wide operands. Packed byte format 310 of this example may be 128 bits long and contains sixteen packed byte data elements. A byte may be defined, for example, as eight bits of data. Information for each byte data element may be stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 120 through bit 127 for byte 15. Thus, all available bits may be used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation may now be performed on sixteen data elements in parallel.

[0073] Generally, a data element may include an individual piece of data that is stored in a single register or memory location with other data elements of the same length. In packed data sequences relating to SSEx technology, the number of data elements stored in an XMM register may be 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register may be 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in FIGURE 3A may be 128 bits long, embodiments of the present disclosure may also operate with 64-bit wide or other sized operands.
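The element-count rule stated in paragraph [0073] is simple arithmetic: register width in bits divided by element width in bits. A short sketch makes the relationship concrete for the XMM and MMX cases described:

```python
def elements_per_register(register_bits: int, element_bits: int) -> int:
    """Number of packed data elements in a register: the register width
    in bits divided by the width in bits of an individual element."""
    return register_bits // element_bits

# 128-bit XMM register (SSEx packed data)
xmm_bytes  = elements_per_register(128, 8)   # sixteen packed bytes
xmm_words  = elements_per_register(128, 16)  # eight packed words
xmm_dwords = elements_per_register(128, 32)  # four packed doublewords

# 64-bit MMX register (MMX/SSE packed data)
mmx_words  = elements_per_register(64, 16)   # four packed words
```

The same rule extends to the wider register lengths mentioned later (256 bits, 512 bits, and so on).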
Packed word format 320 of this example may be 128 bits long and contains eight packed word data elements. Each packed word contains sixteen bits of information. Packed doubleword format 330 of FIGURE 3A may be 128 bits long and contains four packed doubleword data elements. Each packed doubleword data element contains thirty-two bits of information. A packed quadword may be 128 bits long and contain two packed quadword data elements.

[0074] FIGURE 3B illustrates possible in-register data storage formats, in accordance with embodiments of the present disclosure. Each packed data may include more than one independent data element. Three packed data formats are illustrated: packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contains fixed-point data elements. For another embodiment, one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements. One embodiment of packed half 341 may be 128 bits long containing eight 16-bit data elements. One embodiment of packed single 342 may be 128 bits long and contains four 32-bit data elements. One embodiment of packed double 343 may be 128 bits long and contains two 64-bit data elements. It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96-bits, 160-bits, 192-bits, 224-bits, 256-bits, 512-bits or more.

[0075] FIGURE 3C illustrates various signed and unsigned packed data type representations in multimedia registers, in accordance with embodiments of the present disclosure. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element may be stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 120 through bit 127 for byte 15.
Thus, all available bits may be used in the register. This storage arrangement may increase the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation may now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element may be the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero may be stored in a SIMD register. Signed packed word representation 347 may be similar to the unsigned packed word in-register representation 346. Note that the sixteenth bit of each word data element may be the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 may be similar to unsigned packed doubleword in-register representation 348. Note that the necessary sign bit may be the thirty-second bit of each doubleword data element.

[0076] FIGURE 3D illustrates an embodiment of an operation encoding (opcode) format 360. Furthermore, format 360 may include register/memory operand addressing modes corresponding with a type of opcode format described in the "IA-32 Intel Architecture Software Developer's Manual Volume 2: Instruction Set Reference," which is available from Intel Corporation, Santa Clara, CA on the world-wide-web (www) at intel.com/design/litcentr. In one embodiment, an instruction may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. In one embodiment, destination operand identifier 366 may be the same as source operand identifier 364, whereas in other embodiments they may be different.
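The signed representations above all follow the same convention: the most significant bit of each element is the sign indicator (bit 8 of a byte, bit 16 of a word, bit 32 of a doubleword, counting from one). In software terms this is ordinary two's-complement interpretation, which the following sketch illustrates:

```python
def as_signed(value: int, bits: int) -> int:
    """Interpret an unsigned element value as two's-complement signed.
    The top bit of the element is the sign indicator, as described in
    the signed packed representations above."""
    sign_bit = 1 << (bits - 1)
    return value - (1 << bits) if value & sign_bit else value

# The same 8-bit pattern 0xFF reads as 255 in the unsigned packed byte
# representation but as -1 in the signed packed byte representation.
unsigned_view = 0xFF
signed_view = as_signed(0xFF, 8)
```

The distinction matters only to how an instruction interprets the element: the stored bit pattern is identical in the signed and unsigned packed formats.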
In another embodiment, destination operand identifier 366 may be the same as source operand identifier 365, whereas in other embodiments they may be different. In one embodiment, one of the source operands identified by source operand identifiers 364 and 365 may be overwritten by the results of the text string comparison operations, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element. In one embodiment, operand identifiers 364 and 365 may identify 32-bit or 64-bit source and destination operands.

[0077] FIGURE 3E illustrates another possible operation encoding (opcode) format 370, having forty or more bits, in accordance with embodiments of the present disclosure. Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. An instruction according to one embodiment may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. In one embodiment, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. In one embodiment, destination operand identifier 376 may be the same as source operand identifier 374, whereas in other embodiments they may be different. For another embodiment, destination operand identifier 376 may be the same as source operand identifier 375, whereas in other embodiments they may be different. In one embodiment, an instruction operates on one or more of the operands identified by operand identifiers 374 and 375, and one or more operands identified by operand identifiers 374 and 375 may be overwritten by the results of the instruction, whereas in other embodiments, operands identified by identifiers 374 and 375 may be written to another data element in another register.
Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.

[0078] FIGURE 3F illustrates yet another possible operation encoding (opcode) format, in accordance with embodiments of the present disclosure. 64-bit single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. For another embodiment, the type of CDP instruction operations may be encoded by one or more of fields 383, 384, 387, and 388. Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor may operate on eight, sixteen, thirty-two, and 64-bit values. In one embodiment, an instruction may be performed on integer data elements. In some embodiments, an instruction may be executed conditionally, using condition field 381. For some embodiments, source data sizes may be encoded by field 383. In some embodiments, zero (Z), negative (N), carry (C), and overflow (V) detection may be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.

[0079] FIGURE 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline, in accordance with embodiments of the present disclosure. FIGURE 4B is a block diagram illustrating an in-order architecture core and circuitry for register renaming, circuitry for out-of-order issue/execution to be included in a processor, in accordance with embodiments of the present disclosure.
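The opcode formats discussed above all share one mechanical idea: operand identifiers and mode fields occupy fixed bit ranges within the encoding, and a decoder extracts them by shifting and masking. The sketch below demonstrates that extraction on a toy 16-bit layout; the field positions and widths are hypothetical, chosen only for illustration and not taken from formats 360, 370, or 380:

```python
def extract_field(encoding: int, lsb: int, width: int) -> int:
    """Pull a bit-field out of an instruction encoding by shifting the
    field down to bit 0 and masking off the bits above it."""
    return (encoding >> lsb) & ((1 << width) - 1)

# Toy 16-bit encoding: [opcode: bits 15-8][src1: bits 7-4][src2: bits 3-0]
# (a hypothetical layout, purely for illustration)
encoding = (0xAB << 8) | (0x3 << 4) | 0x7

opcode = extract_field(encoding, 8, 8)  # the operation to perform
src1   = extract_field(encoding, 4, 4)  # first operand identifier
src2   = extract_field(encoding, 0, 4)  # second operand identifier
```

Real formats such as those in FIGURES 3D through 3F differ in field count, position, and width, but the decode mechanism is the same shift-and-mask operation.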
The solid lined boxes in FIGURE 4A illustrate the in-order pipeline, while the dashed lined boxes illustrate the register renaming, out-of-order issue/execution pipeline. Similarly, the solid lined boxes in FIGURE 4B illustrate the in-order architecture logic, while the dashed lined boxes illustrate the register renaming logic and out-of-order issue/execution logic.

[0080] In FIGURE 4A, a processor pipeline 400 may include a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write-back/memory-write stage 418, an exception handling stage 422, and a commit stage 424.

[0081] In FIGURE 4B, arrows denote a coupling between two or more units and the direction of the arrow indicates a direction of data flow between those units. FIGURE 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both may be coupled to a memory unit 470.

[0082] Core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. In one embodiment, core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like.

[0083] Front end unit 430 may include a branch prediction unit 432 coupled to an instruction cache unit 434. Instruction cache unit 434 may be coupled to an instruction translation lookaside buffer (TLB) 436. TLB 436 may be coupled to an instruction fetch unit 438, which is coupled to a decode unit 440.
Decode unit 440 may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which may be decoded from, or which otherwise reflect, or may be derived from, the original instructions. The decoder may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, instruction cache unit 434 may be further coupled to a level 2 (L2) cache unit 476 in memory unit 470. Decode unit 440 may be coupled to a rename/allocator unit 452 in execution engine unit 450.

[0084] Execution engine unit 450 may include rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler units 456. Scheduler units 456 represent any number of different schedulers, including reservations stations, central instruction window, etc. Scheduler units 456 may be coupled to physical register file units 458. Each of physical register file units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. Physical register file units 458 may be overlapped by retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using one or more reorder buffers and one or more retirement register files; using one or more future files, one or more history buffers, and one or more retirement register files; using register maps and a pool of registers; etc.).
Generally, the architectural registers may be visible from the outside of the processor or from a programmer's perspective. The registers might not be limited to any known particular type of circuit. Various different types of registers may be suitable as long as they store and provide data as described herein. Examples of suitable registers include, but might not be limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. Retirement unit 454 and physical register file units 458 may be coupled to execution clusters 460. Execution clusters 460 may include a set of one or more execution units 462 and a set of one or more memory access units 464. Execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. Scheduler units 456, physical register file units 458, and execution clusters 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit, and/or execution cluster; in the case of a separate memory access pipeline, certain embodiments may be implemented in which only the execution cluster of this pipeline has memory access units 464).
It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[0085] The set of memory access units 464 may be coupled to memory unit 470, which may include a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which may be coupled to data TLB unit 472 in memory unit 470. L2 cache unit 476 may be coupled to one or more other levels of cache and eventually to a main memory.[0086] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement pipeline 400 as follows: 1) instruction fetch 438 may perform fetch and length decoding stages 402 and 404; 2) decode unit 440 may perform decode stage 406; 3) rename/allocator unit 452 may perform allocation stage 408 and renaming stage 410; 4) scheduler units 456 may perform schedule stage 412; 5) physical register file units 458 and memory unit 470 may perform register read/memory read stage 414; 6) execution cluster 460 may perform execute stage 416; 7) memory unit 470 and physical register file units 458 may perform write-back/memory-write stage 418; 8) various units may be involved in the performance of exception handling stage 422; and 9) retirement unit 454 and physical register file units 458 may perform commit stage 424.[0087] Core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA).[0088] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads) in a variety of manners.
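The exemplary stage-to-unit mapping of pipeline 400 in paragraph [0086] can be sketched as a simple ordered table. The Python below is an illustrative assumption drawn only from the text above, not the claimed hardware design; the stage and unit names merely mirror the figure's labels.

```python
# Illustrative sketch only: stage order and unit names follow the exemplary
# pipeline 400 described above; this is not the patented implementation.
PIPELINE_400 = [
    ("fetch/length decode",       "instruction fetch 438"),
    ("decode",                    "decode unit 440"),
    ("allocation/renaming",       "rename/allocator unit 452"),
    ("schedule",                  "scheduler units 456"),
    ("register read/memory read", "physical register file units 458, memory unit 470"),
    ("execute",                   "execution cluster 460"),
    ("write-back/memory-write",   "memory unit 470, physical register file units 458"),
    ("exception handling",        "various units"),
    ("commit",                    "retirement unit 454, physical register file units 458"),
]

def trace(instruction):
    """Return the ordered list of stage -> unit steps an instruction visits."""
    return [f"{instruction}: {stage} -> {unit}" for stage, unit in PIPELINE_400]
```

Walking an instruction through `trace()` reproduces the nine numbered steps in order, which can help when reasoning about which unit is responsible for each stage.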
Multithreading support may include, for example, time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof. Such a combination may include, for example, time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in Intel® Hyperthreading technology.[0089] While register renaming may be described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units 434/474 and a shared L2 cache unit 476, other embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that may be external to the core and/or the processor. In other embodiments, all of the cache may be external to the core and/or the processor.[0090] FIGURE 5A is a block diagram of a processor 500, in accordance with embodiments of the present disclosure. In one embodiment, processor 500 may include a multicore processor. Processor 500 may include a system agent 510 communicatively coupled to one or more cores 502. Furthermore, cores 502 and system agent 510 may be communicatively coupled to one or more caches 506. Cores 502, system agent 510, and caches 506 may be communicatively coupled via one or more memory control units 552. Furthermore, cores 502, system agent 510, and caches 506 may be communicatively coupled to a graphics module 560 via memory control units 552.[0091] Processor 500 may include any suitable mechanism for interconnecting cores 502, system agent 510, caches 506, and graphics module 560.
In one embodiment, processor 500 may include a ring-based interconnect unit 508 to interconnect cores 502, system agent 510, caches 506, and graphics module 560. In other embodiments, processor 500 may include any number of well-known techniques for interconnecting such units. Ring-based interconnect unit 508 may utilize memory control units 552 to facilitate interconnections.[0092] Processor 500 may include a memory hierarchy comprising one or more levels of caches within the cores, one or more shared cache units such as caches 506, or external memory (not shown) coupled to the set of integrated memory controller units 552. Caches 506 may include any suitable cache. In one embodiment, caches 506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.[0093] In various embodiments, one or more of cores 502 may perform multi-threading. System agent 510 may include components for coordinating and operating cores 502. System agent unit 510 may include, for example, a power control unit (PCU). The PCU may be or include logic and components needed for regulating the power state of cores 502. System agent 510 may include a display engine 512 for driving one or more externally connected displays or graphics module 560. System agent 510 may include an interface for communication busses for graphics. In one embodiment, the interface may be implemented by PCI Express (PCIe). In a further embodiment, the interface may be implemented by PCI Express Graphics (PEG) 514. System agent 510 may include a direct media interface (DMI) 516. DMI 516 may provide links between different bridges on a motherboard or other portion of a computer system. System agent 510 may include a PCIe bridge 518 for providing PCIe links to other elements of a computing system.
PCIe bridge 518 may be implemented using a memory controller 520 and coherence logic 522.[0094] Cores 502 may be implemented in any suitable manner. Cores 502 may be homogenous or heterogeneous in terms of architecture and/or instruction set. In one embodiment, some of cores 502 may be in-order while others may be out-of-order. In another embodiment, two or more of cores 502 may execute the same instruction set, while others may execute only a subset of that instruction set or a different instruction set.[0095] Processor 500 may include a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor, which may be available from Intel Corporation, of Santa Clara, Calif. Processor 500 may be provided by another company, such as ARM Holdings, Ltd., MIPS, etc. Processor 500 may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. Processor 500 may be implemented on one or more chips. Processor 500 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.[0096] In one embodiment, a given one of caches 506 may be shared by multiple ones of cores 502. In another embodiment, a given one of caches 506 may be dedicated to one of cores 502. The assignment of caches 506 to cores 502 may be handled by a cache controller or other suitable mechanism. A given one of caches 506 may be shared by two or more cores 502 by implementing time-slices of a given cache 506.[0097] Graphics module 560 may implement an integrated graphics processing subsystem. In one embodiment, graphics module 560 may include a graphics processor. Furthermore, graphics module 560 may include a media engine 565.
Media engine 565 may provide media encoding and video decoding.[0098] FIGURE 5B is a block diagram of an example implementation of a core 502, in accordance with embodiments of the present disclosure. Core 502 may include a front end 570 communicatively coupled to an out-of-order engine 580. Core 502 may be communicatively coupled to other portions of processor 500 through cache hierarchy 503.[0099] Front end 570 may be implemented in any suitable manner, such as fully or in part by front end 201 as described above. In one embodiment, front end 570 may communicate with other portions of processor 500 through cache hierarchy 503. In a further embodiment, front end 570 may fetch instructions from portions of processor 500 and prepare the instructions to be used later in the processor pipeline as they are passed to out-of-order execution engine 580.[00100] Out-of-order execution engine 580 may be implemented in any suitable manner, such as fully or in part by out-of-order execution engine 203 as described above. Out-of-order execution engine 580 may prepare instructions received from front end 570 for execution. Out-of-order execution engine 580 may include an allocate module 1282. In one embodiment, allocate module 1282 may allocate resources of processor 500 or other resources, such as registers or buffers, to execute a given instruction. Allocate module 1282 may make allocations in schedulers, such as a memory scheduler, fast scheduler, or floating point scheduler. Such schedulers may be represented in FIGURE 5B by resource schedulers 584. Allocate module 1282 may be implemented fully or in part by the allocation logic described in conjunction with FIGURE 2. Resource schedulers 584 may determine when an instruction is ready to execute based on the readiness of a given instruction's sources and the availability of execution resources needed to execute an instruction.
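The readiness condition just described, namely that all of an instruction's sources are ready and an execution resource of the required kind is available, can be modeled as a small predicate. The dictionary-based data model below is a hypothetical illustration, not the scheduler disclosed above.

```python
# Hypothetical model: an instruction is ready when every source register it
# reads has already been produced and a free execution unit of the required
# kind exists. The 'sources'/'unit' field names are invented for this sketch.
def is_ready(instruction, produced_registers, free_units):
    """instruction: dict with 'sources' (register names) and 'unit' (kind needed)."""
    sources_ready = all(src in produced_registers for src in instruction["sources"])
    resource_free = free_units.get(instruction["unit"], 0) > 0
    return sources_ready and resource_free
```

A scheduler such as resource schedulers 584 would, in this simplified view, dispatch only instructions for which such a predicate holds.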
Resource schedulers 584 may be implemented by, for example, schedulers 202, 204, 206 as discussed above. Resource schedulers 584 may schedule the execution of instructions upon one or more resources. In one embodiment, such resources may be internal to core 502, and may be illustrated, for example, as resources 586. In another embodiment, such resources may be external to core 502 and may be accessible by, for example, cache hierarchy 503. Resources may include, for example, memory, caches, register files, or registers. Resources internal to core 502 may be represented by resources 586 in FIGURE 5B. As necessary, values written to or read from resources 586 may be coordinated with other portions of processor 500 through, for example, cache hierarchy 503. As instructions are assigned resources, they may be placed into a reorder buffer 588. Reorder buffer 588 may track instructions as they are executed and may selectively reorder their execution based upon any suitable criteria of processor 500. In one embodiment, reorder buffer 588 may identify instructions or a series of instructions that may be executed independently. Such instructions or a series of instructions may be executed in parallel with other such instructions. Parallel execution in core 502 may be performed by any suitable number of separate execution blocks or virtual processors. In one embodiment, shared resources, such as memory, registers, and caches, may be accessible to multiple virtual processors within a given core 502. In other embodiments, shared resources may be accessible to multiple processing entities within processor 500.[00101] Cache hierarchy 503 may be implemented in any suitable manner. For example, cache hierarchy 503 may include one or more lower or mid-level caches, such as caches 572, 574 through logic block 576. In one embodiment, cache hierarchy 503 may include an LLC 595 communicatively coupled to caches 572, 574.
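The independence test that reorder buffer 588 is described as applying before allowing parallel execution can be sketched as a register-overlap check: two instructions are independent when neither writes a register the other reads or writes. Representing instructions as read/write sets is an assumption made only for this example.

```python
def independent(a, b):
    """True when neither instruction writes a register the other reads or writes.

    Each instruction is a dict with 'reads' and 'writes' register-name sets.
    A hypothetical illustration of the test a reorder buffer such as reorder
    buffer 588 might apply before permitting parallel execution.
    """
    return (not (a["writes"] & (b["reads"] | b["writes"])) and
            not (b["writes"] & a["reads"]))
```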
In another embodiment, LLC 595 may be implemented in a module 590 accessible to all processing entities of processor 500. In a further embodiment, module 590 may be implemented in an uncore module of processors from Intel Corporation. Module 590 may include portions or subsystems of processor 500 necessary for the execution of core 502 but might not be implemented within core 502. Besides LLC 595, module 590 may include, for example, hardware interfaces, memory coherency coordinators, interprocessor interconnects, instruction pipelines, or memory controllers. Access to RAM 599 available to processor 500 may be made through module 590 and, more specifically, LLC 595. Furthermore, other instances of core 502 may similarly access module 590. Coordination of the instances of core 502 may be facilitated in part through module 590.[00102] FIGURES 6-8 may illustrate exemplary systems suitable for including processor 500, while FIGURE 9 may illustrate an exemplary system on a chip (SoC) that may include one or more of cores 502. Other system designs and implementations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, may also be suitable. In general, a huge variety of systems or electronic devices that incorporate a processor and/or other execution logic as disclosed herein may be generally suitable.[00103] FIGURE 6 illustrates a block diagram of a system 600, in accordance with embodiments of the present disclosure. System 600 may include one or more processors 610, 615, which may be coupled to graphics memory controller hub (GMCH) 620.
The optional nature of additional processors 615 is denoted in FIGURE 6 with broken lines.[00104] Each processor 610, 615 may be some version of processor 500. However, it should be noted that integrated graphics logic and integrated memory control units might not exist in processors 610, 615. FIGURE 6 illustrates that GMCH 620 may be coupled to a memory 640 that may be, for example, a dynamic random access memory (DRAM). The DRAM may, for at least one embodiment, be associated with a non-volatile cache.[00105] GMCH 620 may be a chipset, or a portion of a chipset. GMCH 620 may communicate with processors 610, 615 and control interaction between processors 610, 615 and memory 640. GMCH 620 may also act as an accelerated bus interface between the processors 610, 615 and other elements of system 600. In one embodiment, GMCH 620 communicates with processors 610, 615 via a multi-drop bus, such as a frontside bus (FSB) 695.[00106] Furthermore, GMCH 620 may be coupled to a display 645 (such as a flat panel display). In one embodiment, GMCH 620 may include an integrated graphics accelerator. GMCH 620 may be further coupled to an input/output (I/O) controller hub (ICH) 650, which may be used to couple various peripheral devices to system 600. External graphics device 660 may be a discrete graphics device coupled to ICH 650 along with another peripheral device 670.[00107] In other embodiments, additional or different processors may also be present in system 600. For example, additional processors 615 may include additional processors that may be the same as processor 610, additional processors that may be heterogeneous or asymmetric to processor 610, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor.
There may be a variety of differences between the physical resources 610, 615 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst processors 610, 615. For at least one embodiment, various processors 610, 615 may reside in the same die package.[00108] FIGURE 7 illustrates a block diagram of a second system 700, in accordance with embodiments of the present disclosure. As shown in FIGURE 7, multiprocessor system 700 may include a point-to-point interconnect system, and may include a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. Each of processors 770 and 780 may be some version of processor 500, as may one or more of processors 610, 615.[00109] While FIGURE 7 may illustrate two processors 770, 780, it is to be understood that the scope of the present disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given processor.[00110] Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 may also include as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 may include P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in FIGURE 7, IMCs 772 and 782 may couple the processors to respective memories, namely a memory 732 and a memory 734, which in one embodiment may be portions of main memory locally attached to the respective processors.[00111] Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798.
In one embodiment, chipset 790 may also exchange information with a high-performance graphics circuit 738 via interface 792 over a high-performance graphics bus 739.[00112] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.[00113] Chipset 790 may be coupled to a first bus 716 via an interface 796. In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.[00114] As shown in FIGURE 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727 and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures may be possible. For example, instead of the point-to-point architecture of FIGURE 7, a system may implement a multi-drop bus or other such architecture.[00115] FIGURE 8 illustrates a block diagram of a third system 800 in accordance with embodiments of the present disclosure. Like elements in FIGURES 7 and 8 bear like reference numerals, and certain aspects of FIGURE 7 have been omitted from FIGURE 8 in order to avoid obscuring other aspects of FIGURE 8.[00116] FIGURE 8 illustrates that processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively.
For at least one embodiment, CL 872, 882 may include integrated memory controller units such as that described above in connection with FIGURES 5 and 7. In addition, CL 872, 882 may also include I/O control logic. FIGURE 8 illustrates that not only may memories 832, 834 be coupled to CL 872, 882, but also that I/O devices 814 may be coupled to control logic 872, 882. Legacy I/O devices 815 may be coupled to chipset 890.[00117] FIGURE 9 illustrates a block diagram of a SoC 900, in accordance with embodiments of the present disclosure. Similar elements in FIGURE 5 bear like reference numerals. Also, dashed-lined boxes may represent optional features on more advanced SoCs. An interconnect unit 902 may be coupled to: an application processor 910 which may include a set of one or more cores 502A-N, including respective local caches 504A-N, and shared cache units 506; a system agent unit 510; bus controller units 916; integrated memory controller units 914; a set of one or more media processors 920 which may include integrated graphics logic 908, an image processor 924 for providing still and/or video camera functionality, an audio processor 926 for providing hardware audio acceleration, and a video processor 928 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 930; a direct memory access (DMA) unit 932; and a display unit 940 for coupling to one or more external displays.[00118] FIGURE 10 illustrates a processor containing a central processing unit (CPU) and a graphics processing unit (GPU), which may perform at least one instruction, in accordance with embodiments of the present disclosure. In one embodiment, an instruction to perform operations according to at least one embodiment could be performed by the CPU. In another embodiment, the instruction could be performed by the GPU. In still another embodiment, the instruction may be performed through a combination of operations performed by the GPU and the CPU.
For example, in one embodiment, an instruction in accordance with one embodiment may be received and decoded for execution on the GPU. However, one or more operations within the decoded instruction may be performed by a CPU and the result returned to the GPU for final retirement of the instruction. Conversely, in some embodiments, the CPU may act as the primary processor and the GPU as the co-processor.[00119] In some embodiments, instructions that benefit from highly parallel, throughput processors may be performed by the GPU, while instructions that benefit from deeply pipelined architectures may be performed by the CPU. For example, graphics, scientific applications, financial applications and other parallel workloads may benefit from the performance of the GPU and be executed accordingly, whereas more sequential applications, such as operating system kernel or application code, may be better suited for the CPU.[00120] In FIGURE 10, processor 1000 includes a CPU 1005, GPU 1010, image processor 1015, video processor 1020, USB controller 1025, UART controller 1030, SPI/SDIO controller 1035, display device 1040, memory interface controller 1045, MIPI controller 1050, flash memory controller 1055, double data rate (DDR) controller 1060, security engine 1065, and I2S/I2C controller 1070. Other logic and circuits may be included in the processor of FIGURE 10, including more CPUs or GPUs and other peripheral interface controllers.[00121] One or more aspects of at least one embodiment may be implemented by representative data stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium ("tape") and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. For example, IP cores, such as the Cortex™ family of processors developed by ARM Holdings, Ltd. and Loongson IP cores developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, may be licensed or sold to various customers or licensees, such as Texas Instruments, Qualcomm, Apple, or Samsung and implemented in processors produced by these customers or licensees.[00122] FIGURE 11 is a block diagram illustrating the development of IP cores, in accordance with embodiments of the present disclosure. Storage 1130 may include simulation software 1120 and/or hardware or software model 1110. In one embodiment, the data representing the IP core design may be provided to storage 1130 via memory 1140 (e.g., hard disk), wired connection (e.g., internet) 1150 or wireless connection 1160. The IP core information generated by the simulation tool and model may then be transmitted to a fabrication facility where it may be fabricated by a third party to perform at least one instruction in accordance with at least one embodiment.[00123] In some embodiments, one or more instructions may correspond to a first type or architecture (e.g., x86) and be translated or emulated on a processor of a different type or architecture (e.g., ARM). An instruction, according to one embodiment, may therefore be performed on any processor or processor type, including ARM, x86, MIPS, a GPU, or other processor type or architecture.[00124] FIGURE 12 illustrates how an instruction of a first type may be emulated by a processor of a different type, in accordance with embodiments of the present disclosure.
In FIGURE 12, program 1205 contains some instructions that may perform the same or substantially the same function as an instruction according to one embodiment. However, the instructions of program 1205 may be of a type and/or format that is different from or incompatible with processor 1215, meaning the instructions of the type in program 1205 may not be able to be executed natively by processor 1215. However, with the help of emulation logic 1210, the instructions of program 1205 may be translated into instructions that may be natively executed by processor 1215. In one embodiment, the emulation logic may be embodied in hardware. In another embodiment, the emulation logic may be embodied in a tangible, machine-readable medium containing software to translate instructions of the type in program 1205 into the type natively executable by processor 1215. In other embodiments, emulation logic may be a combination of fixed-function or programmable hardware and a program stored on a tangible, machine-readable medium. In one embodiment, the processor contains the emulation logic, whereas in other embodiments, the emulation logic exists outside of the processor and may be provided by a third party. In one embodiment, the processor may load the emulation logic embodied in a tangible, machine-readable medium containing software by executing microcode or firmware contained in or associated with the processor.[00125] FIGURE 13 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, in accordance with embodiments of the present disclosure. In the illustrated embodiment, the instruction converter may be a software instruction converter, although the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof.
FIGURE 13 shows that a program in a high level language 1302 may be compiled using an x86 compiler 1304 to generate x86 binary code 1306 that may be natively executed by a processor with at least one x86 instruction set core 1316. The processor with at least one x86 instruction set core 1316 represents any processor that may perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. x86 compiler 1304 represents a compiler that may be operable to generate x86 binary code 1306 (e.g., object code) that may, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1316. Similarly, FIGURE 13 shows that the program in high level language 1302 may be compiled using an alternative instruction set compiler 1308 to generate alternative instruction set binary code 1310 that may be natively executed by a processor without at least one x86 instruction set core 1314 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). Instruction converter 1312 may be used to convert x86 binary code 1306 into code that may be natively executed by the processor without an x86 instruction set core 1314. This converted code might not be the same as alternative instruction set binary code 1310; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set.
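The role of instruction converter 1312 can be illustrated with a toy table-driven translator. Both instruction vocabularies below are invented for the example, and a real converter operates on binary encodings, not mnemonics; this is a sketch of the general idea only.

```python
# Toy, invented mapping from x86-style mnemonics to alternative-ISA mnemonics.
# A real converter such as instruction converter 1312 works on binary code.
X86_TO_ALT = {
    "mov":  "MOV",
    "add":  "ADD",
    "imul": "MUL",
}

def convert(x86_program):
    """Translate a list of (mnemonic, operands) pairs, failing loudly on gaps."""
    out = []
    for mnemonic, operands in x86_program:
        if mnemonic not in X86_TO_ALT:
            raise NotImplementedError(f"no translation for {mnemonic!r}")
        out.append((X86_TO_ALT[mnemonic], operands))
    return out
```

As the text notes, the converted output need not match what an alternative-ISA compiler would emit directly; it only has to accomplish the same general operation using the target instruction set.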
Thus, instruction converter 1312 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute x86 binary code 1306.[00126] FIGURE 14 is a block diagram of an instruction set architecture 1400 of a processor, in accordance with embodiments of the present disclosure. Instruction set architecture 1400 may include any suitable number or kind of components.[00127] For example, instruction set architecture 1400 may include processing entities such as one or more cores 1406, 1407 within a processor subsystem 1405 and a graphics processing unit 1415. Cores 1406, 1407 may be communicatively coupled to the rest of instruction set architecture 1400 through any suitable mechanism, such as through a bus or cache. In one embodiment, cores 1406, 1407 may be communicatively coupled through an L2 cache control 1408, which may include a bus interface unit 1409 and an L2 cache 1411. Cores 1406, 1407 and graphics processing unit 1415 may be communicatively coupled to each other and to the remainder of instruction set architecture 1400 through interconnect 1410. In one embodiment, graphics processing unit 1415 may use a video codec 1420 defining the manner in which particular video signals will be encoded and decoded for output.[00128] Instruction set architecture 1400 may also include any number or kind of interfaces, controllers, or other mechanisms for interfacing or communicating with other portions of an electronic device or system. Such mechanisms may facilitate interaction with, for example, peripherals, communications devices, other processors, or memory.
In the example of FIGURE 14, instruction set architecture 1400 may include a liquid crystal display (LCD) video interface 1425, a subscriber identity module (SIM) interface 1430, a boot ROM interface 1435, a synchronous dynamic random access memory (SDRAM) controller 1440, a flash controller 1445, and a serial peripheral interface (SPI) master unit 1450. LCD video interface 1425 may provide output of video signals from, for example, GPU 1415 and through, for example, a mobile industry processor interface (MIPI) 1490 or a high-definition multimedia interface (HDMI) 1495 to a display. Such a display may include, for example, an LCD. SIM interface 1430 may provide access to or from a SIM card or device. SDRAM controller 1440 may provide access to or from memory such as an SDRAM chip or module. Flash controller 1445 may provide access to or from memory such as flash memory or other instances of RAM. SPI master unit 1450 may provide access to or from communications modules, such as a Bluetooth module 1470, high-speed 3G modem 1475, global positioning system module 1480, or wireless module 1485 implementing a communications standard such as 802.11. Instruction set architecture 1400 may further include a power control unit 1455.[00129] FIGURE 15 is a more detailed block diagram of an instruction set architecture 1500 of a processor, in accordance with embodiments of the present disclosure. Instruction architecture 1500 may implement one or more aspects of instruction set architecture 1400. Furthermore, instruction set architecture 1500 may illustrate modules and mechanisms for the execution of instructions within a processor.[00130] Instruction architecture 1500 may include a memory system 1540 communicatively coupled to one or more execution entities 1565. Furthermore, instruction architecture 1500 may include a caching and bus interface unit such as unit 1510 communicatively coupled to execution entities 1565 and memory system 1540.
In one embodiment, loading of instructions into execution entities 1565 may be performed by one or more stages of execution. Such stages may include, for example, instruction prefetch stage 1530, dual instruction decode stage 1550, register rename stage 1555, issue stage 1560, and writeback stage 1570.

[00131] In one embodiment, memory system 1540 may include an executed instruction pointer 1580. Executed instruction pointer 1580 may store a value identifying the oldest, undispatched instruction within a batch of instructions. The oldest instruction may correspond to the lowest Program Order (PO) value. A PO may include a unique number of an instruction. Such an instruction may be a single instruction within a thread represented by multiple strands. A PO may be used in ordering instructions to ensure correct execution semantics of code. A PO may be reconstructed by mechanisms such as evaluating increments to PO encoded in the instruction rather than an absolute value. Such a reconstructed PO may be known as an "RPO." Although a PO may be referenced herein, such a PO may be used interchangeably with an RPO. A strand may include a sequence of instructions that are data dependent upon each other. The strand may be arranged by a binary translator at compilation time. Hardware executing a strand may execute the instructions of a given strand in order according to PO of the various instructions. A thread may include multiple strands such that instructions of different strands may depend upon each other. A PO of a given strand may be the PO of the oldest instruction in the strand which has not yet been dispatched to execution from an issue stage. Accordingly, given a thread of multiple strands, each strand including instructions ordered by PO, executed instruction pointer 1580 may store the oldest (illustrated by the lowest number) PO in the thread.

[00132] In another embodiment, memory system 1540 may include a retirement pointer 1582.
Retirement pointer 1582 may store a value identifying the PO of the last retired instruction. Retirement pointer 1582 may be set by, for example, retirement unit 454. If no instructions have yet been retired, retirement pointer 1582 may include a null value.

[00133] Execution entities 1565 may include any suitable number and kind of mechanisms by which a processor may execute instructions. In the example of FIGURE 15, execution entities 1565 may include ALU/multiplication units (MUL) 1566, ALUs 1567, and floating point units (FPU) 1568. In one embodiment, such entities may make use of information contained within a given address 1569. Execution entities 1565 in combination with stages 1530, 1550, 1555, 1560, 1570 may collectively form an execution unit.

[00134] Unit 1510 may be implemented in any suitable manner. In one embodiment, unit 1510 may perform cache control. In such an embodiment, unit 1510 may thus include a cache 1525. Cache 1525 may be implemented, in a further embodiment, as an L2 unified cache with any suitable size, such as zero, 128k, 256k, 512k, 1M, or 2M bytes of memory. In another, further embodiment, cache 1525 may be implemented in error-correcting code memory. In another embodiment, unit 1510 may perform bus interfacing to other portions of a processor or electronic device. In such an embodiment, unit 1510 may thus include a bus interface unit 1520 for communicating over an interconnect, intraprocessor bus, interprocessor bus, or other communication bus, port, or line.
Bus interface unit 1520 may provide interfacing in order to perform, for example, generation of the memory and input/output addresses for the transfer of data between execution entities 1565 and the portions of a system external to instruction architecture 1500.

[00135] To further facilitate its functions, bus interface unit 1510 may include an interrupt control and distribution unit 1511 for generating interrupts and other communications to other portions of a processor or electronic device. In one embodiment, bus interface unit 1510 may include a snoop control unit 1512 that handles cache access and coherency for multiple processing cores. In a further embodiment, to provide such functionality, snoop control unit 1512 may include a cache-to-cache transfer unit 1513 that handles information exchanges between different caches. In another, further embodiment, snoop control unit 1512 may include one or more snoop filters 1514 that monitor the coherency of other caches (not shown) so that a cache controller, such as unit 1510, does not have to perform such monitoring directly. Unit 1510 may include any suitable number of timers 1515 for synchronizing the actions of instruction architecture 1500. Also, unit 1510 may include an AC port 1516.

[00136] Memory system 1540 may include any suitable number and kind of mechanisms for storing information for the processing needs of instruction architecture 1500. In one embodiment, memory system 1540 may include a load store unit 1546 for storing information such as buffers written to or read back from memory or registers and a data cache 1542. In another embodiment, memory system 1540 may include a translation lookaside buffer (TLB) 1545 that provides look-up of address values between physical and virtual addresses. In yet another embodiment, bus interface unit 1520 may include a memory management unit (MMU) 1544 for facilitating access to virtual memory.
In still yet another embodiment, memory system 1540 may include a prefetcher 1543 for requesting instructions from memory before such instructions are actually needed to be executed, in order to reduce latency.

[00137] The operation of instruction architecture 1500 to execute an instruction may be performed through different stages. For example, using unit 1510, instruction prefetch stage 1530 may access an instruction through prefetcher 1543. Instructions retrieved may be stored in instruction cache 1532. Prefetch stage 1530 may enable an option 1531 for fast-loop mode, wherein a series of instructions forming a loop that is small enough to fit within a given cache are executed. In one embodiment, such an execution may be performed without needing to access additional instructions from, for example, instruction cache 1532. Determination of what instructions to prefetch may be made by, for example, branch prediction unit 1535, which may access indications of execution in global history 1536, indications of target addresses 1537, or contents of a return stack 1538 to determine which of branches 1557 of code will be executed next. Such branches may be possibly prefetched as a result. Branches 1557 may be produced through other stages of operation as described below. Instruction prefetch stage 1530 may provide instructions as well as any predictions about future instructions to dual instruction decode stage 1550.

[00138] Dual instruction decode stage 1550 may translate a received instruction into microcode-based instructions that may be executed. Dual instruction decode stage 1550 may simultaneously decode two instructions per clock cycle. Furthermore, dual instruction decode stage 1550 may pass its results to register rename stage 1555. In addition, dual instruction decode stage 1550 may determine any resulting branches from its decoding and eventual execution of the microcode.
Such results may be input into branches 1557.

[00139] Register rename stage 1555 may translate references to virtual registers or other resources into references to physical registers or resources. Register rename stage 1555 may include indications of such mapping in a register pool 1556. Register rename stage 1555 may alter the instructions as received and send the result to issue stage 1560.

[00140] Issue stage 1560 may issue or dispatch commands to execution entities 1565. Such issuance may be performed in an out-of-order fashion. In one embodiment, multiple instructions may be held at issue stage 1560 before being executed. Issue stage 1560 may include an instruction queue 1561 for holding such multiple commands. Instructions may be issued by issue stage 1560 to a particular processing entity 1565 based upon any acceptable criteria, such as availability or suitability of resources for execution of a given instruction. In one embodiment, issue stage 1560 may reorder the instructions within instruction queue 1561 such that the first instructions received might not be the first instructions executed. Based upon the ordering of instruction queue 1561, additional branching information may be provided to branches 1557. Issue stage 1560 may pass instructions to execution entities 1565 for execution.

[00141] Upon execution, writeback stage 1570 may write data into registers, queues, or other structures of instruction set architecture 1500 to communicate the completion of a given command. Depending upon the order of instructions arranged in issue stage 1560, the operation of writeback stage 1570 may enable additional instructions to be executed. Performance of instruction set architecture 1500 may be monitored or debugged by trace unit 1575.

[00142] FIGURE 16 is a block diagram of an execution pipeline 1600 for an instruction set architecture of a processor, in accordance with embodiments of the present disclosure.
Execution pipeline 1600 may illustrate operation of, for example, instruction architecture 1500 of FIGURE 15.

[00143] Execution pipeline 1600 may include any suitable combination of steps or operations. In 1605, predictions of the branch that is to be executed next may be made. In one embodiment, such predictions may be based upon previous executions of instructions and the results thereof. In 1610, instructions corresponding to the predicted branch of execution may be loaded into an instruction cache. In 1615, one or more such instructions in the instruction cache may be fetched for execution. In 1620, the instructions that have been fetched may be decoded into microcode or more specific machine language. In one embodiment, multiple instructions may be simultaneously decoded. In 1625, references to registers or other resources within the decoded instructions may be reassigned. For example, references to virtual registers may be replaced with references to corresponding physical registers. In 1630, the instructions may be dispatched to queues for execution. In 1640, the instructions may be executed. Such execution may be performed in any suitable manner. In 1650, the instructions may be issued to a suitable execution entity. The manner in which the instruction is executed may depend upon the specific entity executing the instruction. For example, at 1655, an ALU may perform arithmetic functions. The ALU may utilize a single clock cycle for its operation, as well as two shifters. In one embodiment, two ALUs may be employed, and thus two instructions may be executed at 1655. At 1660, a determination of a resulting branch may be made. A program counter may be used to designate the destination to which the branch will be made. 1660 may be executed within a single clock cycle. At 1665, floating point arithmetic may be performed by one or more FPUs. The floating point operation may require multiple clock cycles to execute, such as two to ten cycles.
At 1670, multiplication and division operations may be performed. Such operations may be performed in four clock cycles. At 1675, loading and storing operations to registers or other portions of pipeline 1600 may be performed. The operations may include loading and storing addresses. Such operations may be performed in four clock cycles. At 1680, write-back operations may be performed as required by the resulting operations of 1655-1675.

[00144] FIGURE 17 is a block diagram of an electronic device 1700 for utilizing a processor 1710, in accordance with embodiments of the present disclosure. Electronic device 1700 may include, for example, a notebook, an ultrabook, a computer, a tower server, a rack server, a blade server, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device.

[00145] Electronic device 1700 may include processor 1710 communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. Such coupling may be accomplished by any suitable kind of bus or interface, such as I2C bus, system management bus (SMBus), low pin count (LPC) bus, SPI, high definition audio (HDA) bus, Serial Advance Technology Attachment (SATA) bus, USB bus (versions 1, 2, 3), or Universal Asynchronous Receiver/Transmitter (UART) bus.
[00146] Such components may include, for example, a display 1724, a touch screen 1725, a touch pad 1730, a near field communications (NFC) unit 1745, a sensor hub 1740, a thermal sensor 1746, an express chipset (EC) 1735, a trusted platform module (TPM) 1738, BIOS/firmware/flash memory 1722, a digital signal processor 1760, a drive 1720 such as a solid state disk (SSD) or a hard disk drive (HDD), a wireless local area network (WLAN) unit 1750, a Bluetooth unit 1752, a wireless wide area network (WWAN) unit 1756, a global positioning system (GPS) 1755, a camera 1754 such as a USB 3.0 camera, or a low power double data rate (LPDDR) memory unit 1715 implemented in, for example, the LPDDR3 standard. These components may each be implemented in any suitable manner.

[00147] Furthermore, in various embodiments other components may be communicatively coupled to processor 1710 through the components discussed above. For example, an accelerometer 1741, ambient light sensor (ALS) 1742, compass 1743, and gyroscope 1744 may be communicatively coupled to sensor hub 1740. A thermal sensor 1739, fan 1737, keyboard 1746, and touch pad 1730 may be communicatively coupled to EC 1735. Speaker 1763, headphones 1764, and a microphone 1765 may be communicatively coupled to an audio unit 1764, which may in turn be communicatively coupled to DSP 1760. Audio unit 1764 may include, for example, an audio codec and a class D amplifier. A SIM card 1757 may be communicatively coupled to WWAN unit 1756. Components such as WLAN unit 1750 and Bluetooth unit 1752, as well as WWAN unit 1756 may be implemented in a next generation form factor (NGFF).

[00148] Embodiments of the present disclosure involve an instruction and processing logic for detecting numeric accumulation error associated with floating point numbers.
FIGURE 18 is an illustration of an example embodiment of a system 1800 having an instruction and logic for detecting numeric accumulation error, in accordance with embodiments of the present disclosure. System 1800 may include any suitable number and kind of elements to perform the operations described herein, including a processor, SoC, integrated circuit, or other mechanism. Furthermore, although specific elements of system 1800 may be described herein as performing a specific function, any suitable portion of system 1800 may perform the functionality described herein. For example, system 1800 may include processor 1802. Although processor 1802 is shown and described as an example in FIGURE 18, any suitable mechanism may be used. Processor 1802 may include any suitable mechanism for detecting numeric accumulation error. In one embodiment, such mechanisms may be implemented in hardware. Processor 1802 may be implemented fully or in part by the elements described in FIGURES 1-17.

[00149] In one embodiment, system 1800 may include a numeric accumulation error detection unit (NAEDU) 1826 for floating point arithmetic. System 1800 may include numeric accumulation error detection unit 1826 in any suitable portion of system 1800. In one embodiment, numeric accumulation error detection unit 1826 may be implemented as an execution unit 1822 within an in-order or out-of-order execution pipeline 1816. In another embodiment, numeric accumulation error detection unit 1826 may be implemented within intellectual property (IP) core(s) 1828 separate from main core(s) 1814 of processor 1802. Numeric accumulation error detection unit 1826 may be implemented by any suitable combination of circuitry or hardware computational logic of a processor.

[00150] Floating point units may be used in deep-learning, convolutional neural networks (CNN), and/or other applications, including mobile and desktop computing, to represent numbers in scientific notation.
This representation may include a sign designator, an exponent, and a mantissa. The mantissa may represent a fraction and an implicit or explicit leading whole number, in which the fraction and whole number are separated by a radix point, such as a decimal point or binary point. As the exponent increases, the scale of the floating point number may also increase, which may result in reduced precision. Floating point applications may perform many iterations of a floating point operation, such as an addition, subtraction, and/or multiplication. When the application begins execution, the exponent may be relatively small, such as near zero or even negative, which may result in increased precision.

[00151] For example, a binary floating-point number with an exponent of -5 and a mantissa of 1.1101 may be equal to 0.000011101 with a floating point precision of 2^-9. As the application executes, the exponent may increase, which may result in reduced precision. For example, the binary floating-point number with the mantissa of 1.1101 may eventually have an exponent of 1, or be equal to 11.101 with a floating point precision of 2^-3. If the operand 1.1101 x 2^-5 were added to the accumulated value 1.1101 x 2^1, the new accumulated value may be 1.1101011101 x 2^1. However, if the new accumulated value were stored with a floating point representation allocating four bits for the mantissa with an implicit leading bit, the four most significant bits ("1101") of the mantissa in the new accumulated value may be maintained in a floating point result, while the remaining bits ("011101") of the mantissa in the new accumulated value may be discarded or lost in the floating point result. Thus, the floating point result may be 1.1101 x 2^1, which may be the same value as the original accumulated value without the addition of the operand.
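The worked example above can be reproduced with a short Python sketch. The `quantize` helper is hypothetical, modeling a truncating store to a 4-bit mantissa; actual hardware rounding differs.

```python
import math

def quantize(x, mantissa_bits):
    # Truncate x to `mantissa_bits` fraction bits after the implicit
    # leading 1, modeling a floating point store with limited precision.
    if x == 0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))
    scale = 2 ** (exp - mantissa_bits)
    return math.floor(x / scale) * scale

old = 1.8125 * 2     # 1.1101b x 2^1  = 3.625
op = 1.8125 / 32     # 1.1101b x 2^-5 = 0.056640625
exact = old + op     # 1.1101011101b x 2^1 = 3.681640625
stored = quantize(exact, 4)
print(stored == old)   # True: the operand's contribution is lost
```

With only four mantissa bits, the stored sum is indistinguishable from the old accumulated value, exactly as in the prose example.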
As illustrated by this example, the floating point precision may not be capable of properly representing some values that require additional floating point precision, which may produce an inaccurate result.

[00152] Although a floating point application may increase floating point accuracy by increasing the number of bits available to represent a floating point value, such increases may reduce performance and/or require additional power, hardware circuitry, and/or silicon area. Thus, it may be desirable to begin execution of a floating point application with a minimal number of bits in a floating point value, and then increase the number of bits in the floating point value at a later point in time to increase the floating point accuracy if needed. Such an approach with variable floating point accuracy during the execution of a floating point application may reduce power, hardware circuitry, and silicon area, while increasing performance. To facilitate such an approach, processor 1802 may implement a numeric accumulation error detection unit 1826 to inform or notify a floating point application of an inaccurate result. Although a floating point application may determine an inexact result by checking for an inexact exception, such a notification may not provide the application with the flexibility to tune the accuracy of the result to the needs of the application. For example, an application may not be sensitive to results that are inaccurate due to the loss of several bits of data in the mantissa of a floating point result. Such an application may benefit from being able to receive notifications when the inaccuracy exceeds a threshold, rather than receiving notifications when there is any inaccuracy, including an inaccuracy of one bit.

[00153] Numeric accumulation error detection unit 1826 may include circuitry to provide the ability to detect, flag, and/or signal a numeric accumulation error caused by a floating point operation.
Although the term accumulation is used, numeric accumulation error detection unit 1826 may include circuitry to support the detection of numeric accumulation errors for addition, subtraction, and/or multiplication operations involving floating point numbers. Floating point addition, for example, may involve four steps, including aligning the exponents between the operands, adding the mantissas of the operands, normalizing the result, and rounding the result. Numeric accumulation error may occur when the result of a floating point operation represents a loss in accuracy of the actual mathematical operation. A loss in accuracy may occur, for instance, when aggregating small values into an increasingly larger sum, in which small values added later in the aggregation may not contribute to the sum. Such a lack of contribution may occur based on the standard scale of the sum, which is represented as a floating point number, and the rounding scheme for floating point operations. As described above, the scale of a floating point number is variable based on the exponent and the size of the mantissa. System 1800 may support any suitable type of floating point rounding scheme including, but not limited to, truncation, rounding up, rounding down, rounding to the nearest even value, rounding away from zero, and rounding toward zero. Numeric accumulation error detection unit 1826 may compare the rounded result to the actual result to determine the number of bits lost. The rounded result may be referred to as a floating point result or rounded floating point result, and the actual result may be referred to as a floating point sum, floating point difference, floating point product, actual floating point result, or raw floating point result.

[00154] To facilitate such a notification, numeric accumulation error detection unit 1826 may be tuned to detect errors and provide a notification according to the needs of the application.
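As a rough software model of that comparison (hypothetical helper names; the unit itself is hardware circuitry), the number of lost bits can be counted from the raw result's mantissa and the width retained after rounding:

```python
def bits_lost(raw_mantissa: str, kept_bits: int) -> int:
    # Mantissa fraction bits of the raw (unrounded) result that are
    # discarded when only `kept_bits` bits are retained after rounding.
    return max(len(raw_mantissa) - kept_bits, 0)

# Raw mantissa fraction from the earlier example: 1.1101011101 x 2^1
lost = bits_lost("1101011101", kept_bits=4)
print(lost)        # 6 bits ("011101") are discarded
print(lost >= 2)   # True: meets a two-bit notification threshold
```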
Numeric accumulation error detection unit 1826 may have a threshold associated with the amount of inaccuracy accepted or required before a numeric accumulation error notification is provided. In one embodiment, the threshold may provide the ability to enable or disable numeric accumulation error detection. In another embodiment, accumulation error detection unit 1826 may have a numeric accumulation error detection flag associated with whether numeric accumulation error detection is enabled. If the flag is set, then numeric accumulation error detection may be enabled, and if the flag is not set, then numeric accumulation error detection may not be enabled. In the alternative, if the flag is set, numeric accumulation error detection may not be enabled, and if the flag is not set, numeric accumulation error detection may be enabled.

[00155] The threshold may be defined in any manner suitable to tune the detection and notification of numeric accumulation errors. In one embodiment, the threshold may be a number. If the number of bits of inaccuracy in the mantissa of the floating point result is greater than or equal to the threshold, a numeric accumulation error notification may be provided. The threshold may span a range representing a loss in accuracy of more than one bit. The minimum of the range may be two bits. For example, if one bit in the mantissa of the floating point result is inaccurate, notification of a numeric accumulation error may not be provided. However, if two bits in the mantissa of the floating point result are inaccurate, notification of a numeric accumulation error may be provided. The maximum of the range may be based on the number of bits in the floating point representation of the operands, including the number of bits for the exponent and/or the number of bits for the mantissa, and/or the type of floating point operation.
For floating point operations involving multiplication, the floating point product may have a maximum number of bits in the mantissa equivalent to twice as many mantissa bits as the operands. For example, multiplying two floating point values with 10-bit mantissae may result in a floating point product with a 20-bit mantissa. If that floating point result is then represented with a 10-bit mantissa, 10 bits of the 20-bit mantissa from the floating point product may be discarded or lost. For floating point operations involving addition or subtraction, the floating point sum or floating point difference may have a maximum number of bits in the mantissa equivalent to 2^n, in which n is the number of bits representing the exponent. For example, a floating point difference with an exponent of 8 bits may have a maximum of 2^8, or 256, bits in the mantissa. If that floating point result is then represented with a 10-bit mantissa, 246 bits of the 256-bit mantissa from the floating point difference may be discarded or lost. In some embodiments, this maximum may be reduced to reserve one or more exponent representations for one or more special values, such as zero, negative zero, infinity, negative infinity, and not a number (NaN). For example, if an exponent of all zeros is reserved for the floating-point representation of the value 0, the maximum number of bits in the mantissa would be one less, or 255 bits. Although mantissae with specific numbers of bits are described, the mantissa may be of any suitable length for floating point representation. Moreover, the span of the range may be further narrowed to reduce the complexity of numeric accumulation error detection unit 1826. For example, the minimum of the range may be more than 2 bits and the maximum of the range may be less than 255 bits.
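The mantissa-size bounds described above can be summarized in a small sketch (hypothetical helper; the names are illustrative only):

```python
def max_result_mantissa_bits(op, mantissa_bits, exponent_bits,
                             reserved_exponents=0):
    # Upper bound on raw-result mantissa bits: 2*m for multiplication,
    # 2**e for addition/subtraction, minus any exponent encodings
    # reserved for special values such as zero or NaN.
    if op == "mul":
        return 2 * mantissa_bits
    return 2 ** exponent_bits - reserved_exponents

print(max_result_mantissa_bits("mul", 10, 8))                        # 20
print(max_result_mantissa_bits("add", 10, 8))                        # 256
print(max_result_mantissa_bits("add", 10, 8, reserved_exponents=1))  # 255
```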
Further, if a numeric accumulation error notification is provided when the number of bits of inaccuracy in the mantissa of the floating point result is greater than the threshold, the range may be adjusted downward by one bit.

[00156] In another embodiment, the threshold may be defined as a number corresponding to the percentage of the mantissa that may be inaccurate before a numeric accumulation error notification is provided. For example, the threshold may be a 4-bit value. If the threshold is set to "0000," numeric accumulation error detection may be disabled. If the threshold is set to "0001," 10% of the mantissa may be inaccurate before a numeric accumulation error notification is provided. If the threshold is set to "1010," 100% of the mantissa may be inaccurate before a numeric accumulation error notification is provided. The percentage may correspond to the mantissa of the rounded floating point result or the actual floating point result. The rounded floating point result may have a defined number of bits in its mantissa, while the actual floating point result may have a variable number of bits in its mantissa based on the number of bits in the floating point representation of the operands, including the number of bits for the exponent and/or the number of bits for the mantissa, and/or the type of floating point operation. Accordingly, a threshold set to 100% of the mantissa, for example, may correspond to 100% of the bits in the rounded floating point result's mantissa, or 100% of the actual floating point result's mantissa.

[00157] During execution of a floating point application, or any other application, instructions may be received from instruction stream 1804, which may reside within a memory subsystem of system 1800. Instruction stream 1804 may be included in any suitable portion of processor 1802 or system 1800. In one embodiment, instruction stream 1804A may be included in an SoC, system, or other mechanism.
In another embodiment, instruction stream 1804B may be included in a processor, integrated circuit, or other mechanism. Processor 1802 may include a front end 1806 to receive or retrieve instructions from any suitable location, including a cache or memory. Instructions may include instruction stream 1804. Front end 1806 may include a fetcher 1808 to fill the pipeline efficiently with possible instructions to execute. Front end 1806 may include an instruction decoder 1810 to decode an instruction into opcodes for execution, which may determine the meaning, side effects, data required, data consumed, and data to be produced for the instruction. A binary translator 1812 may be used to optimize or improve the efficiency of code.

[00158] The decoded instructions may be passed to an out-of-order or in-order execution unit in an execution pipeline 1816. Execution pipeline 1816 may include a rename and allocate unit 1818 for renaming instructions for out-of-order execution, and a reorder buffer (ROB) coextensive with a retirement unit 1824 so that instructions may appear to be retired in the order that they were received. Rename and allocate unit 1818 may further rename or allocate resources for execution of multiple instructions in parallel. Scheduler 1820 may schedule or allocate instructions to execute on execution units 1822 when inputs are available. Outputs of execution units 1822 may queue in the ROB 1824. Front end 1806 may attempt to anticipate any behaviors that will prevent instructions from executing in a sequential stream and may fetch streams of instructions that might execute. When there is, for example, a misprediction of a branch, the ROB 1824 may inform the front end 1806 and a different set of instructions might be executed instead. Front end 1806 may store data such as metadata for branch prediction. The instructions may be retired as if they were executed in program order.
Various portions of such execution pipelining may be performed by one or more cores 1814. Each core 1814 may include one or more threads or logical cores for execution.

[00159] The execution units 1822, which may include a floating point unit, may communicate with a numeric accumulation error detection unit 1826. Although numeric accumulation error detection unit 1826 is described as a portion of a core, the circuitry may reside in any suitable portion of processor 1802, including but not limited to IP core(s) 1828. Processor 1802 may recognize, either implicitly or through decoding and execution of specific instructions, that a floating point computation needs to be checked for numeric accumulation error. In such cases, the check may be offloaded to numeric accumulation error detection unit 1826. In one embodiment, numeric accumulation error detection unit 1826 may be targeted by specific instructions to be executed in instruction stream 1804. Such specific instructions may be generated by, for example, a compiler, or may be designed by a drafter of code resulting in instruction stream 1804. The instruction may be included in a library defined for execution by processor 1802 or numeric accumulation error detection unit 1826. In another embodiment, numeric accumulation error detection unit 1826 may be targeted by portions of processor 1802. For example, when processor 1802 recognizes an attempt in instruction stream 1804 to execute a floating-point instruction with a desired precision or threshold for detecting numeric accumulation error, processor 1802 may direct the floating-point instruction to numeric accumulation error detection unit 1826, or may direct execution unit(s) 1822 to interface with numeric accumulation error detection unit 1826.

[00160] Instructions 1830 may represent floating point operations that may configure or use numeric accumulation error detection unit 1826.
A numeric accumulation error precision (NAEP) control, which may also be referred to as a threshold, may be defined using a separate instruction or by including an extra operand in an instruction calculating a floating-point result. The threshold may indicate the amount of precision to be lost in the mantissa of the floating point result before notification of a numeric accumulation error is provided. The amount of precision to be lost in the mantissa may correspond to a minimum of two bits to be lost before notification of the error is provided. In one embodiment, the threshold may define the number of bits to be lost before a numeric accumulation error notification is provided. For example, a threshold setting of 8 may require a precision loss of 8 bits before a numeric accumulation error is indicated. In another embodiment, the threshold may define the percentage of total bits to be lost before a numeric accumulation error notification is provided. For example, a threshold setting of 20% may require a loss of 20 bits in a 100-bit mantissa before a numeric accumulation error is indicated.

[00161] Moreover, a numeric accumulation error for non-zero bits (NAENZ) control, which may be a bit or flag, may be defined using the same instruction used to set NAEP, a separate instruction, or by including an extra operand in an instruction calculating a floating-point result. The NAENZ control may modify detection of the numeric accumulation error. A value of 1 may correspond to modification and a value of 0 may correspond to no modification, or a value of 1 may correspond to no modification and a value of 0 may correspond to modification. In one embodiment, the NAENZ control may modify the detection of a numeric accumulation error by not counting trailing bits with the value of zero when determining whether a numeric accumulation error occurred.
In another embodiment, the NAENZ control may modify the detection of a numeric accumulation error by not counting any bits with the value of zero that reside in the bits that are lost when the actual floating point result becomes the rounded floating point result.[00162] In one embodiment, a SETCONTROLFPQQ instruction with NAEP and/or NAENZ control parameters may define the appropriate field or fields in a control register associated with numeric accumulation error control or with floating point unit control. In another embodiment, an instruction with a floating-point result may include NAEP and/or NAENZ control parameters. For example, basic floating-point instructions, such as FADD, FADDP, FIADD, FSUB, FSUBP, FISUB, FMUL, FMULP, and FIMUL, may include NAEP and/or NAENZ control parameters in addition to the typical operands, which may include two source floating-point registers and a destination floating-point register. Similarly, the ADD, SUB, and MUL instructions, or the vector forms of these instructions, VADD, VSUB, and VMUL, can include NAEP and/or NAENZ control parameters. These parameters may be included regardless of the data type of the operation, including scalar single-precision (SS), scalar double-precision (SD), packed single-precision (PS), and packed double-precision (PD). Likewise, complex arithmetic instructions, such as ADDSUBPD and ADDSUBPS, may include NAEP and/or NAENZ parameters. Finally, fused multiply-add (FMA) instructions, such as VFMADD, VFNMADD, VFMSUB, VFNMSUB, VFMADDSUB, and VFMSUBADD, may include NAEP and/or NAENZ control parameters to configure numeric accumulation error detection, including the threshold for detecting numeric accumulation errors and the non-zero control for modifying the detection of numeric accumulation errors. The FMA instructions may include any suitable data type, including those listed for vector instructions above.
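The two NAENZ embodiments above differ in which zero-valued lost bits are discounted. A hypothetical Python sketch (the helper and its modes are ours; the lost mantissa portion is modeled as a bit string):

```python
def count_lost_bits(lost, mode="all"):
    """Count lost mantissa bits; 'lost' is a bit string, e.g. '1010100'."""
    if mode == "trailing":
        # One embodiment: trailing zero-valued bits are not counted.
        return len(lost.rstrip("0"))
    if mode == "nonzero":
        # Another embodiment: no zero-valued lost bit is counted at all.
        return lost.count("1")
    return len(lost)  # NAENZ off: every lost bit counts

print(count_lost_bits("1010100"))              # 7
print(count_lost_bits("1010100", "trailing"))  # 5
print(count_lost_bits("1010100", "nonzero"))   # 3
```

The same lost portion thus yields three different counts depending on how the NAENZ control is interpreted, which is why the control changes whether a given loss crosses the NAEP threshold.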
The FMA instruction may also include any suitable number of source operands and any suitable order in which to process the source operands, including an FMA instruction that may compute the floating point sum of two source operands and then may compute the floating point product of the floating point sum and a third source operand. These parameters, NAEP and/or NAENZ, along with any other parameters of the instructions defining a floating-point result, may be in any suitable form, including parameter flags for the floating point instruction, explicit parameters, required parameters, optional parameters with an assumed default value, or inherent parameters stored in registers or other known locations that do not require the information to be explicitly passed as a parameter.[00163] Although various operations are described in this disclosure as performed by specific components of processor 1802, the functionality may be performed by any suitable portion of processor 1802.[00164] FIGURE 19A illustrates a block diagram with selected elements from a floating point unit for accumulating floating point numbers, in accordance with embodiments of the present disclosure. Although accumulation with addition is shown, numeric accumulation errors may occur with any suitable floating point operation, including addition, subtraction, and/or multiplication. Accumulation of floating point numbers may occur in floating point unit 1900, which may include a floating point addition unit 1918A. Floating point addition unit 1918A may have one floating point output, which may be new accumulated value 1920, and two floating point inputs: operand 1902A and old accumulated value 1912. Although two inputs are shown, floating point addition unit 1918A may support any suitable number of inputs for floating point addition.[00165] Operand 1902A may include an exponent 1904A and a mantissa 1906A. The exponent may be an absolute value, or may represent an offset from an absolute value.
The mantissa may include an implicit or explicit leading whole number, such as an integer. In the example shown in FIGURE 19A, operand 1902A includes exponent 1904A with a value of 7, and mantissa 1906A with a value of 1.1001010110. Thus, operand 1902A may be represented as 1.1001010110 x 2^7, or 11001010.110. Although an exponent of 7 and a particular mantissa is shown, any exponent or mantissa suitable for floating point representation may be used.[00166] Operand 1902A may be added to old accumulated value 1912, which may include exponent 1914 and mantissa 1916. Exponent 1914 may have a value of 15, and mantissa 1916 may have a value of 1.1101111011. Old accumulated value 1912 may be represented as 1.1101111011 x 2^15, or 1110111101100000, which represents the value 61,280 in decimal. Old accumulated value 1912 may have originally been a smaller number that increased in value over time as additional floating point numbers were accumulated together. Although old accumulated value is shown with an exponent of 15 and a specific mantissa, any exponent or mantissa suitable for floating point representation may be used.[00167] As noted above, floating point addition may include four steps. The first step may include aligning the exponents of the source operands. FIGURE 19B illustrates a block diagram with selected elements from an execution unit for detecting numeric accumulation error in floating point numbers with aligned exponents, in accordance with embodiments of the present disclosure. For example, accumulation of floating point numbers may occur in an execution unit 1930, such as execution unit(s) 1822, floating point unit, or numeric accumulation error detection unit 1826, which may include a floating point addition unit 1918B that expects floating point source operands with aligned exponents.
Although unit 1918B expects aligned exponents, unit 1918B may perform the alignment of the exponents itself.[00168] The alignment of exponents between source floating point operands may require the adjustment of the exponent and mantissa for one or more inputs. In the example shown, inputs with smaller exponents may be aligned with the input having the largest exponent. Although such an alignment to the largest value is shown, it may be possible to align to the lowest input or to an input falling between other inputs in magnitude. Operand 1902B, for instance, is shown with an aligned exponent 1904B and an aligned mantissa 1906B. Aligned exponent 1904B is the same value as the exponent 1914 for the old accumulated value input 1912. After alignment, aligned mantissa 1906B has a leading digit or bit of 0. Mantissa 1906A is shown shifted to the right by the difference between the exponents (i.e., 15 - 7 = 8). Accordingly, mantissa 1906B includes eight leading zeros and the original mantissa as 1910.[00169] Operand 1902B may be added to old accumulated value 1912 to produce new accumulated value 1920. New accumulated value 1920 may have an exponent 1922, shown with a value of 15, and a mantissa 1926. Mantissa 1926 may need to be rounded to store the result, such that the new accumulated value has the same size representation as the old accumulated value. The rounding may result in only a portion of the mantissa being stored. For example, the most significant portion 1924 of mantissa 1926 may be retained while the least significant portion 1928 may be lost. After rounding, the new accumulated value 1920 may include exponent 1922 and the most significant portion of the mantissa 1924, and may exclude the least significant portion of the mantissa 1928. In this case, only three bits (1908) from operand 1902B may affect the new accumulated value 1920.
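The alignment step above can be replayed with plain integers, using the example values from FIGURES 19A and 19B (the variable names are ours; each mantissa, including its leading 1, is written as an 11-bit integer):

```python
# 1.1001010110 x 2^7 (operand 1902A) and 1.1101111011 x 2^15
# (old accumulated value 1912), mantissae as 11-bit integers.
op_mant, op_exp = 0b11001010110, 7
acc_mant, acc_exp = 0b11101111011, 15

shift = acc_exp - op_exp                # 15 - 7 = 8: align to the larger exponent
aligned = op_mant >> shift              # what survives in an 11-bit significand
lost = op_mant & ((1 << shift) - 1)     # the eight bit positions shifted out

print(bin(aligned))   # 0b110: matches the three bits (1908) noted above
print(shift)          # 8 bit positions of operand 1902A fall below the result
```

The eight shifted-out positions are exactly the precision that rounding can discard, which is what the NAEP threshold is compared against.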
Although rounding is shown as truncation of the mantissa, any suitable form of rounding may be performed, including rounding up, rounding down, rounding to the nearest even value, rounding away from zero, and rounding toward zero.[00170] While an absolute loss in precision may be handled by signaling that the result is inexact, such a signal does not define the significance of the loss, or the number of bits lost. Some floating point applications may be fault tolerant, in which a loss of several bits of precision, such as the eight bits in the least significant portion of the mantissa 1926, is not important. Accordingly, a numeric accumulation error detection unit may include NAEP control and NAENZ control to direct how many bits in the least significant portion of the mantissa of the rounded floating point result can be lost before an error notification is provided. In the example shown, if the NAEP control directs that 8 bits can be lost, the numeric accumulation error detection unit may not indicate an error for new accumulated value 1920. However, if the NAEP control directs that 3 bits can be lost, the numeric accumulation error detection unit may indicate an error for new accumulated value 1920 because more than three bits are lost in the rounded floating point result. Although this illustration shows NAEP control directing that 3 bits and 8 bits can be lost, NAEP control may direct that any number of bits equal to or greater than 2 bits can be lost for floating point operations. Moreover, although a fixed number of bits is illustrated, the NAEP control may be defined as a percentage of the mantissa for the rounded floating point result or the actual floating point result.[00171] An application using floating point operations may use lower-precision floating point values until an exception is raised due to the detection of a numeric accumulation error or until the application checks a register with a flag whose value indicates a numeric accumulation error.
In one embodiment, the application may respond to the exception by using higher-precision floating point values. In another embodiment, the application may store the result prior to the numeric accumulation error in an intermediate location, and use the stored result in an operation resulting in a higher-precision floating point result.[00172] As noted previously, although a floating point addition operation is shown in FIGURES 19A and 19B, any suitable operation for floating point applications may be used, including addition, subtraction, and/or multiplication. Multiplication, for example, may result in even greater losses in information stored in the mantissae of the source operands. Multiplication may result in a mantissa of the product that is twice the size of the source mantissae (e.g., two floating point numbers with a mantissa of 10 bits would result in a product with a mantissa of 20 bits). To store the product with the same sized floating point representation as the operands, the lower half of the mantissa may be discarded. Before being discarded, the numeric accumulation error detection unit may evaluate the discarded bits of the mantissa to determine whether to provide notification of a numeric accumulation error. In one embodiment, the evaluation may include determining the number of bits discarded from the mantissa. In another embodiment, the evaluation may include determining the number of non-zero bits discarded from the mantissa. In a further embodiment, the evaluation may include determining the number of leading non-zero bits discarded from the mantissa. For example, the evaluation may ignore the trailing zero-valued bits discarded from the mantissa and may then determine the number of bits discarded.[00173] FIGURE 20 is a block diagram of an example method 2000 for detecting a numeric accumulation error, according to embodiments of the present disclosure. Method 2000 may be implemented by any of the elements shown in FIGURES 1-19.
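The doubling of the product mantissa described above can be made concrete with integer mantissae. This sketch is ours; it uses the text's 10-bit example width and treats each mantissa as a plain 10-bit integer:

```python
a = 0b1111111111          # a full 10-bit mantissa in integer form
b = 0b1111111111
product = a * b

print(product.bit_length())           # 20: the product mantissa doubles in width

discarded = product & ((1 << 10) - 1)  # lower half dropped to fit the operand size
print(bin(discarded).count("1"))       # non-zero bits among the discarded half
```

The discarded half is what the numeric accumulation error detection unit would evaluate, either by total bit count or by counting only the non-zero bits.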
Method 2000 may be initiated by any suitable criteria and may initiate operation at any suitable point. In one embodiment, method 2000 may initiate operation at 2005. Method 2000 may include greater or fewer steps than those illustrated. Moreover, method 2000 may execute its steps in an order different from that illustrated in FIGURE 20. Method 2000 may terminate at any suitable step. Furthermore, method 2000 may repeat operation at any suitable step. Method 2000 may perform any of its steps in parallel with other steps of method 2000, or in other methods.[00174] At 2005, in one embodiment at least one instruction, which may compute a floating point result, may be received. The instructions may be received, decoded, and executed. The instructions may specify that numeric accumulation error detection unit 1826 is to determine whether numeric accumulation error occurred during the execution of the instruction. In another embodiment, an explicit instruction may be received to enable detection of numeric accumulation error before an instruction to compute a floating-point result. The explicit instruction may be received, decoded, and executed. [00175] In one embodiment, an instruction may specifically designate handling by a numeric accumulation error detection unit 1826. In another embodiment, it may be determined that an instruction can be handled by a numeric accumulation error detection unit. Inputs relevant to detecting numeric accumulation error may be handed off to a numeric accumulation error detection unit for processing. Method step 2005 may be performed by, for instance, a front end, a core, an execution unit, or other suitable elements of a processor.[00176] At 2010, in one embodiment a numeric accumulation error precision threshold may be determined. The determination may be based on the instruction, a previous instruction, or a floating point control register.
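One way the threshold of step 2010 might be read from a floating point control register is a simple bit-field extract. The field position and width below are entirely hypothetical; the text does not define a register layout:

```python
NAEP_SHIFT, NAEP_MASK = 8, 0xFF   # hypothetical field occupying bits 8..15

def naep_threshold(fpcr):
    """Extract the NAEP threshold field from a control register value."""
    return (fpcr >> NAEP_SHIFT) & NAEP_MASK

print(naep_threshold(0x0800))   # a register value encoding a threshold of 8
```

The same pattern applies whether the threshold comes from a general purpose register, a control register field, or an immediate value; only the source of the raw value changes.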
A detected numeric accumulation error may need to be equal to or greater than the threshold in order for method 2000 to provide notification that a numeric accumulation error has occurred. Notification may include any suitable mechanism, including signaling an interrupt or writing a value to a flag. In one embodiment, the threshold may be static or fixed by the processor using any suitable mechanism, including circuitry and microcode. If the threshold is static or fixed, method 2000 may not need to determine its value. For example, if the threshold is fixed to the value of 2, method 2000 would not need to determine its value. Rather, for a numeric accumulation error precision threshold of 2, method 2000 would be able to provide notification of a numeric accumulation error if the amount of precision lost is equal to or greater than 2 bits. Although 2 bits are described, the threshold may be static or fixed to any suitable value compared to the amount of precision lost in the mantissa of the floating point result, in which the amount of precision lost may correspond to a plurality of bits lost in the mantissa of the floating point result. In another embodiment, the threshold may be determined from a general purpose register. In a further embodiment, the threshold may be determined from a field in the floating point control register. In yet another embodiment, the threshold may be determined from an immediate value associated with the at least one instruction. The threshold may represent any suitable type of limit corresponding to an amount of precision lost in the mantissa of the floating point result, including the number of bits lost and the percentage of bits lost.[00177] At 2015, in one embodiment a numeric accumulation error non-zero precision flag may be determined. The determination may be based on the instruction, a previous instruction, or a floating point control register.
The determination may occur in the numeric accumulation error detection unit 1826 or in any other suitable part of processor 1802, including execution unit(s) 1822. The non-zero precision flag may indicate that all types of data should be monitored or non-zero data should be monitored. A flag set to monitor all types of data may detect numeric accumulation errors for bits that are either zero or one in value. A flag set to monitor non-zero data may detect numeric accumulation errors for bits that are non-zero. The bits with a value of zero may be ignored by any suitable mechanism, including masking of the bits or shifting of the bits. In one embodiment, the non-zero precision flag may be used to control whether to ignore at least one trailing bit, or at least one least significant bit, of the mantissa of the floating point result with a value of zero. In another embodiment, the non-zero precision flag may be used to control whether to ignore bits of the mantissa of the floating point result with a value of zero, whether those bits are associated with the least significant bits or the most significant bits in the mantissa.[00178] At 2020, in one embodiment a floating point operation may be performed to determine a floating point result. The floating point operation may be computed by an execution unit, such as a floating point unit, or by logic in the IP cores 1828. Accordingly, the floating point result, including the associated floating point exponent and mantissa, may be a function of the related calculation. The related calculation may include the addition, subtraction, and/or multiplication of source values in which the result may use floating point representation. At 2025, the floating point result may be rounded. Rounding may enable the floating point result to be represented in floating point format with a limited number of bits.
For example, the number of bits for the floating point format may be limited to the same number of bits as the operands of the floating point operation. Rounding may include any suitable form of rounding, including truncation, rounding up, rounding down, rounding to the nearest even value, rounding away from zero, and rounding toward zero. At 2030, the rounded floating point result may be stored. Storage may include any suitable location, including a temporary location, a register, a cache, a queue, or a memory subsystem. Storage may further reduce the number of bits representing the floating point value. For example, a 64-bit floating point result may be reduced to 32 bits for storage.[00179] At 2035, in one embodiment it may be determined whether a non-zero precision flag is set to monitor non-zero data. Method 2000 may proceed to method step 2045 if the flag is set. Otherwise, method 2000 may proceed to method step 2040. At 2040, in one embodiment the amount of precision lost for all types of data in the result of the floating point operation may be computed. The computation may involve rounding the result of the floating point operation before computing the amount of precision lost. The rounding of the result may use the same or a different rounding mode as the one performed on the result before storing the result. In one embodiment, computing the amount of precision lost may include counting the number of bits of precision lost. In another embodiment, computing the amount of precision lost may include computing a percentage of bits of precision lost, which may include counting the number of bits of precision lost and dividing such a count by the total number of bits in the floating point mantissa, or in any other portion of a floating point number. At 2045, in one embodiment the amount of non-zero precision lost in the result of the floating point operation may be computed.
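The loss introduced by rounding and storing (steps 2025 and 2030) can be observed directly by storing a 64-bit value in a 32-bit representation. A small Python illustration using the standard struct module:

```python
import struct

x = 1.0 + 2**-30     # exact as a 64-bit double (needs 30 fraction bits)
stored = struct.unpack("f", struct.pack("f", x))[0]  # round-trip via float32

print(x == stored)   # False: float32 keeps only 23 fraction bits
print(stored)        # 1.0: the low-order mantissa bits were lost
```

Here the entire 2^-30 term is below the 23-bit fraction of single precision, so it is silently discarded on store, which is exactly the kind of loss steps 2040 and 2045 quantify.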
In one embodiment, the trailing bits of the result of the floating point operation may be truncated before computing the amount of precision lost in the result of the floating point operation. In another embodiment, computing the amount of precision lost may include counting the number of non-zero bits lost in the result.[00180] At 2050, in one embodiment it may be determined whether the amount of precision lost is greater than the threshold, which may be determined in method step 2010. The threshold may default to zero, which may disable numeric accumulation error detection, or to a value corresponding to the amount of precision lost before notification of a numeric accumulation error is provided. Although a comparison of greater than or equal to is described, the comparison may be any type of comparison suitable to determine whether a threshold is met and/or exceeded, including comparisons for values less than, less than or equal to, or greater than or equal to the threshold. The comparison may involve comparing two numbers or two percentages corresponding to an amount of precision lost. Method 2000 may proceed to method step 2070 if the amount of precision lost is not greater than the threshold. Otherwise, a numeric accumulation error has occurred and method 2000 may proceed to method step 2055.[00181] At 2055, in one embodiment a value may be written to a flag, which may be for notification that a numeric accumulation error has occurred. The value may be based on the determination that a numeric accumulation error occurred. The flag may be initialized for all floating point instructions to indicate that no numeric accumulation error occurred and then may be updated after a floating point operation using numeric accumulation error detection unit 1826. The flag may reside within a status register field, which may correspond to the execution of floating point instructions.
A programmer may check the status of the flag in the status register field to determine the presence of the numeric accumulation error. In one embodiment, the flag may include a register containing the amount of precision lost, which may represent the number of bits of precision lost or a percentage of precision lost. For example, if three bits of precision are lost, the register may include a value indicating that three bits are lost.[00182] An application or programmer may use the status of the flag, or the register information to continue execution. In one embodiment, the application or programmer may respond to a numeric accumulation error by switching execution to floating point operations with a higher precision. For instance, if a numeric accumulation error occurs as a result of a 64-bit floating point operation, the application or programmer may switch to computing 128-bit floating point results. In another embodiment, the application or programmer may respond to a numeric accumulation error by storing the previous or old accumulation value in an intermediate location, such as a register, cache, or memory, and may continue performing floating point operations with the same precision. For example, if a numeric accumulation error occurs as a result of a 64-bit floating point operation, the application or programmer may store the previous result, before the operation that caused the numeric accumulation error, in an intermediate location, and may begin performing 64-bit floating point operations with an initialized value. If the floating point operation is floating point addition, for instance, the initialized value may be zero. The application or programmer may then perform a 128-bit floating point operation including the intermediate value and the floating point result corresponding to the initialized value.[00183] At 2060, in one embodiment, it may be determined whether a mask bit is set to control whether to signal a numeric accumulation error exception. 
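The second response described in paragraph [00182], storing the old accumulated value in an intermediate location and restarting accumulation from an initialized value, can be sketched as follows. This is our construction; it uses float64 absorption of small addends as a stand-in for a detected numeric accumulation error:

```python
big = 1e16            # ulp(1e16) is 2.0, so adding 1.0 is absorbed

naive = big
for _ in range(1000):
    naive += 1.0      # every addition is lost to rounding

intermediate = big    # park the old accumulated value
restart = 0.0         # re-initialize (zero for floating point addition)
for _ in range(1000):
    restart += 1.0    # accumulates exactly at this small magnitude
recovered = intermediate + restart

print(naive == big)      # True: the naive sum never moved
print(recovered - big)   # 1000.0 recovered by the split accumulation
```

Combining the parked intermediate value with the restarted partial sum in a single final operation recovers the contribution that the naive running sum discarded.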
The mask bit may be set by the compiler of the code or the drafter of the code using any suitable mechanism including parameter flags for the floating point instruction, explicit parameters, required parameters, optional parameters with an assumed default value, or inherent parameters stored in registers or other known locations that do not require the information to be explicitly passed as a parameter. The mask bit may reside in a control field, such as in the MXCSR control field, corresponding to the execution of floating point instructions.[00184] At 2065, in one embodiment a numeric accumulation error may be signaled by reporting an exception based on the determination that the mask bit is not set. If the mask bit is set, however, the numeric accumulation error may not be reported. In one embodiment, the value written in method step 2055 may be controlled by a mechanism, such as the numeric accumulation error precision threshold, separate from the signaling of an exception in method step 2065. In another embodiment, the mask bit may control both the value written in method step 2055 and the signaling of an exception in method step 2065.[00185] At 2070, the instruction may be retired by, for example, a retirement unit. Method 2000 may optionally repeat or terminate.[00186] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the disclosure may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.[00187] Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion.
For purposes of this application, a processing system may include any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.[00188] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.[00189] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.[00190] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[00191]
Accordingly, embodiments of the disclosure may also include non- transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.[00192] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part-on and part-off processor.[00193] Thus, techniques for performing one or more instructions according to at least one embodiment are disclosed. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on other embodiments, and that such embodiments not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. 
In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.[00194] In some embodiments of the present disclosure, a processor may include circuitry to decode at least one instruction and an execution unit. The instruction may be to compute a floating point result. The execution unit may include circuitry to execute the instruction to determine the floating point result, compute an amount of precision lost in a mantissa of the floating point result, compare the amount of precision lost to a numeric accumulation error precision threshold, determine whether the numeric accumulation error occurred, and write a value to a flag based on the determination that the numeric accumulation error occurred. The amount of precision lost may correspond to a plurality of bits lost in a mantissa of the floating point result. The determination whether the numeric accumulation error occurred may be based on the comparison between the amount of precision lost in the mantissa of the floating point result and the numeric accumulation error precision threshold. The flag may be for notification that the numeric accumulation error occurred.[00195] In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register. The comparison between the amount of precision lost and the numeric accumulation error precision threshold may be based on the numeric accumulation error precision threshold that is determined. 
In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine whether a mask bit is set to prevent signaling of the numeric accumulation error and signal an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred. In combination with any of the above embodiments, in an embodiment the execution unit includes circuitry to round the floating point result and store the rounded floating point result. The amount of precision lost in the mantissa of the floating point result may represent a percentage of bits in the mantissa of the floating point result that may be lost. The percentage of bits that may be lost may include a percentage of bits that may be lost when the floating point result is rounded and/or a percentage of bits that may be lost when the rounded floating point result is stored. In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine a numeric accumulation error non-zero precision flag and to control whether to ignore at least one trailing bit of the mantissa of the floating point result with a value of zero. The numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The circuitry to control may use the numeric accumulation error non-zero precision flag. The computation of the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine a numeric accumulation error non-zero precision flag and to control whether to ignore bits of the mantissa of the floating point result with a value of zero. 
The numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The circuitry to control may use the numeric accumulation error non-zero precision flag. The computation of the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the floating point result may be computed from source values and the instruction may be a fused multiply-add instruction. The circuitry to execute the instruction may include circuitry to compute a sum based on the source values and compute the floating point result based on the sum and at least one of the source values.

[00196] In some of the present embodiments, a method may include decoding at least one instruction, the instruction for computing a floating point result, executing the instruction to determine the floating point result, computing the amount of precision lost in a mantissa of the floating point result, comparing the amount of precision lost in the mantissa of the floating point result to a numeric accumulation error precision threshold, determining whether the numeric accumulation error occurred, and writing a value to a flag. The amount of precision lost may correspond to a plurality of bits lost in the mantissa of the floating point result. Determining whether the numeric accumulation error occurred may be based on the comparison between the amount of precision lost and the numeric accumulation error precision threshold. The value to be written to the flag may be based on the determination that the numeric accumulation error occurred. The flag may be for notification that the numeric accumulation error occurred.
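By way of illustration only, the precision-loss check described in the embodiments above can be modeled in software. This sketch assumes a double-precision source value rounded to a destination with `dest_mantissa_bits` explicit fraction bits; the function names are illustrative, and trailing zero bits are ignored in the spirit of the non-zero precision flag behavior. It is a model of the described check, not the actual hardware circuitry.

```python
import math

def bits_lost_on_round(value: float, dest_mantissa_bits: int) -> int:
    """Count significand bits of a double that cannot be kept when the
    result is stored with dest_mantissa_bits explicit fraction bits.
    Trailing zero bits carry no information and are ignored, mirroring
    the behavior selected by the non-zero precision flag."""
    if value == 0.0 or math.isinf(value) or math.isnan(value):
        return 0
    mantissa, _ = math.frexp(abs(value))   # mantissa lies in [0.5, 1.0)
    m_int = int(mantissa * (1 << 53))      # all 53 significand bits as an int
    while m_int % 2 == 0:                  # drop trailing zero bits
        m_int //= 2
    bits_needed = m_int.bit_length()
    # destination holds 1 implicit leading bit + dest_mantissa_bits fraction bits
    return max(0, bits_needed - (dest_mantissa_bits + 1))

def numeric_accumulation_error(value: float, dest_mantissa_bits: int,
                               threshold_bits: int) -> bool:
    """Flag decision: did rounding lose more bits than the threshold allows?"""
    return bits_lost_on_round(value, dest_mantissa_bits) > threshold_bits
```

For example, 1.0 + 2^-20 needs 21 significand bits; stored with a 10-bit fraction (11 usable bits), 10 bits are lost, so a threshold of 4 raises the flag while a threshold of 16 does not.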
[00197] In combination with any of the above embodiments, the method may include determining the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register. The comparison between the amount of precision lost and the numeric accumulation error precision threshold may use the numeric accumulation error precision threshold that is determined. In combination with any of the above embodiments, in an embodiment the method may include determining whether a mask bit is set to prevent signaling of the numeric accumulation error and signaling an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred. In combination with any of the above embodiments, in an embodiment the method may include rounding the floating point result and storing the rounded floating point result. The amount of precision lost in the mantissa of the floating point result may represent a percentage of bits in the mantissa of the floating point result that may be lost. The percentage of bits that may be lost may include a percentage of bits that may be lost when rounding the floating point result or a percentage of bits that may be lost when storing the rounded floating point result. In combination with any of the above embodiments, in an embodiment the method may include determining a numeric accumulation error non-zero precision flag and controlling whether to ignore at least one trailing bit of the mantissa with a value of zero. Determining the numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. Controlling whether to ignore at least one trailing bit of the mantissa may use the numeric accumulation error non-zero precision flag. 
Computing the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the method may include determining a numeric accumulation error non-zero precision flag and controlling whether to ignore bits of the mantissa of the floating point result with a value of zero. Determining the numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. Controlling whether to ignore bits of the mantissa may use the numeric accumulation error non-zero precision flag. Computing the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the floating point result may be computed from source values and the instruction may be a fused multiply-add instruction. Executing the instruction may include computing a sum based on the source values and computing the floating point result based on the sum and at least one of the source values.

[00198] In some embodiments of the present disclosure, a system may include circuitry to decode at least one instruction and an execution unit. The instruction may be to compute a floating point result. The execution unit may include circuitry to execute the instruction to determine the floating point result, compute an amount of precision lost in a mantissa of the floating point result, compare the amount of precision lost to a numeric accumulation error precision threshold, determine whether the numeric accumulation error occurred, and write a value to a flag based on the determination that the numeric accumulation error occurred. The amount of precision lost may correspond to a plurality of bits lost in a mantissa of the floating point result.
The determination whether the numeric accumulation error occurred may be based on the comparison between the amount of precision lost in the mantissa of the floating point result and the numeric accumulation error precision threshold. The flag may be for notification that the numeric accumulation error occurred.

[00199] In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register. The comparison between the amount of precision lost and the numeric accumulation error precision threshold may be based on the numeric accumulation error precision threshold that is determined. In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine whether a mask bit is set to prevent signaling of the numeric accumulation error and signal an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred. In combination with any of the above embodiments, in an embodiment the execution unit includes circuitry to round the floating point result and store the rounded floating point result. The amount of precision lost in the mantissa of the floating point result may represent a percentage of bits in the mantissa of the floating point result that may be lost. The percentage of bits that may be lost may include a percentage of bits that may be lost when the floating point result is rounded and/or a percentage of bits that may be lost when the rounded floating point result is stored.
In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine a numeric accumulation error non-zero precision flag and to control whether to ignore at least one trailing bit of the mantissa of the floating point result with a value of zero. The numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The circuitry to control may use the numeric accumulation error non-zero precision flag. The computation of the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine a numeric accumulation error non-zero precision flag and to control whether to ignore bits of the mantissa of the floating point result with a value of zero. The numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The circuitry to control may use the numeric accumulation error non-zero precision flag. The computation of the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the floating point result may be computed from source values and the instruction may be a fused multiply-add instruction. 
The circuitry to execute the instruction may include circuitry to compute a sum based on the source values and compute the floating point result based on the sum and at least one of the source values.

[00200] In some embodiments of the present disclosure, an execution unit may include circuitry to execute at least one instruction to determine the floating point result, compute an amount of precision lost in a mantissa of the floating point result, compare the amount of precision lost to a numeric accumulation error precision threshold, determine whether the numeric accumulation error occurred, and write a value to a flag based on the determination that the numeric accumulation error occurred. The amount of precision lost may correspond to a plurality of bits lost in a mantissa of the floating point result. The determination whether the numeric accumulation error occurred may be based on the comparison between the amount of precision lost in the mantissa of the floating point result and the numeric accumulation error precision threshold. The flag may be for notification that the numeric accumulation error occurred.

[00201] In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register. The comparison between the amount of precision lost and the numeric accumulation error precision threshold may be based on the numeric accumulation error precision threshold that is determined. In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine whether a mask bit is set to prevent signaling of the numeric accumulation error and signal an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred.
In combination with any of the above embodiments, in an embodiment the execution unit includes circuitry to round the floating point result and store the rounded floating point result. The amount of precision lost in the mantissa of the floating point result may represent a percentage of bits in the mantissa of the floating point result that may be lost. The percentage of bits that may be lost may include a percentage of bits that may be lost when the floating point result is rounded and/or a percentage of bits that may be lost when the rounded floating point result is stored. In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine a numeric accumulation error non-zero precision flag and to control whether to ignore at least one trailing bit of the mantissa of the floating point result with a value of zero. The numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The circuitry to control may use the numeric accumulation error non-zero precision flag. The computation of the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the execution unit may include circuitry to determine a numeric accumulation error non-zero precision flag and to control whether to ignore bits of the mantissa of the floating point result with a value of zero. The numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The circuitry to control may use the numeric accumulation error non-zero precision flag. The computation of the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. 
In combination with any of the above embodiments, in an embodiment the floating point result may be computed from source values and the instruction may be a fused multiply-add instruction. The circuitry to execute the instruction may include circuitry to compute a sum based on the source values and compute the floating point result based on the sum and at least one of the source values.

[00202] In some of the present embodiments, an apparatus may include a means for decoding at least one instruction, the instruction for computing a floating point result, a means for executing the instruction to determine the floating point result, a means for computing the amount of precision lost in a mantissa of the floating point result, a means for comparing the amount of precision lost in the mantissa of the floating point result to a numeric accumulation error precision threshold, a means for determining whether the numeric accumulation error occurred, and a means for writing a value to a flag. The amount of precision lost may correspond to a plurality of bits lost in the mantissa of the floating point result. The means for determining whether the numeric accumulation error occurred may be based on the means for comparing the amount of precision lost to the numeric accumulation error precision threshold. The value to be written to the flag may be based on the determination that the numeric accumulation error occurred. The flag may be a means for notification that the numeric accumulation error occurred.

[00203] In combination with any of the above embodiments, the apparatus may include a means for determining the numeric accumulation error precision threshold based on the instruction, a previous instruction, or a floating point control register. The means for comparing the amount of precision lost to the numeric accumulation error precision threshold may use the numeric accumulation error precision threshold that is determined.
In combination with any of the above embodiments, in an embodiment the apparatus may include a means for determining whether a mask bit is set to prevent signaling of the numeric accumulation error and a means for signaling an exception based on the determination that the mask bit is not set and based on the determination that the numeric accumulation error occurred. In combination with any of the above embodiments, in an embodiment the apparatus may include a means for rounding the floating point result and a means for storing the rounded floating point result. The amount of precision lost in the mantissa of the floating point result may represent a percentage of bits in the mantissa of the floating point result that may be lost. The percentage of bits that may be lost may include a percentage of bits that may be lost when rounding the floating point result or a percentage of bits that may be lost when storing the rounded floating point result. In combination with any of the above embodiments, in an embodiment the apparatus may include a means for determining a numeric accumulation error non-zero precision flag and a means for controlling whether to ignore at least one trailing bit of the mantissa with a value of zero. The means for determining the numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The means for controlling whether to ignore at least one trailing bit of the mantissa may use the numeric accumulation error non-zero precision flag. The means for computing the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag.
In combination with any of the above embodiments, in an embodiment the apparatus may include a means for determining a numeric accumulation error non-zero precision flag and a means for controlling whether to ignore bits of the mantissa of the floating point result with a value of zero. The means for determining the numeric accumulation error non-zero precision flag may be based on the instruction, a previous instruction, or a floating point control register. The means for controlling whether to ignore bits of the mantissa may use the numeric accumulation error non-zero precision flag. The means for computing the amount of precision lost in the mantissa of the floating point result may be based on the numeric accumulation error non-zero precision flag. In combination with any of the above embodiments, in an embodiment the floating point result may be computed from source values and the instruction may be a fused multiply-add instruction. The means for executing the instruction may include a means for computing a sum based on the source values and a means for computing the floating point result based on the sum and at least one of the source values.
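Because the embodiments repeatedly single out the fused multiply-add case, a software model of that check is sketched below, purely for illustration. It computes a*b + c exactly using rationals (every finite double is a dyadic rational, so the sum is exact), counts the significand bits that would be discarded on rounding to the destination width, and applies the masked-exception convention described above. The function names and tuple return are illustrative assumptions, not the disclosed circuitry.

```python
from fractions import Fraction

def fma_bits_lost(a: float, b: float, c: float, dest_bits: int) -> int:
    """Exact a*b + c (floats are dyadic rationals), then count significand
    bits discarded when the exact result is rounded to dest_bits bits."""
    exact = Fraction(a) * Fraction(b) + Fraction(c)
    if exact == 0:
        return 0
    n = abs(exact.numerator)
    while n % 2 == 0:          # trailing zero bits carry no information
        n //= 2
    return max(0, n.bit_length() - dest_bits)

def fma_check(a, b, c, dest_bits, threshold_bits, mask_bit_set):
    """Return (flag_value, signal_exception): the flag is written whenever
    the loss exceeds the threshold; the exception is signaled only when the
    mask bit is not set."""
    flag = fma_bits_lost(a, b, c, dest_bits) > threshold_bits
    return flag, (flag and not mask_bit_set)
```

For instance, 1.0 * 1.0 + 2^-30 needs 31 significand bits; rounded to a 24-bit single-precision significand, 7 bits are lost, so a threshold of 4 sets the flag, and the exception is signaled only when unmasked.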
A technique for designing circuits including receiving a data object (514) representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component arranged in a first topology; identifying the first sub-circuit in the data object (518) by comparing the first topology to a stored topology, the stored topology associated with the first process technology; identifying a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit; determining a set of performance parameter values (520) for the first sub-circuit based on a first machine learning model of the first sub-circuit and the identified first set of physical parameter values; converting the identified first sub-circuit to a second sub-circuit (522) for a second process technology based on the determined set of performance parameter values; and outputting the second sub-circuit (526).
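The flow summarized above, keyed to reference numerals (514)-(526), can be sketched as follows. This is an illustrative model only: the data layout, the `perf_model` and `inverse_model` callables standing in for the first and second machine learning models, and all parameter names are hypothetical stand-ins, not an actual EDA tool API.

```python
def convert_circuit(data_object, stored_topologies, perf_model, inverse_model):
    """Illustrative pipeline: topology match, ML-derived performance
    parameters, then inverse mapping to target-process physical parameters."""
    converted = []
    for sub in data_object["sub_circuits"]:                        # receive (514)
        match = next((t for t in stored_topologies
                      if t["topology"] == sub["topology"]), None)  # identify (518)
        if match is None:
            continue                      # unrecognized topology: leave for review
        physical = sub["physical_params"]  # e.g., W/L ratios, bias currents
        perf = perf_model(match["name"], physical)                 # ML model (520)
        new_physical = inverse_model(match["name"], perf)          # convert (522)
        converted.append({"topology": sub["topology"],
                          "physical_params": new_physical})        # output (526)
    return converted
```

A toy usage: with a performance model mapping a width of 2.0 to a transconductance of 1.0, and an inverse model for the target process mapping that transconductance back to a width of 4.0, the converted sub-circuit carries the new width.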
CLAIMS

What is claimed is:

1. A method comprising: receiving a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology; identifying the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology; identifying sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit; determining a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters; converting the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values; and outputting the second sub-circuit.

2. The method of claim 1, wherein converting the identified first sub-circuit to the second sub-circuit comprises: determining a second set of sub-circuit physical parameters associated with a third electrical component and a fourth electrical component of the second sub-circuit based on a second ML model, for the second process technology, and the set of sub-circuit performance parameter values; and associating sub-circuit physical parameters of the second set of sub-circuit physical parameters with the third electrical component and the fourth electrical component of the second sub-circuit.

3. The method of claim 2, wherein the third electrical component and the fourth electrical component correspond to the first electrical component and the second electrical component, respectively.

4.
The method of claim 2, wherein the first ML model and second ML model comprise neural networks.

5. The method of claim 1, wherein the second process technology comprises a second semiconductor manufacturing process associated with smaller electrical components as compared to the first process technology.

6. The method of claim 1, further comprising verifying the second sub-circuit based on a circuit simulation of the second sub-circuit.

7. The method of claim 1, wherein sub-circuit performance parameters of the set of sub-circuit performance parameters are determined based on a type of the identified first sub-circuit.

8. The method of claim 1, wherein identifying the first sub-circuit is based on a set of rules.

9. A non-transitory program storage device comprising instructions stored thereon to cause one or more processors to: receive a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology; identify the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology; identify sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit; determine a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters; convert the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values; and output the second sub-circuit.

10.
The non-transitory program storage device of claim 9, wherein the instructions for converting the identified first sub-circuit to the second sub-circuit comprise instructions to cause the one or more processors to: determine a second set of sub-circuit physical parameters associated with a third electrical component and a fourth electrical component of the second sub-circuit based on a second ML model, for the second process technology, and the set of sub-circuit performance parameter values; and associate sub-circuit physical parameters of the second set of sub-circuit physical parameters with the third electrical component and the fourth electrical component of the second sub-circuit.

11. The non-transitory program storage device of claim 10, wherein the third electrical component and the fourth electrical component correspond to the first electrical component and the second electrical component, respectively.

12. The non-transitory program storage device of claim 9, wherein the first ML model and second ML model comprise neural networks.

13. The non-transitory program storage device of claim 9, wherein the second process technology comprises a second semiconductor manufacturing process associated with smaller electrical components as compared to the first process technology.

14. The non-transitory program storage device of claim 9, wherein the instructions further comprise instructions to cause the one or more processors to verify the second sub-circuit based on a circuit simulation of the second sub-circuit.

15. The non-transitory program storage device of claim 9, wherein performance parameters of the set of performance parameters are determined based on a type of the identified first sub-circuit.

16. The non-transitory program storage device of claim 9, wherein identifying the first sub-circuit is based on a set of rules.

17.
An electronic device, comprising: a memory; and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to: receive a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology; identify the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology; identify sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit; determine a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters; convert the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values; and output the second sub-circuit.

18. The electronic device of claim 17, wherein the instructions for converting the identified first sub-circuit to the second sub-circuit comprise instructions to cause the one or more processors to: determine a second set of sub-circuit physical parameters associated with a third electrical component and a fourth electrical component of the second sub-circuit based on a second ML model, for the second process technology, and the set of sub-circuit performance parameter values; and associate sub-circuit physical parameters of the second set of sub-circuit physical parameters with the third electrical component and the fourth electrical component of the second sub-circuit.

19.
The electronic device of claim 18, wherein the third electrical component and the fourth electrical component correspond to the first electrical component and the second electrical component, respectively.

20. The electronic device of claim 19, wherein the first ML model and second ML model comprise neural networks.

21. The electronic device of claim 18, wherein sub-circuit performance parameters of the set of sub-circuit performance parameters are determined based on a type of the identified first sub-circuit.
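By way of example only, the topology-identification step recited in the claims can be modeled as below. Each sub-circuit is reduced to a set of (component type, component type, shared terminal) couplings and compared against stored topologies; a production matcher would need graph isomorphism over full netlists, so this equality check, along with the names and data shapes, is a deliberate illustrative simplification.

```python
def identify_topology(couplings, stored_topologies):
    """Return the name of the first stored topology whose coupling set
    matches the sub-circuit's couplings, or None if unrecognized.

    couplings: set of (component_type, component_type, shared_terminal)
    stored_topologies: dict mapping topology name -> coupling set
    """
    for name, stored in stored_topologies.items():
        if stored == couplings:
            return name
    return None
```

For example, two NMOS devices sharing gates and sources match a stored "current_mirror" coupling set, while an unmatched PMOS pairing returns None for manual review.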
AUTOMATED ANALOG AND MIXED-SIGNAL CIRCUIT DESIGN AND VALIDATION

BACKGROUND

[0001] Analog circuits are often used to sense, interact with, and/or control real-world signals. Real-world signals or information are analog as they are a continuous quantity. For example, temperature varies across an infinite range (e.g., has infinite values) rather than just by discrete integer values. In comparison, digital circuits operate on discrete values, ones and zeros, which are used to represent analog signals or information. To help digital circuits handle analog signals or information, digital circuits can interact with or incorporate analog circuits. For example, a temperature sensor may include one or more analog circuits to sample a temperature, one or more hybrid circuits to convert the sampled temperature to a digital value, and one or more digital circuits to process the digital value. Similarly, a digital circuit may process an audio file, a hybrid circuit may perform a digital to analog conversion, an analog circuit may amplify the analog signal, and a speaker may output the actual sound encoded in the audio file. It may be understood that, as used herein, an analog circuit may refer to either analog or hybrid circuits (e.g., mixed signal circuits), which may include both analog and digital portions.

[0002] As integrated circuits advance, the number of components that can fit in an area of a semiconductor die has increased rapidly. This reduction in size, also known as die shrink, helps reduce costs and improve performance of the resulting circuit chips. While die shrinking and semiconductor scaling techniques are relatively straightforward for digital circuits, scaling analog circuits is much more difficult. For example, analog circuits may be more substantially affected by voltage headroom, gain degradation, signal to noise ratio adjustments, etc., as compared to digital circuits.
Circuit geometry and configuration in an analog or hybrid sub-circuit, such as a differential pair, may influence not only the performance of the differential pair, but may also influence the performance of other sub-circuits, such as a current mirror, in another part of the overall circuit. Additionally, different process nodes or semiconductor process technologies may influence how the circuit geometry and configuration affect the performance. Depending on the purpose of the overall circuit, this performance difference may be unacceptable. Scaling between different sized process nodes may also affect sub-circuits differently such that each sub-circuit, or even individual components, may have a different scaling factor. Some analog circuits may need extensive manual changes or redesigns when attempting to scale a design between process nodes.

SUMMARY

[0003] This disclosure relates to techniques for designing circuits. More particularly, but not by way of limitation, aspects of the present disclosure relate to a method including receiving a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identifying the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identifying sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determining a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters, converting the identified first sub-circuit to a second sub-circuit for a second process
technology based on the determined set of sub-circuit performance parameter values, and outputting the second sub-circuit.

[0004] Another aspect of the present disclosure relates to a non-transitory program storage device including instructions stored thereon to cause one or more processors to receive a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identify the first sub-circuit in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology, identify sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters, convert the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values, and output the converted first sub-circuit.

[0005] Another aspect of the present disclosure relates to an electronic device including a memory; and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a data object representing a circuit for a first process technology, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identify the first sub-circuit in the data object by
comparing the first topology to a stored topology, the stored topology associated with the first process technology, identify sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of sub-circuit performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters, convert the identified first sub-circuit to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values, and output the converted first sub-circuit.[0006] Another aspect of the present disclosure relates to a method comprising receiving a data object representing a circuit, the circuit including a sub-circuit, the sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, receiving a set of stored topologies, identifying the first electrical component, second electrical component, and connections of the first electrical component and second electrical component, determining, based on the connections of the first electrical component, a coupling between the first electrical component and the second electrical component, determining the first topology based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component, and topologies of the set of stored topologies, and outputting the identified first topology.[0007] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive a data object representing a circuit, the circuit including a sub-circuit, the
sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, receive a set of stored topologies, identify the first electrical component, second electrical component, and connections of the first electrical component and second electrical component, determine, based on the connections of the first electrical component, a coupling between the first electrical component and the second electrical component, determine the first topology based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component, and topologies of the set of stored topologies, and output the identified first topology.[0008] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a data object representing a circuit, the circuit including a sub-circuit, the sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, receive a set of stored topologies, identify the first electrical component, second electrical component, and connections of the first electrical component and second electrical component, determine, based on the connections of the first electrical component, a coupling between the first electrical component and the second electrical component, determine the first topology based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component,
and topologies of the set of stored topologies, and output the identified first topology.[0009] Another aspect of the present disclosure relates to a method comprising receiving a data object representing a circuit for a process technology, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identifying the first sub-circuit in the circuit by comparing the first topology to a stored topology, the stored topology associated with the process technology, identifying a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determining a set of performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values, converting the identified first sub-circuit to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology, and outputting the second sub-circuit.
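By way of illustration only, the conversion flow summarized above can be sketched in a few lines of Python. All names below (SubCircuit, match_topology, convert, the "target_node" label, and the toy stand-in for a trained ML model) are hypothetical and are not part of the disclosure; they only show the data flow of identify-topology, predict-performance, convert:

```python
# Illustrative sketch of the disclosed flow; names and formats are assumptions.
from dataclasses import dataclass, field


@dataclass
class SubCircuit:
    topology: str                                 # e.g., "current_mirror"
    physical: dict = field(default_factory=dict)  # physical parameter values


def match_topology(sub, stored_topologies):
    """Identify the sub-circuit by comparing its topology to stored topologies."""
    return sub.topology if sub.topology in stored_topologies else None


def convert(sub, ml_model, stored_topologies, target_technology):
    """Convert an identified sub-circuit toward a second process technology."""
    if match_topology(sub, stored_topologies) is None:
        raise ValueError("topology not recognized for this process technology")
    # ml_model stands in for a trained model mapping physical parameter
    # values to performance parameter values.
    performance = ml_model(sub.physical)
    return {"technology": target_technology, "performance": performance}
```

A toy callable such as `lambda p: {"gain": p["W"] / p["L"]}` can stand in for the trained ML model when exercising the flow.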
[0010] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive a data object representing a circuit for a process technology, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology, identify a type of the first sub-circuit based on connections of the first electrical component and the second electrical component, identify the first sub-circuit in the circuit by comparing the first topology to a stored topology, the stored topology associated with the process technology, identify a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values, convert the identified first sub-circuit to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology, and output the second sub-circuit.[0011] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a data object representing a circuit for a process technology, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical
component arranged in a first topology, identify a type of the first sub-circuit based on connections of the first electrical component and the second electrical component, identify the first sub-circuit in the circuit by comparing the first topology to a stored topology, the stored topology associated with the process technology, identify a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit, determine a set of performance parameter values for the first sub-circuit based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values, convert the identified first sub-circuit to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology, and output the second sub-circuit.
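The topology-identification step recited in the aspects above (identify components, identify their connections, determine a coupling, and compare against stored topologies) can be sketched as follows. The dictionary-based netlist format and the signature scheme are illustrative assumptions, not the disclosed representation:

```python
# Minimal sketch of topology identification by component types and coupling.
def coupled(first, second):
    """Two electrical components are coupled if they share at least one net."""
    return bool(set(first["nets"]) & set(second["nets"]))


def identify_topology(first, second, stored_topologies):
    """Compare the identified components and their determined coupling against
    a set of stored topologies; return the matching topology name, or None."""
    signature = (first["type"], second["type"], coupled(first, second))
    for name, stored_signature in stored_topologies.items():
        if signature == stored_signature:
            return name
    return None
```

For example, two NMOS devices sharing a gate net and a ground net would match a stored `("nmos", "nmos", True)` signature labeled as a current mirror.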
[0012] Another aspect of the present disclosure relates to a method comprising receiving an indication of a sub-circuit type and a set of sub-circuit performance parameter values, determining a sub-circuit topology based on the sub-circuit type and the set of sub-circuit performance parameter values, determining a set of sub-circuit physical parameter values based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values, generating a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology, and outputting the data object.[0013] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive an indication of a sub-circuit type and a set of sub-circuit performance parameter values, determine a sub-circuit topology based on the sub-circuit type and the set of sub-circuit performance parameter values, determine a set of sub-circuit physical parameter values based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values, generate a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology, and output the data object.[0014] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive an indication of a sub-circuit type and a set of sub-circuit performance parameter values, determine a sub-circuit topology based on the sub-circuit type and the set of sub-circuit performance parameter values, determine a set of sub-circuit physical parameter
values based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values, generate a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology, and output the data object.[0015] Another aspect of the present disclosure relates to a method comprising receiving a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, determining a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from the sub-circuit physical parameters of the first set of sub-circuit physical parameters, simulating the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation, training a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the first set of sub-circuit performance parameter values associated with the first variation, for the first process technology, and storing the trained ML model.[0016] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, determine a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from the sub-circuit physical parameters of the first set of sub-circuit physical parameters, simulate the first
variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation, train a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the first set of sub-circuit performance parameter values associated with the first variation, for the first process technology, and store the trained ML model. [0017] Another aspect of the present disclosure relates to an electronic device, comprising a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, determine a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit, the first variation including at least one sub-circuit physical parameter that varies from the sub-circuit physical parameters of the first set of sub-circuit physical parameters, simulate the first variation of sub-circuit physical parameters in the first process technology to generate a first set of sub-circuit performance parameter values associated with the first variation, train a machine learning (ML) model of the structural sub-circuit based on a set of variations, the set of variations including the first variation and the first set of sub-circuit performance parameter values associated with the first variation, for the first process technology, and store the trained ML model.[0018] Another aspect of the present disclosure relates to a method comprising receiving an initial set of parameters, the initial set of parameters associated with a sub-circuit, interacting a first parameter of the initial set of parameters with other parameters of the initial set of parameters to
generate a set of interacted parameters, adding the interacted parameters to the initial set of parameters to generate a candidate set of parameters, performing a linear regression on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters, removing parameters of the candidate set of parameters based on a comparison between the predictive value and a predetermined predictive threshold, determining an accuracy of the candidate set of parameters based on the linear regression, comparing the accuracy of the candidate set of parameters to a predetermined accuracy level, wherein if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, outputting the candidate set of parameters, and wherein if the accuracy of the candidate set of parameters does not reach the predetermined accuracy level, repeating the steps of: interacting a second parameter of the initial set of parameters with other parameters of the candidate set of parameters, adding the interacted parameters to the candidate set of parameters, performing the linear regression, removing parameters, determining the accuracy, comparing the accuracy, until: the accuracy of the second candidate set of parameters has reached the predetermined accuracy, or each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times, and outputting the candidate set of parameters.[0019] Another aspect of the present disclosure relates to a non-transitory program storage device comprising instructions stored thereon to cause one or more processors to receive an initial set of parameters, the initial set of parameters associated with a sub-circuit, interact a first parameter of the initial set of parameters with other parameters of the initial set of parameters to generate a set of interacted parameters, add the
interacted parameters to the initial set of parameters to generate a candidate set of parameters, perform a linear regression on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters, remove parameters of the candidate set of parameters based on a comparison between the predictive value and a predetermined predictive threshold, determine an accuracy of the candidate set of parameters based on the linear regression, compare the accuracy of the candidate set of parameters to a predetermined accuracy level, wherein if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, output the candidate set of parameters, and wherein if the accuracy of the candidate set of parameters does not reach the predetermined accuracy level, repeat the steps of: interact a second parameter of the initial set of parameters with other parameters of the candidate set of parameters, add the interacted parameters to the candidate set of parameters, perform the linear regression, remove parameters, determine the accuracy, compare the accuracy, until: the accuracy of the second candidate set of parameters has reached the predetermined accuracy, or each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times; and output the candidate set of parameters.[0020] Another aspect of the present disclosure relates to an electronic device, comprising: a memory, and one or more processors operatively coupled to the memory, wherein the one or more processors are configured to execute instructions causing the one or more processors to receive an initial set of parameters, the initial set of parameters associated with a sub-circuit, interact a first parameter of the initial set of parameters with other parameters of the initial set of parameters to generate a set of interacted
parameters, add the interacted parameters to the initial set of parameters to generate a candidate set of parameters, perform a linear regression on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters, remove parameters of the candidate set of parameters based on a comparison between the predictive value and a predetermined predictive threshold, determine an accuracy of the candidate set of parameters based on the linear regression, compare the accuracy of the candidate set of parameters to a predetermined accuracy level, wherein if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, output the candidate set of parameters, and wherein if the accuracy of the candidate set of parameters does not reach the predetermined accuracy level, repeat the steps of: interact a second parameter of the initial set of parameters with other parameters of the candidate set of parameters, add the interacted parameters to the candidate set of parameters, perform the linear regression, remove parameters, determine the accuracy, compare the accuracy, until: the accuracy of the second candidate set of parameters has reached the predetermined accuracy, or each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times, and output the candidate set of parameters.BRIEF DESCRIPTION OF THE DRAWINGS[0021] For a detailed description of various examples, reference will now be made to the accompanying drawings in which:[0022] FIG. 1 illustrates an example of circuit design evolution, in accordance with aspects of the present disclosure. [0023] FIG. 2 is a block diagram of an analog circuit, in accordance with aspects of the present disclosure.[0024] FIGs.
3A-3B are a circuit diagram of an illustrative circuit block, in accordance with aspects of the present disclosure.[0025] FIG. 4 is a circuit diagram illustrating a sub-circuit, in accordance with aspects of the present disclosure.[0026] FIG. 5 is a block diagram of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure. [0027] FIG. 6 is a block diagram of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure. [0028] FIGs. 7A-7B illustrate an example set of known topologies of an input or gain stage for a given process technology, in accordance with aspects of the present disclosure.[0029] FIG. 8 is a system diagram illustrating an overview of a technique for designing a new analog circuit from an original analog circuit, in accordance with aspects of the present disclosure.[0030] FIG. 9 is a chart illustrating sets of performance parameters for certain sub-circuits, in accordance with aspects of the present disclosure.[0031] FIG. 10 illustrates an example neural network ML model, in accordance with aspects of the present disclosure.[0032] FIG. 11 illustrates a series of ML model parameters for threshold stepwise selection, in accordance with aspects of the present disclosure.[0033] FIG. 12 is a flow diagram illustrating an overview of a technique for designing circuits, in accordance with aspects of the present disclosure.[0034] FIG. 13 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.[0035] FIG. 14 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.[0036] FIG. 15 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.[0037] FIG.
16 is a flow diagram illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.[0038] FIGs. 17A-17B are flow diagrams illustrating a technique for designing circuits, in accordance with aspects of the present disclosure.[0039] FIG. 18 is a block diagram of an embodiment of a computing device, in accordance with aspects of the present disclosure.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0040] Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.[0041] As digital circuits become ever more common in our lives, interfaces between these digital circuits and the real, analog world become ever more prevalent. As improved manufacturing process technologies for fabricating semiconductors are developed, digital circuit sizes have steadily shrunk, allowing digital circuits to take advantage of new, smaller process technology nodes. Generally, process technology nodes refer to the size of a transistor gate length of a particular semiconductor manufacturing process technology. However, analog circuits have not shrunk at the same pace, as analog circuits often require extensive redesign between different semiconductor manufacturing process technologies and/or process technology nodes (hereinafter referred to as process technology), rather than a relatively simple size shrink. Additionally, circuits may be modified to enhance functionality.
For example, a circuit may be modified to adjust an operating voltage of the circuit to help reduce power requirements, or the circuit may be modified to expand an operating range for the circuit. For a particular process technology, aspects of each electrical component of the analog circuit and how the electrical component may interact with characteristics of the manufacturing process may influence the performance of the overall circuit in non-linear and difficult-to-predict ways. This makes simply resizing or copying a circuit from one process technology to another difficult. Similarly, these interactions make modifying the functionality of a circuit difficult to implement.[0042] Manufacturing process technologies for fabricating semiconductors have evolved as digital and analog circuits have become more common. FIG. 1 illustrates an example 100 of circuit design evolution, in accordance with aspects of the present disclosure. In this example 100, a circuit 102 (individually, 102A, 102B, and 102C, and collectively 102) includes three sub-circuit blocks, such as bandgap 104 (individually, 104A, 104B, and 104C, and collectively 104), operational amplifier 106 (individually, 106A, 106B, and 106C, and collectively 106), and driver 108 (individually, 108A, 108B, and 108C, and collectively 108). In this example, the circuit 102A may be currently implemented in a first process technology 110. The circuit 102 may be converted from the first process technology 110 to a second process technology 112 while maintaining the same overall operating specifications, such as an operating voltage. For example, in this case, the circuit 102 may be converted from the first process technology 110 to the second process technology 112 while maintaining a 3.3 V operating voltage.[0043] Additionally, in certain cases, the circuit 102 may be redesigned, for example to enhance functionality.
In this example, circuit 102B may be redesigned as circuit 102C to reduce the operating voltage while using the same process technology, here the second process technology 114. In certain cases, redesigning the circuit 102 may include updating design specifications, for example, of electrical devices of certain sub-circuit blocks, such as the operational amplifier 106C and the bandgap 104C. In other cases, re-architecting, for example to adjust a circuit layout, may be included, such as shown for the driver 108C.[0044] Presently, modifying a circuit design or converting the circuit design from one process technology to another is largely a manual process. For example, a circuit designer may have a set of design specifications that a circuit should meet. These design specifications may be based on expected performance of the circuit, so, for example, an amplifier circuit may have design specifications for output resistance, distortion, impedance, etc. The designer may then convert each electronic component of the circuit, taking into consideration the physical parameters of the electronic component in the original process technology and determining physical parameters of the electronic component in the target process technology. This determination is largely based on experience and intuition. After the electronic components are converted to the target process technology, the completed circuit may be simulated with circuit simulation software, such as simulation program with integrated circuit emphasis (SPICE), against the design specifications. If the converted circuit does not meet the design specifications, the circuit designer may adjust the circuit, such as by changing physical parameters of certain electronic components, and simulate the circuit again. This adjustment is also largely based on experience and intuition and is generally an iterative process.
It may be understood that electrical components, as used herein, refer to components or devices which make up a circuit, such as transistors, resistors, capacitors, inductors, diodes, etc. [0045] To help accelerate efforts to transition analog circuits from one process technology to another, as well as the development of new and improved analog circuits, an implementation of automated analog and mixed signal circuit design and validation is desired.[0046] In certain cases, while a circuit may be presented visually, for example by a circuit design or simulation program, the underlying representation of the circuit may be in the form of one or more netlists or in a hardware description language (HDL). A netlist, or HDL, is generally a list of the electrical components of a circuit and a list of the nodes to which each electrical component is connected. In certain cases, attributes, structural information, physical parameters, or other information may also be included in the netlist. Moreover, in certain embodiments, the netlist or HDL is stored in a data object.[0047] Sub-Circuits[0048] FIG. 2 is a block diagram 200 of an analog circuit, in accordance with aspects of the present disclosure. Analog circuit 202 is any type of analog or hybrid analog-digital circuit that comprises a plurality of electrical components. In most embodiments, analog circuit 202 will process, generate, transmit, or receive an analog signal using one or more of the plurality of electrical components in the analog circuit 202. Analog circuit 202 may be a part of a larger circuit (e.g., an integrated circuit) or analog circuit 202 may be the entire circuit (e.g., an integrated circuit). Typically, the analog circuit 202 consists of one or more circuit blocks 204. Often, circuits are designed such that particular portions of the circuit perform certain tasks. For example, a circuit 202 may be divided into portions, or circuit blocks 204, which perform a particular function.
Circuit blocks 204 may be any type of Intellectual Property ("IP") block, IP core, functional block, or collection of components. In addition, circuit blocks 204 may provide one or more functions for analog circuit 202. In certain embodiments, circuit block 204 is analog circuit 202. These circuit blocks 204 may be described, for example, in the netlist in a manner similar to software functions and referenced by, for example, another netlist or circuit block describing a larger portion of the circuit. In certain cases, circuit blocks 204 may include other circuit blocks.[0049] Analog circuit 202 may comprise one or more sub-circuits, and circuit block 204 may also comprise one or more sub-circuits. In certain embodiments, a sub-circuit may be the same as circuit block 204 and/or analog circuit 202. In other embodiments, circuit block 204 may comprise a subset of the one or more sub-circuits of analog circuit 202. A sub-circuit refers to a portion of a circuit that is less than the whole circuit (e.g., a subset of a circuit). In alternative embodiments, the sub-circuit may refer to the whole circuit.[0050] A sub-circuit may comprise one or more of the plurality of electrical components in the analog circuit 202. Sub-circuits may be classified into sub-circuit types.
A non-exhaustive list of sub-circuit types may include, but is not limited to, a current mirror, a current divider, a current source, a current reference, a driver circuit, a level-shift stage, a gain stage, an operational amplifier, a current mirror operational amplifier, an inverting or non-inverting amplifier, a filter (e.g., a band pass filter, a low pass filter, or a high pass filter), an RC circuit, a resistor ladder, a voltage ladder, a power amplifier, a clock source, an analog-to-digital converter ("ADC"), a digital-to-analog converter ("DAC"), a voltage follower, a voltage regulator, a Darlington transistor or pair, a boost circuit (e.g., a step-up circuit), a buck circuit (e.g., a step-down circuit), a mixer, a modulator, an inverter, a signal conditioner, an integrator, a differentiator, an input stage, an output stage, or any other identifiable sub-circuit type used in analog circuits.[0051] FIGs. 3A and 3B are a circuit diagram of an illustrative circuit block 300, in accordance with aspects of the present disclosure. As shown in FIGs. 3A and 3B, the circuit block 300 may be further divided into sub-circuits 302. In this example, circuit block 300 performs the function of amplifying a signal. Sub-circuits 302 are portions of the circuit block 300 intended to perform a purpose, such as providing a reference voltage, copying a current, filtering a signal, etc. Sub-circuits 302 include a set of electrical components which are structured to operate together to perform the purpose, and the set of electrical components may influence one or multiple output parameters of the circuit block. Sub-circuits may often function as building blocks of the overall circuit block, performing functions common across many circuit blocks. In certain cases, sub-circuits may be classified by their functions into types or categories.
Classifying sub-circuits of a circuit block allows circuit blocks to be analyzed for conversion and/or creation at a sub-circuit level, helping to break down a circuit block into more easily analyzed components. Examples of sub-circuit types include current mirror 304, input stages 306, output stages 308, passives 310, voltage ladders, resistor ladders, etc. In certain cases, miscellaneous blocks 312 may also be identified for the circuit block. These miscellaneous blocks 312 may include, for example, nested circuit blocks 314, single electrical components 316 which may not have been included with other identified sub-circuits, unidentified sub-circuits which may need further analysis, etc.[0052] FIG. 4 is a circuit diagram illustrating a sub-circuit 400, in accordance with aspects of the present disclosure. In this example, the sub-circuit 400 is a type of current mirror and includes two electrical components, a first transistor 402 and a second transistor 404. Each electrical component has certain physical parameters, which describe measurable physical characteristics of the electrical component, such as a channel width (W) and a channel length (L), input and output currents, impedance, operating region (e.g., conditions), N-type/P-type, etc. The sub-circuit physical parameters may refer to the physical parameters of electrical components of the sub-circuit, and the sub-circuit physical parameters may also include operating information (e.g., operating point, bias point, quiescent point, Q-point, etc.). The operating point represents a current or voltage at the terminals of the electrical component for the electrical component to operate.
Each electrical component may perform a particular role; for example, the current (IREF) that flows through the first transistor 402 is mirrored through the second transistor 404 as a function of the ratio (N) of the sizes of the transistors 402 and 404, and the first transistor 402 may act as a current-to-voltage converter while the second transistor 404 may act as a voltage-to-current converter. Based on the electrical components of the sub-circuit and their associated physical parameters, the overall sub-circuit may be associated with a variety of sub-circuit performance parameters. While a variety of sub-circuit performance parameters may be determined, not all sub-circuit performance parameters are important for a given sub-circuit type.[0053] A sub-circuit may have a plurality of sub-circuit parameters. Sub-circuit parameters may comprise sub-circuit physical parameters of the sub-circuit, sub-circuit operational parameters of the sub-circuit, sub-circuit performance parameters of the sub-circuit, or a combination of physical parameters, operational parameters, and performance parameters of the sub-circuit. There may be a variety of sub-circuit parameters, which may describe how a particular sub-circuit performs in a variety of ways. The physical parameters of the electrical components and how electrical components of the sub-circuit are connected are factors which influence the sub-circuit parameters, but the relationship between these factors and the sub-circuit parameters is often non-linear and varies depending on the process technology. Sub-circuit parameters may be determined for a particular sub-circuit using circuit simulation software, such as SPICE simulation. For example, operating information of a sub-circuit may be determined using circuit simulation software.
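As a concrete illustration of the mirroring relationship described above, an ideal current mirror scales the reference current by the ratio of the transistor sizes (taken here as W/L ratios). The following sketch is illustrative only; the function name and parameters are assumptions, and second-order effects such as channel-length modulation and mismatch are ignored:

```python
def mirrored_current(i_ref, w1, l1, w2, l2):
    """Ideal saturation-region estimate of a current mirror's output:
    IOUT = IREF * N, where N = (W2/L2) / (W1/L1)."""
    n = (w2 / l2) / (w1 / l1)
    return n * i_ref

# A 10 uA reference mirrored through a second transistor twice as wide
# as the first yields approximately 20 uA.
i_out = mirrored_current(10e-6, w1=1.0, l1=1.0, w2=2.0, l2=1.0)
```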
The operating information takes into account external influences on the circuit, for example, characteristics of a supply current, and determines the state (e.g., bias current) of the electrical devices of the sub-circuit and/or circuit. In certain cases, determining operating information using circuit simulation software may be performed relatively quickly as compared to determining sub-circuit performance parameters of a sub-circuit using circuit simulation software.[0054] Of the sub-circuit parameters, a set of sub-circuit performance parameters may be identified as being more relevant to describing the performance of a particular sub-circuit with respect to physical parameters of electrical components of the sub-circuit. This set of sub-circuit performance parameters may be determined to be more relevant based on the function of the particular sub-circuit. In certain cases, sub-circuit performance parameters of the set of sub-circuit performance parameters included for a particular type of sub-circuit may be predetermined. In certain cases, this predetermination of the set of sub-circuit performance parameters for a particular type of sub-circuit may be made based on expert knowledge and/or experience as to what sub-circuit performance parameters are more relevant for the type of sub-circuit.[0055] In certain cases, the sub-circuit performance parameters for a sub-circuit type may be predetermined algorithmically. For example, where a circuit including the sub-circuit type in question has been successfully converted from a first process technology to a second process technology, the sub-circuit type may be modeled, such as in circuit simulation software, as designed in the first process technology and modeled again as designed in the second process technology. A variety of sub-circuit performance parameters may be determined for both models and then compared to determine which performance parameters are most closely maintained after the conversion.
This process may be repeated with multiple examples of the sub-circuit type, either in the same or different circuits, as well as with different topologies of the sub-circuit type, to obtain a representative sample to determine the set of performance parameters that are most relevant for converting the sub-circuit type.[0056] Sub-circuit performance parameters included in the set of sub-circuit performance parameters may differ for different types of sub-circuits, as the purposes served by different types of sub-circuits differ. As an example, a set of sub-circuit performance parameters for current mirrors may include current matching, output impedance, operating region, and a width and length of the transistors. The sub-circuit performance parameters included in this set of sub-circuit performance parameters may differ from sub-circuit performance parameters included in another set of sub-circuit performance parameters associated with an input stage sub-circuit type. In certain cases, if a set of sub-circuit performance parameters has not been defined for a particular sub-circuit type, performance parameters of the electrical components may be used instead of sub-circuit performance parameters.[0057] FIG. 5 is a block diagram 500 of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure. The example embodiment illustrated in block diagram 500 provides an overview of an exemplary technique for converting a circuit design from a first process technology to a second process technology, aspects of which are discussed in more detail below. An analog circuit may be divided into one or more circuit blocks. These circuit blocks are often designed to perform a certain function and comprise one or more sub-circuits. These sub-circuits include one or more electrical components which are structured to operate together, and a set of known sub-circuits may be identified.
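The algorithmic predetermination described in paragraph [0055] amounts to ranking performance parameters by how little they drift between the two process technologies across the converted examples. A minimal sketch follows; the function name, the averaging strategy, and the parameter names are illustrative assumptions rather than part of the disclosure:

```python
def maintained_parameters(examples, keep=3):
    """Rank performance parameters by average relative drift across
    converted examples and keep the most closely maintained ones.

    examples: list of (before, after) dicts, each mapping a parameter
    name to its simulated value in the first and second process
    technologies respectively."""
    drift = {}
    for before, after in examples:
        for name, v_before in before.items():
            rel = abs(after[name] - v_before) / max(abs(v_before), 1e-12)
            drift.setdefault(name, []).append(rel)
    average = {name: sum(vals) / len(vals) for name, vals in drift.items()}
    # Smallest average drift first: these are the parameters most
    # closely maintained after the conversion.
    return sorted(average, key=average.get)[:keep]
```

A parameter whose value barely changes across conversions (e.g., a 1% drift) ranks ahead of one that doubles, so it would be included in the predetermined set for the sub-circuit type.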
These known sub-circuits may be a number of arrangements of the electrical components (e.g., topologies) known to be sufficiently robust to be useable in a circuit for a process technology. Each component of a sub-circuit may be associated with a certain range of physical parameters. Sets of sub-circuit physical parameters may be identified, each set having a different combination of physical parameters for the electrical components. These known sub-circuits may be modeled, for example as a netlist for use with a circuit simulator, for each set of sub-circuit physical parameters. A netlist generally is a list of the electrical components of a circuit and a list of nodes each electronic component is connected with. At block 502, models of these known sub-circuits may be simulated using circuit simulation software, such as SPICE. Each set of sub-circuit physical parameters may be simulated to identify certain sub-circuit performance parameters associated with a given set of sub-circuit physical parameters for the first process technology. A ML model for each sub-circuit of the known sub-circuits (or those sub-circuits supported by the particular embodiment) may be trained at block 504 to create a set of trained ML models for a process technology. In this embodiment, these trained ML models in the ML model library 506 may receive, as input, a set of sub-circuit physical parameters for electronic components of the sub-circuit for the first process technology and predict, as output, a set of sub-circuit performance parameters for the first process technology. These trained ML models may be stored in a ML model library 506.
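The per-topology arrangement at blocks 502–504 can be sketched as follows. A nearest-neighbour lookup over the simulated sweep stands in for whatever regression or neural-network model an actual embodiment would train at block 504; the class, names, and sample values are all illustrative assumptions:

```python
class SweepModel:
    """Stand-in for a trained per-topology ML model: given physical
    parameters, return the performance parameters of the closest
    simulated sample (i.e., block 502's sweep results)."""

    def __init__(self, samples):
        # samples: list of (physical_vector, performance_dict) pairs
        # produced by SPICE-style simulation of one known topology.
        self.samples = samples

    def predict(self, physical):
        def sq_dist(sample):
            vec, _ = sample
            return sum((a - b) ** 2 for a, b in zip(vec, physical))
        return min(self.samples, key=sq_dist)[1]

# "Block 504": one model per known topology, keyed by name, playing
# the role of ML model library 506 in this sketch.
model_library = {
    "current_mirror": SweepModel([
        ((1.0, 1.0), {"GDS": 0.02, "Idmm": 0.01}),   # (W, L) -> performance
        ((2.0, 1.0), {"GDS": 0.04, "Idmm": 0.02}),
    ]),
}
```

Querying `model_library["current_mirror"].predict((1.9, 1.0))` returns the performance parameters of the nearest simulated variant, mirroring how a trained model would map physical parameters to performance parameters for the first process technology.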
In certain cases, the ML model library 506 may be created once for a process technology and reused as needed.[0058] Similarly, for a second process technology, a set of trained ML models may be configured to receive, as input, a set of sub-circuit performance parameters and predict a set of sub-circuit physical parameters for electronic components of the sub-circuit. As described above, each component of a sub-circuit may be associated with a certain range of physical parameters, and sets of sub-circuit physical parameters may be identified, each set having a different combination of physical parameters for the electrical components. A set of known sub-circuits may be modeled, for example, as netlists, for each set of sub-circuit physical parameters. Each set of sub-circuit physical parameters may be simulated to identify certain sub-circuit performance parameters associated with a given set of sub-circuit physical parameters for the second process technology at block 508. At block 510, a ML model for each sub-circuit of the known sub-circuits (or those sub-circuits supported by the particular embodiment) may be trained to create a set of trained ML models for the second process technology. In this embodiment, these trained ML models in the ML model library 512 may receive, as input, a set of sub-circuit performance parameters for a second process technology and predict, as output, sub-circuit physical parameters for electronic components of the sub-circuit for the second process technology. This set of trained ML models may be stored in the ML model library.[0059] Thus, this example includes two sets of ML models. The first set of ML models takes sub-circuit physical parameters for a first process technology and predicts certain sub-circuit performance parameters for a particular sub-circuit.
The second set of ML models takes the certain sub-circuit performance parameters for the particular sub-circuit and predicts sub-circuit physical parameters for electrical components of the particular sub-circuit for the second process technology. [0060] In this example, a representation 514 of a circuit, such as a netlist describing the circuit, may be parsed to identify one or more circuit blocks at block 516. A circuit block may be parsed to identify sub-circuits of the circuit block at block 518. A sub-circuit type may also be identified. At block 520, for each identified sub-circuit, sub-circuit physical parameters for components of the sub-circuit are identified and input to a ML model corresponding to the identified sub-circuit for the first process technology (e.g., stored in ML model library 506) to predict certain sub-circuit performance parameters. These predicted certain sub-circuit performance parameters are then input to a second ML model corresponding to the identified sub-circuit for the second process technology (e.g., stored in ML model library 512) to predict certain sub-circuit physical parameters for components of the sub-circuit in the second process technology. At block 522, a representation of the sub-circuit, such as a netlist, is created for each identified sub-circuit based on the predicted certain sub-circuit physical parameters for components of each sub-circuit, and the sub-circuits may be connected into circuit blocks, which in turn are connected to form an overall circuit, thus converting the original circuit to a new circuit in the second process technology. At block 524, this new circuit may be simulated to verify that the new circuit meets the design specification, and if the design specifications are met, the representation of the new circuit may be output at block 526.[0061] FIG.
6 is a block diagram 600 of an example embodiment of a technique for automated analog and mixed signal circuit design and validation, in accordance with aspects of the present disclosure. The example embodiment illustrated in block diagram 600 provides an overview of an exemplary technique to create a new circuit or optimize an existing circuit, aspects of which are discussed in more detail below. As discussed in conjunction with FIG. 5, analog circuits may be divided into circuit blocks and sub-circuits. Known sub-circuits may be modeled, for example as a netlist for use with a circuit simulator, for sets of sub-circuit physical parameters. At block 502, these models may be simulated using circuit simulation software, such as SPICE. Each set of sub-circuit physical parameters may be simulated to identify certain sub-circuit performance parameters associated with a given set of sub-circuit physical parameters for the first process technology. A ML model for each sub-circuit of the known sub-circuits (or those sub-circuits supported by the particular embodiment) may be trained at block 504 to create a set of trained ML models for a process technology. In this embodiment, certain trained ML models in the ML model library 506 may receive, as input, a set of sub-circuit physical parameters for electronic components of the sub-circuit for the first process technology and predict, as output, a set of sub-circuit performance parameters. Additionally, other trained ML models in the ML model library may receive, as input, sets of sub-circuit performance parameters for the first process technology and predict, as output, a set of sub-circuit physical parameters for electronic components of the sub-circuit for the first process technology. These trained ML models may be stored in a ML model library 506.[0062] At block 516, a circuit block may be identified from a representation of a circuit 514.
For example, an algorithm attempting to optimize an existing circuit may parse the representation of a circuit, for example stored as a data object such as a netlist, to identify a circuit block. As another example, a user attempting to create a new circuit may identify a circuit block 516 they are working on. At block 518, one or more sub-circuits of a circuit block may be identified. For example, an algorithm may parse a circuit block to identify sub-circuits of the circuit block. As another example, the user may identify a sub-circuit type that they are attempting to design. The user may alternatively or additionally identify other sub-circuits of the circuit block. At block 520, a set of performance parameter values for the sub-circuit may be identified. For example, an algorithm may, for each identified sub-circuit, identify sub-circuit physical parameters for components of the sub-circuit and input these sub-circuit physical parameters to a ML model corresponding to the identified sub-circuit for the first process technology (e.g., stored in ML model library 506) to predict a set of sub-circuit performance parameters. As another example, the user may identify certain sub-circuit performance parameters for the sub-circuit being created.[0063] At block 602, one or more sub-circuit performance parameters may be provided for optimization. The one or more sub-circuit performance parameters for optimization may be provided along with the other sub-circuit performance parameters of the set of sub-circuit performance parameters. For example, an algorithm may optimize one or more sub-circuit performance parameters from the set of sub-circuit performance parameters identified at block 520 to help enhance the performance of the sub-circuit. Alternatively, the set of sub-circuit performance parameters identified at block 520 may be provided, for example, to attempt to optimize a topology of the sub-circuit.
As another example, a user may provide the set of sub-circuit performance parameters and identified sub-circuit type for the sub-circuit being created. In certain cases, an indication of a sub-circuit type and/or sub-circuit topology may also be provided. Alternatively, the sub-circuit type may be inferred, for example, based on the sub-circuit performance parameters included in the set of performance parameters. In yet other cases, the sub-circuit may be optimized based on properties of the components within the topologies, for example, such as based on size or number of components within topologies of the sub-circuit type.[0064] The topology of a sub-circuit refers to a specific arrangement of electrical components of a sub-circuit. For a sub-circuit type, there may be many practical topologies for implementing the sub-circuit. For example, FIGs. 7A-7B illustrate a set of different topologies for an input (or gain) stage sub-circuit type.[0065] At block 604, an optimized sub-circuit may be identified. For example, based on the sub-circuit topology and optimized sub-circuit performance parameters, new sub-circuit physical parameters may be determined for electrical components of the sub-circuit by selecting an appropriate ML model based on the sub-circuit topology and inputting the optimized sub-circuit performance parameters to the ML model to obtain new sub-circuit physical parameters for the sub-circuit topology. In certain cases, the sub-circuit topology of the optimized sub-circuit may be the same as the original sub-circuit topology. In other cases, the sub-circuit topology may be optimized. For example, the optimized sub-circuit performance parameters may be input into multiple ML models of the sub-circuit type to generate multiple sets of sub-circuit physical parameters for multiple sub-circuit topologies of the sub-circuit type. A sub-circuit topology of the multiple sub-circuit topologies may then be selected by an optimization function.
The optimization function may be any known optimization technique, such as cost function, loss function, etc. As an example, the optimization function may select a sub-circuit topology based on a least number of electrical components with sub-circuit physical parameters of those electrical components within a certain range, the range selected for ease of manufacture based on the first process technology. At block 524, this new optimized circuit may be simulated to verify that the new circuit meets the design specification, and if the design specifications are met, the representation of the new circuit may be output at block 526.[0066] In certain cases, one or more known sub-circuits may be identified. While there may be multiple ways to design a particular set of electrical components to perform the specific purpose of a sub-circuit, in practice, there may be a limited number of practical electrical component arrangements (e.g., topologies) sufficiently robust to be useable for expected environmental conditions (e.g., temperature range, humidity range, operating voltage, etc.) for a given process technology. For example, FIGs. 7A-7B illustrate an example set 700 of known topologies of input or gain stage for a given process technology, in accordance with aspects of the present disclosure. Of note, the set 700 of known topologies is not exhaustive. Rather, the set 700 may include topologies known to be workable and/or practically useable. In certain cases, the set 700 of known topologies for a particular sub-circuit may be predetermined, at least in part, based on expert knowledge and/or experience as to what topologies are workable and/or practically useable.[0067] In certain cases, the set 700 of known topologies may not be fixed and additional topologies may be added as needed. For example, as additional topologies are identified, these additional topologies may be added manually. 
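The example criterion given above for the optimization function (fewest electrical components, with physical parameters inside a manufacturable range) can be sketched as a simple cost function over the candidate topologies. The names, the range bounds, and the tuple-ordering trick are illustrative assumptions:

```python
def select_topology(candidate_params, lo=0.1, hi=10.0):
    """Pick one sub-circuit topology from several candidates.

    candidate_params maps a topology name to the list of predicted
    physical parameter values (e.g., W/L ratios) for its components.
    The cost prefers topologies with no out-of-range values first,
    then topologies with fewer electrical components."""
    def cost(name):
        values = candidate_params[name]
        out_of_range = sum(1 for v in values if not lo <= v <= hi)
        return (out_of_range, len(values))
    return min(candidate_params, key=cost)

choice = select_topology({
    "five_transistor_ota": [1.0, 2.0, 3.0, 4.0, 5.0],
    "simple_pair":         [1.5, 2.5],
    "wide_swing":          [0.01, 2.0],   # 0.01 falls outside the range
})
```

Here `choice` is the two-component topology whose parameters all fall within the manufacturable range, matching the selection behavior the paragraph describes.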
In other cases, additional topologies may be identified, for example, by noting components and their connections of a new topology candidate that are not identified as a part of a known topology and matching this new topology candidate against a listing of other topology candidates previously not recognized as a part of a known topology. If there is a match, these candidate topologies may be surfaced to a user. Alternatively, a set of sub-circuit performance parameters may be algorithmically determined for the candidate topology, as described above. If the set of sub-circuit performance parameters matches the set of sub-circuit performance parameters for the corresponding type of sub-circuit, the candidate topology may be added to the set 700 of known topologies. In certain cases, sets of known topologies may be organized based on different types of sub-circuits, or a single set of known sub-circuits may include topologies for all of the types of sub-circuits.[0068] Sub-Circuit Identification[0069] FIG. 8 is a system diagram illustrating an overview of technique 800 for designing a new analog circuit from an original analog circuit, in accordance with aspects of the present disclosure. In certain cases, technique 800 may be implemented in software as one or more software programs which may include various modules. While technique 800 is described in the context of an embodiment organized with multiple modules, rules, tools, libraries, etc., it may be understood that this organization has been chosen for clarity and other embodiments may perform the techniques described in technique 800 using differing organizations. In technique 800, an existing analog circuit is described by a first data object representing an original circuit 802. A data object may be a location or region of storage or memory that contains a value or group of values.
A data object may include an electronic file in a file system, a block storage, or any other type of electronic storage that can store data. The original circuit may be a schematic, an electrical diagram, netlist, HDL, or any type of representation or design of a circuit (e.g., circuit design). In addition, the original circuit may be a subset of a larger circuit (e.g., an integrated circuit). The first data object representing the original circuit 802 may be any type of electronic representation or circuit design of a circuit. The first data object representing the original circuit 802 may be associated with a first process technology, such as a current circuit manufacturing process. An indication of the current circuit manufacturing process may be obtained in any way. For example, the indication may be input by a user and/or extracted from the first data object. In certain embodiments, technique 800 may identify the first process technology from the first data object representing an original circuit 802, a circuit design associated with the circuit, a circuit block in the original circuit, a sub-circuit in the original circuit, or one or more electrical components in the original circuit.[0070] The first data object representing the original circuit 802 may include a representation of electrical components and the interconnections between the electrical components. Accordingly, the first data object representing the original circuit 802 describes how this circuit is designed in the current process technology. In certain cases, the first data object representing the original circuit 802 may be described as one or more netlists, HDL, or any other electronic representation of a circuit. A netlist is an electronic representation of electrical components in a circuit and the connection between the electrical components in the circuit.
In certain embodiments, the netlist may also include nodes that represent the connection between a first electrical component and a second electrical component in the circuit. The netlist may include multiple circuit blocks and may organize the circuit by each circuit block. In certain cases, the netlist, and corresponding circuit blocks, may be organized into portions that perform a particular task or function. In some embodiments, technique 800 may include a component that identifies circuit blocks in the first data object representing the original circuit 802. A circuit block parser 803 may parse the first data object to identify individual circuit blocks. A circuit block may be further parsed by a sub-circuit parser 804 to identify sub-circuits of the circuit block based on a set of sub-circuit parsing rules 806. In other embodiments, technique 800 may identify sub-circuits using the original circuit represented by the first data object. In certain embodiments, the original circuit represented by the first data object 802 is a circuit block.[0071] The sub-circuit parsing rules 806 may be based, at least in part, on the electrical components of the sub-circuit, physical parameters of the electrical components, how the electrical components of the sub-circuit are connected, what purpose the electrical components serve, what other sub-circuits or electrical components the identified sub-circuit is connected to, etc. In certain cases, the sub-circuit parsing rules 806 may first attempt to identify a sub-circuit based on the electrical components and connections of the electrical components. In the netlist, each electrical component is identified by type (e.g., a transistor (such as an NMOS transistor or PMOS transistor), capacitor, resistor, inductor, or any type of electrical component or device) and connections (e.g., coupling) of the electrical component are provided.
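A netlist of the kind described above can be represented minimally as a list of (name, type, node-map) entries, and the helper below recovers which components share a node, which is the starting point for grouping connected components. The data and names are illustrative, not taken from the disclosure:

```python
# A toy netlist: component name, component type, and the node each
# terminal connects to (a two-transistor mirror plus a bias resistor).
netlist = [
    ("M1", "nmos", {"d": "n1", "g": "n1", "s": "gnd"}),   # diode-connected
    ("M2", "nmos", {"d": "out", "g": "n1", "s": "gnd"}),
    ("R1", "resistor", {"a": "vdd", "b": "n1"}),
]

def connected_components(netlist, name):
    """Names of components sharing at least one node with `name`."""
    target_nodes = next(set(nodes.values())
                        for comp, _, nodes in netlist if comp == name)
    return sorted(comp for comp, _, nodes in netlist
                  if comp != name and target_nodes & set(nodes.values()))
```

For example, `connected_components(netlist, "M1")` finds both `M2` (shared gate node and ground) and `R1` (shared bias node), which is exactly the neighborhood a sub-circuit parser would examine first.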
The parsing rules 806 may, for example, parse the netlist to group a first electrical component with one or more other electrical components that the first electrical component is connected to, and attempt to match this group of electrical components against a set of known topologies, an example of which is shown in FIGs. 7A-7B. As an example, the rules may indicate that if a certain electrical component is a transistor with a source connected to a certain electrical component or sub-circuit, a drain connected to another certain electrical component or sub-circuit, and a gate connected to another transistor, the other transistor having certain connections, then this certain set of electrical components is a particular topology of an input or gain stage. In certain cases, a role (e.g., branch, diode connected, gain stage, cascade stage, etc.) and physical parameters (e.g., width (W), length (L), W/L ratio, etc.) of the electrical components may also be considered and recorded. For example, a current mirror sub-circuit block may include a first transistor which is diode connected and a slave current source second transistor. Although both the first and second transistors belong to the same sub-circuit block, the first and second transistors may play different roles in the sub-circuit block, may have different electrical component parameters, and may influence performance parameters of the structural block in different ways. The parsing may be repeated with additional other electrical components until either only one or no matching known topology is left. If only one matching topology is left, then the sub-circuit can be identified based on the matching topology. If no matching topology is left, then the last additional other electrical component added may be dropped.
By dropping this last additional other electrical component, multiple matching known topologies may be left, and conflict resolution may be performed to determine which known topology of the multiple matching known topologies is a best match. In certain embodiments, the netlist may identify one or more sub-circuits, and sub-circuit parsing rules may identify the sub-circuit using the identification of the sub-circuit by the netlist. [0072] Conflict resolution may take into account the electrical components of the group of electrical components as well as one or more connections (e.g., inputs and outputs) of the group of electrical components. In certain cases, the connections as between the electrical components of the sub-circuit may be considered, and if a unique match still cannot be found, then connections as between electrical components of the sub-circuit and other sub-circuits and/or other electrical components may be considered as well. For example, referring to FIGs. 2A and 2B, current mirror 220 may be identified as a current mirror as it includes a pair of transistors 222, which are connected, via a pair of resistors 224, to ground 226. Similarly, input stage 228 also includes a pair of transistors 230. But here, the pair of transistors 230 are connected to power supply line VDD 232 via other electrical components, allowing the input stage 228 to be identified as an input stage. These one or more connections may be compared to connections of the multiple matching known topologies to identify the best matching known topology. In certain cases, if no matching known topology is found, the group of electrical components may be flagged for later review and/or the electrical components may be individually analyzed.
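The grow-until-unique matching loop described in paragraphs [0071]-[0072] can be sketched with a crude component-type signature standing in for full connectivity matching. Real parsing rules would also compare connections between components, as the conflict-resolution discussion above explains; all names and signatures here are illustrative assumptions:

```python
from collections import Counter

# Known topologies keyed by name; the value is a multiset of component
# types, a deliberately crude signature for this sketch.
KNOWN = {
    "current_mirror": ["nmos", "nmos"],
    "input_stage": ["nmos", "nmos", "pmos", "pmos"],
}

def candidates(group):
    """Known topologies whose signature still contains the group."""
    g = Counter(group)
    return [name for name, sig in KNOWN.items()
            if all(Counter(sig)[t] >= n for t, n in g.items())]

def identify(component_types):
    """Grow the group one component at a time until exactly one known
    topology matches; if none match, drop the last component added.
    Connection-based conflict resolution is elided here."""
    group = []
    for ctype in component_types:
        group.append(ctype)
        cands = candidates(group)
        if len(cands) == 1:
            return cands[0]
        if not cands:
            group.pop()   # drop the last additional component
            break
    cands = candidates(group)
    # Prefer a topology whose signature the group matches exactly.
    exact = [n for n in cands if Counter(KNOWN[n]) == Counter(group)]
    return exact[0] if len(exact) == 1 else None
```

Adding a fourth transistor narrows two candidates down to the input stage, while a group that overshoots (e.g., picking up a neighboring resistor) drops the last component and resolves to the current mirror.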
In certain cases, rather than performing a sub-circuit identification, each electrical component may be individually identified based on connections to the electrical component as well as a role of the electrical component in a functional circuit block.[0073] Sub-Circuit Performance Parameters[0074] Once a sub-circuit has been identified, a set of sub-circuit performance parameters may be determined based on the identification. In certain embodiments, the set of sub-circuit performance parameters may be determined based on the identified function of the sub-circuit, the circuit block, or the analog circuit. FIG. 9 is a chart illustrating sets of sub-circuit performance parameters for certain sub-circuits 900, in accordance with aspects of the present disclosure. How a sub-circuit performs may be described by numerous sub-circuit performance parameters, such as transconductance (Gm), channel conductance (GDS), minimum drain to source voltage at which current saturates (Vdsat), drain current mismatch (Idmm), threshold voltage mismatch (Vtmm), output impedance (r0), voltage at a bulk substrate, voltage at a drain, etc. In certain cases, each type of sub-circuit may be associated with a set of sub-circuit performance parameters.[0075] In certain embodiments, the sets of sub-circuit performance parameters may be defined per type of sub-circuit. Specific sub-circuit performance parameters included in the set of sub-circuit performance parameters may vary from one type of sub-circuit to another. Certain sub-circuit performance parameters 904 may be more relevant for a particular sub-circuit type than for another sub-circuit type. For example, while current mirrors may have a certain transconductance value, the transconductance value of a current mirror may be relatively less important to the function of current mirrors 902.
Rather, sub-circuit performance parameters 904 more relevant to the function of current mirrors 902, such as channel conductance, minimum drain to source voltage at which current saturates, and Idmm, may be included in the set of sub-circuit performance parameters for current mirrors. As another example, the set of sub-circuit performance parameters for a differential pair 906 may include the sub-circuit performance parameters 904 of transconductance (Gm), channel conductance (GDS), and threshold voltage mismatch (Vtmm). The sub-circuit performance parameters of the set of sub-circuit performance parameters for a particular sub-circuit may be predetermined. In certain cases, the specific sub-circuit performance parameters of the set of sub-circuit performance parameters for a particular sub-circuit may be determined, at least in part, based on expert knowledge and/or experience. In other embodiments, the relevant sub-circuit performance parameters in the set of sub-circuit performance parameters are dynamically identified by the identified sub-circuit, the identified function of the sub-circuit, the circuit block, the function of the circuit block, the circuit, or the function of the circuit. In addition, the relevant sub-circuit performance parameters in the set of sub-circuit performance parameters for a type of sub-circuit may vary based on the identified sub-circuit.[0076] Returning to FIG. 8, operating simulations 808 (e.g., operating point simulations) may be performed, in accordance with aspects of the present disclosure. For the operating simulations 808, the circuit, or portions thereof, may be simulated in circuit simulation software to determine sub-circuit operational parameters for one or more sub-circuits of the circuit. For example, a circuit block and/or sub-circuit of the original circuit 802 may be simulated in a circuit simulator, such as a SPICE simulator, to determine the sub-circuit operating point information for the sub-circuit.
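The per-type parameter sets charted in FIG. 9 reduce naturally to a lookup table, with component-level parameters as the fallback mentioned in paragraph [0056]. The names and set contents below follow the examples in the text but are otherwise illustrative:

```python
# Predetermined performance-parameter sets per sub-circuit type,
# following the FIG. 9 examples discussed above.
PERFORMANCE_SETS = {
    "current_mirror": {"GDS", "Vdsat", "Idmm"},
    "differential_pair": {"Gm", "GDS", "Vtmm"},
}

def performance_set(sub_circuit_type, component_level=frozenset()):
    """Return the predetermined set for a type, falling back to
    component-level parameters when no set has been defined."""
    return PERFORMANCE_SETS.get(sub_circuit_type, component_level)
```

A caller asking for an undefined type (say, a voltage ladder) receives whatever component-level parameters it supplies, mirroring the fallback behavior described earlier.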
The sub-circuit operational parameters, which may include operating point information and bias point information, refer to a voltage or current (e.g., drain-source voltage (VDS), gate-source voltage (VGS), etc.) at a particular point of an electrical component with no input signal applied. In certain embodiments, operational parameters may include information or parameters corresponding to one or more operating points or bias points of an electrical component, a sub-circuit, a circuit block, or a circuit.[0077] In certain cases, the operational parameters may be based on identified sub-circuits. For example, sub-circuit operational parameters may be generated for identified sub-circuits based on a simulation of the circuit block and/or the sub-circuit. In certain cases, the operational parameters may also be generated on an electrical component level. For example, if certain electrical components of the original circuit 802 were not included in an identified sub-circuit, operational parameters may be generated for those electrical components. In other cases where electrical components are identified, operational parameters may be generated for the electrical components of the original circuit 802. In certain cases, the operational parameters may be used, along with the sub-circuit physical parameters (obtained, for example, from the data object) and sub-circuit type information, by a first process technology characterization module 810 to determine sub-circuit performance parameter values for the set of sub-circuit performance parameters associated with the identified sub-circuit or electrical component for the first process technology associated with the original circuit.[0078] The first process technology characterization module 810, in certain cases, creates, trains, stores, and provides machine learning models for predicting sub-circuit performance parameters based on operating information and sub-circuit physical parameters.
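The operating point information described above could be represented as a small record per electrical component. The class and field names below are illustrative assumptions; only the quantities (VDS, VGS, drain current bias) come from the text:

```python
from dataclasses import dataclass

# Illustrative container for sub-circuit operational parameters: the
# operating/bias point of one electrical component with no input signal
# applied. The class itself is an assumption made for this sketch.
@dataclass(frozen=True)
class OperatingPoint:
    component_id: str   # which electrical component the point belongs to
    vds: float          # drain-source voltage, volts
    vgs: float          # gate-source voltage, volts
    id_bias: float      # drain current bias, amps

def is_saturated(op: OperatingPoint, vth: float) -> bool:
    """First-order saturation check: VDS >= VGS - Vth (overdrive voltage)."""
    return op.vds >= op.vgs - vth
```

A real flow would populate such records from a circuit simulator's operating point analysis rather than by hand.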
The first process technology characterization module 810 may include trained machine learning (ML) models 812 in a ML library 506. In certain cases, there may be ML models corresponding to the known topologies that the technique 800 is configured to operate on. The trained ML model 812 may be stored and represented in a data object. The trained ML model 812 may be stored in the ML library 506. The ML library 506 may store and provide access to a plurality of ML models. In certain embodiments, the trained ML model 812 may be any set of rules, instructions, algorithms, or any type of data object that recognizes patterns.[0079] A ML model 812 may be trained 504 based on a set of simulated sub-circuits 502. In certain cases, a ML model 812 may be trained based on variations of the sub-circuit for a first (e.g., source) process technology. For example, a first sub-circuit topology of the known sub-circuit topologies may be simulated 502 using a variety of sub-circuit physical parameters and operational parameters for the first process technology. This simulation may be performed using a circuit simulator, such as a SPICE simulator. The simulation generates a set of sub-circuit performance parameters corresponding to variants of the sub-circuit physical parameters and operational parameters for the first topology in the first process technology. The ML model for the first sub-circuit topology may then be trained 504 using the variants of the sub-circuit physical parameters and operational parameters to predict the corresponding sub-circuit performance parameter for that ML model for the first process technology. The simulated sub-circuits 502 and the results of the simulated sub-circuits 502 may be stored and represented in a data object.[0080] In certain cases, the ML model 812 may be stored in a ML model library 506.
The ML model 812 may use a variety of ML modeling techniques, including linear regression models, large margin classifiers (e.g., support vector machines), principal component analysis, tree-based techniques (e.g., random forest or gradient boosted trees), or neural networks. Linear regression models may be ML models which assume a linear relationship between input parameters and output. Large margin classifiers may be ML models which return a distance (e.g., margin) of an output from a decision boundary. Support vector machine ML models plot data items in n-dimensional space based on the n features of the data input to find a hyperplane that differentiates the data items into different classes. Principal component analysis ML models create a matrix of how features of the data items relate and determine which features are more important. Random forest ML models create a large group of decision trees for a class prediction given a data item and generate a prediction from the group of decision trees. The prediction that is the most common in the group of decision trees is the class prediction. Gradient boosted tree ML models use a set of linked and layered decision trees where predictions are based on a weighted sum of predictions made by each layer of the group of decision trees. Neural network ML models use a set of linked and layered functions (e.g., nodes, neurons, etc.) which are weighted to evaluate input data.
Neural network ML modeling techniques may include fully connected networks (where every neuron of a layer is connected to every neuron of the next layer), fully connected networks with regularization (where a regularization function is added to a fully connected neural network to help avoid overfitting), and fully connected networks with dropout (which removes nodes to simplify the network), as well as optimizers, such as adaptive moment estimation optimizer enhanced neural networks (which reduce data parameters of the network using gradient descent algorithms).[0081] A particular type of sub-circuit implemented in a given process technology may be associated with a practical range of sub-circuit physical parameters (e.g., physical parameters) and operational parameters for the first process technology. The practical range of sub-circuit physical parameters may be provided, for example, by a user, and the practical range may be based on limitations of a process technology. For example, a current mirror sub-circuit implemented in the first process technology may have a range of acceptable input reference currents (e.g., 10nA-20mA), a minimum and maximum transistor width (e.g., 1μm-100μm) and length (e.g., 0.1μm-10μm) for electrical components of the sub-circuit, etc. In other cases, the practical range of sub-circuit parameters may be automatically determined, for example, by analyzing a range of parameters associated with a process technology or by simulating the circuit and/or sub-circuit across the range of parameters until the circuit and/or sub-circuit fails in the simulation, etc. A particular sub-circuit topology may then be simulated 502 across a selection of the practical range of sub-circuit physical parameters (e.g., physical parameters) and sub-circuit operational parameters to generate sub-circuit performance parameters (e.g., performance parameters) associated with the particular sub-circuit topology for the first process technology.
For example, a particular current mirror topology, such as that shown in FIG. 4, may be simulated 502 with varying combinations of sub-circuit physical parameters and sub-circuit operational parameters, such as W/L of electrical components, input and output currents, impedance, operating region (e.g., conditions), N-type/P-type, etc., to generate sub-circuit performance parameters (e.g., performance parameters) associated with the respective physical parameters and respective sub-circuit operational parameters. The sub-circuit performance parameters include, but are not limited to, those sub-circuit performance parameters discussed in conjunction with FIG. 9, such as transconductance (Gm), channel conductance (GDS), minimum drain to source voltage at which current saturates (Vdsat), drain current mismatch (Idmm), threshold voltage mismatch (Vtmm), output impedance (r0), voltage at a bulk substrate, voltage at a drain, etc. In certain embodiments, the sub-circuit may be simulated using varying combinations of sub-circuit parameters (including physical parameters and performance parameters) and operational parameters to generate additional sub-circuit performance parameters. Moreover, in certain cases, the combinations of sub-circuit physical parameters, sub-circuit performance parameters, and sub-circuit operational parameters are not exhaustive; rather, combinations of the sub-circuit physical parameters, sub-circuit performance parameters, and sub-circuit operational parameters are selected and simulated to cover Gaussian and uniform distributions encompassing the cases typically identified in analog semiconductor technology manufacturing variations.
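The sweep over combinations of physical and operational parameters described above can be sketched as follows. The simulator here is a toy closed-form stand-in (an assumption for this sketch); the real flow would invoke a circuit simulator such as SPICE for each combination:

```python
import itertools
import math

def simulate_current_mirror(w, l, i_ref):
    # Stand-in for a SPICE run: a toy first-order model used only to
    # illustrate the sweep. The mobility*Cox constant (200e-6) is an
    # arbitrary illustrative value, not a disclosed parameter.
    gm = math.sqrt(2.0 * 200e-6 * (w / l) * i_ref)
    return {"Gm": gm, "W": w, "L": l, "Iref": i_ref}

def sweep_practical_range(widths, lengths, currents):
    """Simulate every combination of physical/operational parameters
    across a selection of the practical range."""
    return [simulate_current_mirror(w, l, i)
            for w, l, i in itertools.product(widths, lengths, currents)]

dataset = sweep_practical_range([1e-6, 10e-6], [0.1e-6, 1e-6], [10e-9, 1e-3])
```

Each entry of `dataset` pairs one variant of the input parameters with the performance parameter it produced, which is exactly the form of record the training step needs.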
For example, operating points may be selected substantially uniformly across the practical range of sub-circuit physical parameters, with additional operating points selected in ranges of sub-circuit physical parameters most commonly used (or expected to be used) for a given sub-circuit or circuit.[0082] In certain cases, the set of sub-circuit physical parameters, sub-circuit operational parameters, and generated sub-circuit performance parameters resulting from the simulations may be used to train a ML model 504 corresponding to the simulated sub-circuit topology.[0083] Use of ML Models[0084] A ML model for a particular sub-circuit topology in a particular process technology may be trained based on the sub-circuit physical and operational parameters and the corresponding generated sub-circuit performance parameters. As discussed above, multiple sets of sub-circuit physical parameters, operational parameters, and corresponding generated sub-circuit performance parameters are obtained across a practical range of sub-circuit physical parameters. These sets of parameters may be divided into a training set and a test set. The ML model 812 may be trained using the training set, and the training 504 of the ML model 812 may be verified by the test set. To train the ML model 812, certain parameters may be provided as the input parameters to the ML model 812; the ML model 812 then makes certain predictions based on the input parameters, and these predictions are compared to the known correct output parameters found from the simulation. Based on this comparison, the ML model 812 may be adjusted, for example by adjusting node weights, to allow the ML model 812 to make predictions that closely match the known correct output parameters.
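The train/test division and verification loop described above can be sketched as follows. The nearest-neighbour "model" is a deliberately simple stand-in (an assumption for this sketch) for the neural networks or gradient boosted trees the text names:

```python
import random

def train_test_split(rows, test_fraction=0.25, seed=0):
    """Divide simulated parameter sets into a training set and a test set."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_test = max(1, int(len(rows) * test_fraction))
    return rows[n_test:], rows[:n_test]

class NearestNeighbourModel:
    # Stand-in "ML model": predicts the performance parameter of the
    # closest training point. A real flow would train e.g. a neural
    # network or gradient boosted trees instead.
    def fit(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)
        return self
    def predict(self, x):
        dists = [sum((a - b) ** 2 for a, b in zip(x, x0)) for x0 in self.xs]
        return self.ys[dists.index(min(dists))]

# toy data: feature = (W/L, ID) stand-in, target = a performance parameter
rows = [((i, i), float(i)) for i in range(8)]
train, test = train_test_split(rows)
model = NearestNeighbourModel().fit([x for x, _ in train], [y for _, y in train])
# verification: compare predictions on the held-out test set to known outputs
errors = [abs(model.predict(x) - y) for x, y in test]
```

The `errors` list is the comparison against known correct outputs; a real flow would iterate training (e.g., adjusting node weights) until these errors are acceptably small.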
The ML model training 504 may then be verified by using the ML model 812 to make predictions using the test set and then comparing the predictions output by the ML model 812 to the known correct output associated with the test set.[0085] The sub-circuit parameters (including sub-circuit physical parameters, sub-circuit performance parameters, and sub-circuit operational parameters) for the particular sub-circuit topology may be used to train a ML model 812 for the particular sub-circuit topology for the first process technology. For example, the sub-circuit physical parameters and sub-circuit operational parameters from the simulated particular sub-circuit topology may be used as a training set to train a ML model 812 to predict certain sub-circuit performance parameters when presented with a set of sub-circuit physical parameters and sub-circuit operational parameters for the particular sub-circuit topology in the first process technology. This ML model 812 may be tested using the test set to verify the training. For example, sub-circuit physical parameters and operational parameters of the test set may be input to the ML model 812 to produce predicted sub-circuit performance parameters. These predicted sub-circuit performance parameters are then compared against the known sub-circuit performance parameters that were generated by simulating the sub-circuit using the associated sub-circuit physical parameters and operational parameters to verify that the ML model 812 is producing accurate predictions. Techniques for training the ML model 812 are discussed in greater detail below.[0086] Once trained, this ML model 812 for the particular sub-circuit topology may be stored in the ML model library 506 along with other ML models for other sub-circuit topologies for the first process technology.
In certain cases, the ML model library 506 may include trained ML models for identified sub-circuit topologies supported by an embodiment of technique 800.[0087] Given sub-circuit operational parameters, along with sub-circuit physical parameters for an identified sub-circuit of the original circuit 802, the first process technology characterization module 810 may locate the corresponding trained ML model 812 for the identified sub-circuit from the ML model library 506 and use the located ML model to predict certain sub-circuit performance parameters 818 for the identified sub-circuit.[0088] In certain cases, a second process technology characterization module 820 is similar to the first process technology characterization module 810. For example, the second process technology characterization module 820 may also include trained ML models 822 in a ML library 512. In certain cases, there may be ML models corresponding to the known topologies that the technique 800 is configured to operate on. The trained ML model 822 may be stored and represented in a data object. The trained ML model 822 may be stored in the ML library 512. The ML library 512 may store and provide access to a plurality of ML models. In certain embodiments, the trained ML models 822 may be any set of rules, instructions, algorithms, or any type of data object that recognizes patterns. It may be understood that the second process technology characterization module may include ML models associated with any number of circuit process technologies. In certain cases, the second process technology characterization module may include ML models associated with the first process technology, for example to help optimize a sub-circuit.[0089] A ML model 822 may be trained 510 based on a set of simulated sub-circuits 508. In certain cases, the ML model 822 may be trained based on variations of a sub-circuit for a second (e.g., target) process technology.
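Locating the trained model for an identified sub-circuit, as described above, amounts to a keyed lookup. In this sketch the "trained models" are plain callables and the library class is an illustrative assumption:

```python
# Illustrative ML model library keyed by (process technology, topology).
# Model objects here are plain callables standing in for trained models.
class MLModelLibrary:
    def __init__(self):
        self._models = {}

    def store(self, technology, topology, model):
        self._models[(technology, topology)] = model

    def locate(self, technology, topology):
        model = self._models.get((technology, topology))
        if model is None:
            raise LookupError(f"no trained model for {topology} in {technology}")
        return model

library = MLModelLibrary()
# hypothetical trained model: physical/operational params -> performance params
library.store("tech_A", "current_mirror",
              lambda params: {"Gm": 2.0 * params["W"] / params["L"]})

model = library.locate("tech_A", "current_mirror")
predicted = model({"W": 10e-6, "L": 1e-6})
```

The characterization module would call `locate` with the identified sub-circuit's topology and then feed the extracted physical and operational parameters to the returned model.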
For example, the first sub-circuit topology of the known sub-circuit topologies may be simulated 508 using a variety of sub-circuit physical parameters and operational parameters for the second process technology. This simulation may be performed using a circuit simulator, such as a SPICE simulator. The simulation generates a set of sub-circuit performance parameters corresponding to each variant of the sub-circuit physical parameters and sub-circuit operational parameters for the first topology in the second process technology. The ML model 822 for the first sub-circuit topology may then be trained 510 using the variants of the sub-circuit physical parameters and operational parameters to predict the corresponding sub-circuit performance parameter for that ML model for the second process technology. The simulated sub-circuits 508 and the results of the simulated sub-circuits 508 may be stored and represented in a data object. It may be understood that for a given process technology, multiple sets of sub-circuit physical parameters, sub-circuit operational parameters, and corresponding generated sub-circuit performance parameters are obtained across a practical range of sub-circuit physical parameters, and that the same multiple sets may be used for ML model training 504 or ML model training 510. In certain cases, the ML model 822 may be stored in a ML model library 512. The ML model 822 may also use a variety of ML modeling techniques, including linear regression models, large margin classifiers (e.g., support vector machines), principal component analysis, tree-based techniques (e.g., random forest or gradient boosted trees), or neural networks. A particular sub-circuit topology may be simulated 508 across a selection of the practical range of certain sub-circuit physical parameters and operational parameters to generate additional sub-circuit performance parameters associated with the particular sub-circuit topology for the second process technology.
The practical range of sub-circuit physical parameters may be provided, for example, by a user, and the practical range may be based on limitations of a process technology. For example, a particular current mirror topology, such as that shown in FIG. 3, may be simulated with varying combinations of sub-circuit physical parameters and sub-circuit operational parameters, such as W/L of electrical components, input and output currents, impedance, operating region (e.g., conditions), N-type/P-type, etc., to generate a set of sub-circuit performance parameters associated with the respective sub-circuit physical parameters (e.g., physical parameters and/or operational parameters) for the second process technology. In other cases, the practical range of sub-circuit parameters may be automatically determined, for example, by analyzing a range of parameters associated with a process technology or by simulating the circuit and/or sub-circuit across the range of parameters until the circuit and/or sub-circuit fails in the simulation, etc. The sub-circuit physical parameters, operational parameters, and generated sub-circuit performance parameters resulting from the simulations for the particular sub-circuit topology may then be used to train a ML model 510 for the particular sub-circuit topology for the second process technology. For example, the sub-circuit physical parameters, sub-circuit operational parameters, and corresponding generated sub-circuit performance parameters resulting from the simulated current mirror topology may be used as a training set to train a ML model to predict sub-circuit physical parameters and sub-circuit operational parameters when presented with a set of sub-circuit performance parameters for the particular current mirror topology in the second process technology.[0090] The ML model 822 may be trained using the training set, and the training 510 of the ML model 822 may be verified by the test set.
To train the ML model 822, certain parameters may be provided as the input parameters to the ML model 822; the ML model 822 then makes certain predictions based on the input parameters, and these predictions are compared to the known correct output parameters found from the simulation. Based on this comparison, the ML model 822 may be adjusted, for example by adjusting node weights, to allow the ML model 822 to make predictions that closely match the known correct output parameters. For example, after training, the ML model 822 may predict sub-circuit physical parameters and sub-circuit operational parameters when receiving a set of sub-circuit performance values for a current mirror topology in the second process technology.[0091] This ML model may be tested using the test set to verify the training. The training 510 may be verified by using the ML model 822 to make predictions using the test set and then comparing the predictions output by the ML model 822 to the known correct output associated with the test set. For example, sub-circuit performance parameters of the test set may be input to the ML model to produce predicted sub-circuit physical parameters and sub-circuit operational parameters. These predicted parameters are then compared against the known sub-circuit physical parameters and sub-circuit operational parameters used when simulating the sub-circuit to verify that the ML model is producing accurate predictions.[0092] Once trained, this ML model for the particular sub-circuit topology may be stored in the set of trained ML models 822 (e.g., another ML model library) along with other ML models for other sub-circuit topologies for the second process technology. In certain cases, the set of trained ML models 822 may include trained ML models for each identified sub-circuit.
In certain cases, the ML model library 506 and the ML model library 512 may be combined into a single model library.[0093] As indicated above, sub-circuit parameters (including sub-circuit physical parameters, sub-circuit performance parameters, and sub-circuit operational parameters) for a particular sub-circuit topology for the second process technology may be used to train the ML model 822 for the particular sub-circuit topology for the second process technology. For example, certain sub-circuit performance parameters may be used as a training set to train a ML model 822 to predict certain other sub-circuit physical parameters and sub-circuit operational parameters. This ML model 822 may be tested using the test set to verify the training. For example, the predicted other sub-circuit physical parameters and sub-circuit operational parameters are compared against the known sub-circuit physical parameters and operational parameters, used for simulating the sub-circuit to generate the sub-circuit performance parameters, to verify that the ML model 822 is producing accurate predictions for the particular sub-circuit topology. Once trained, the ML model 822 for the particular sub-circuit topology may be stored in the ML model library 512 along with other ML models for other sub-circuit topologies for the second process technology. In certain cases, the ML model library 512 may include trained ML models for identified sub-circuit topologies supported by an embodiment of technique 800.[0094] Thus, given sub-circuit performance parameters for an identified sub-circuit of the original circuit, as represented by the data object 802, the second process technology characterization module 820 may locate the corresponding trained ML model 822 for the identified sub-circuit from the ML model library 512 and use the trained ML model 822 to predict 828 sub-circuit physical parameters and/or operational parameters for the identified sub-circuit.
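The overall conversion chain (first-technology forward model to performance parameters, then second-technology inverse model back to physical parameters) can be sketched end to end. Both "models" below are toy closed-form callables assumed for illustration; the real flow would use the trained ML models 812 and 822:

```python
def tech_a_forward(physical):
    # Stand-in for ML model 812: physical params -> performance params
    # for the first (source) process technology. Toy relation assumed.
    return {"Gm": physical["W"] / physical["L"] * 1e-3}

def tech_b_inverse(performance, fixed_length=0.5e-6):
    # Stand-in for ML model 822: performance params -> physical params
    # for the second (target) process technology. Inverts the toy
    # tech-B relation Gm = (W/L) * 2e-3 for W at a fixed L.
    return {"L": fixed_length,
            "W": performance["Gm"] / 2e-3 * fixed_length}

def convert_sub_circuit(physical_a):
    """Chain the two characterization steps: source physical params in,
    target physical params out, matched via performance parameters."""
    return tech_b_inverse(tech_a_forward(physical_a))

converted = convert_sub_circuit({"W": 10e-6, "L": 1e-6})
```

Because the target technology's relation between W/L and Gm differs from the source's, the converted device sizes differ while the performance parameter is preserved, which is the point of the two-stage scheme.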
Once a set of sub-circuit physical parameters has been determined for the components of the identified sub-circuit, the data object representation of the identified sub-circuit is converted to the second process technology using the set of sub-circuit physical parameters for the corresponding components of the sub-circuit. For example, a netlist for the converted sub-circuit may be generated using the determined sub-circuit physical parameters.[0095] Formatting tool 830 may correct formatting, connection drawing, and/or mapping issues that may arise during the conversion. In certain cases, the formatting tool 830 may extract certain formatting, connection, and/or mapping information from the original circuit design 802 for use in correcting the converted data object (e.g., netlist). In certain cases, this netlist may be connected to or appended to another netlist, such as a netlist for a converted version of the circuit block and other circuit blocks, if needed, to output a data object representing a new circuit 832 for the second process technology. In certain cases, the converted sub-circuit, converted circuit block, and/or new circuit in the data object representing the new circuit 832 may be simulated, for example in a circuit simulator, to verify that the performance of the new circuit in the data object representing the new circuit 832 is within a certain range of the performance of the original circuit 802. This range of performance may vary depending on the intended purpose of the new circuit 832, and this range of performance may be defined in a variety of ways, such as by a circuit designer, engineer, etc. As an example, a new control circuit may be tested to ensure that the new control circuit has an output voltage or current within a certain range, such as a percentage, of a target voltage/current for a given input setting.[0096] As indicated above, the trained ML models may be stored in ML model libraries, for various process technologies.
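The performance-range verification described above can be sketched as a simple relative-tolerance check. The function name and the 5% default are illustrative assumptions; the text states the acceptable range would be set by a designer or engineer:

```python
def within_performance_range(original, converted, tolerance=0.05):
    """Check each simulated performance parameter of the converted circuit
    against the original circuit, allowing a relative tolerance (the
    acceptable range would in practice be defined by a circuit designer)."""
    for name, ref in original.items():
        value = converted.get(name)
        if value is None:
            return False                       # parameter missing entirely
        if ref != 0 and abs(value - ref) / abs(ref) > tolerance:
            return False                       # outside the allowed range
    return True

# e.g., a converted control circuit's output voltage vs. the original's
ok = within_performance_range({"Vout": 1.0}, {"Vout": 1.02})
```

This mirrors the control-circuit example in the text: the new circuit passes if its output voltage is within a percentage of the original's for the same input setting.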
The ML model libraries may refer to any number of data structures, objects, or processes used to organize, store, and/or retrieve the ML models from a non-transitory data storage medium. For example, ML library 506 and ML library 512 may be logical ML libraries within a single ML library (not shown) that includes a plurality of ML libraries associated with various process technologies. In certain cases, these ML model libraries may also be used as a part of designing new analog circuits. For example, an analog chip designer may want a particular sub-circuit with certain sub-circuit performance parameters. Rather than manually determining the physical parameters of each electrical component of the sub-circuit, a trained ML model corresponding to a particular topology of a sub-circuit may be selected from the ML model library. The sub-circuit performance parameters may be provided to the selected trained ML model and appropriate sub-circuit physical parameters determined by the selected trained ML model.[0097] In certain cases, one or more techniques may be used to select the trained ML model from the ML model library. For example, as executing a trained ML model is often substantially quicker than training the ML model, a given set of sub-circuit performance parameters may be provided to any number of, or all of, the trained ML models of the ML model library corresponding to a selected sub-circuit type. A trained ML model corresponding to a certain topology for the selected sub-circuit may then be selected from the trained ML models that were capable of producing appropriate sub-circuit physical parameters. For example, the trained ML model selected may correspond to the trained ML model with the fewest electrical components for the provided performance parameters.
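The run-all-candidates-and-pick selection strategy described above can be sketched directly. The candidate models here are toy callables that return physical parameters, or `None` when they cannot realize the requested spec (both the callables and the component counts are assumptions for this sketch):

```python
def select_topology(candidates, performance_params):
    """Run every candidate trained model for the sub-circuit type and pick,
    among those that yield usable physical parameters, the topology with
    the fewest electrical components.
    `candidates` maps topology name -> (component_count, model)."""
    usable = []
    for topology, (n_components, model) in candidates.items():
        physical = model(performance_params)
        if physical is not None:               # model could realize the spec
            usable.append((n_components, topology, physical))
    if not usable:
        raise LookupError("no topology satisfies the requested parameters")
    n_components, topology, physical = min(usable, key=lambda entry: entry[0])
    return topology, physical

# hypothetical candidate models for the "current mirror" sub-circuit type
CANDIDATES = {
    "simple_mirror":  (2, lambda p: {"W": 1.0} if p["Gm"] < 5 else None),
    "cascode_mirror": (4, lambda p: {"W": 2.0}),
}
chosen, physical = select_topology(CANDIDATES, {"Gm": 3})
```

Because trained models are cheap to execute, trying every topology and keeping the simplest feasible one is practical, as the text notes.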
As another example, ML model libraries may be used to select a particular topology for a sub-circuit by providing appropriate sub-circuit parameters, such as sub-circuit performance parameters, to the ML model library (or logic associated with the ML model library). A sub-circuit type may be provided, or may be inferred based on, for example, the specific sub-circuit performance parameters provided. Various (or all) ML models of different topologies associated with the sub-circuit type may then be run using the provided sub-circuit performance parameters to determine a set of topologies of the sub-circuit type that may be appropriate for use. A specific topology of the sub-circuit type may then be selected from the set of topologies. In certain cases, this selection may be performed by a user. In some cases, one or more topologies for the sub-circuit type may be selected or suggested to the user. Topologies of the set of topologies for the sub-circuit may be analyzed, for example, based on complexity, a cost function associated with various physical parameters, overall size, etc., to provide the selection or suggestion.[0098] In the example discussed above, physical parameters may be used by ML models of a first ML library 506 to predict sub-circuit performance parameters of a particular sub-circuit designed for a first process technology. These sub-circuit performance parameters may then be used by ML models of a second ML library 512 to generate sub-circuit physical parameters of the particular sub-circuit designed for a second process technology. Thus, each ML library is associated with a particular process technology.
Using different ML libraries for each process technology helps enable various scenarios, such as converting a circuit from one process technology to another process technology, designing circuits with portions using one process technology and other portions using another process technology, searching across many process technologies to determine which process technology is most appropriate (e.g., in terms of cost, performance, etc.) for a particular circuit, and so on. In certain cases, for example when such flexibility is not required, a single ML model which is trained to directly convert sub-circuit physical parameters of a sub-circuit in the first process technology to sub-circuit physical parameters of the sub-circuit in the second process technology may be used in place of the first and second ML models.[0099] It may be understood that while discussed with respect to a sub-circuit, other sub-circuits, such as electrical components, may also be simulated across a range of sub-circuit physical parameters to predict similar sub-circuit performance parameters for training ML models for the sub-circuits.[0100] Example ML Model[0101] FIG. 10 illustrates an example neural network ML model 1000, in accordance with aspects of the present disclosure. In certain embodiments, modeling analog circuits with ML models can be performed by using sub-circuit parameters as input parameters (e.g., features) of a ML model. In alternative embodiments, modeling analog circuits with ML models can be performed using sub-circuit physical parameters as the parameters of a ML model. The example neural network ML model 1000 is a simplified example presented to help explain how such a neural network ML model 1000 may be trained.
It may be understood that each implementation of a ML model may be trained or tuned in a different way, depending on a variety of factors including, but not limited to, the type of ML model being used, the parameters being used for the ML model, relationships among the parameters, desired speed of training, etc. In this simplified example, sub-circuit physical parameter values of W and L are parameter inputs 1002 and 1004 to the ML model 1000. Each layer (e.g., first layer 1006, second layer 1008, and third layer 1010) includes a plurality of nodes (e.g., neurons) and generally represents a set of operations performed on the parameters, such as a set of matrix multiplications. For example, each node represents a mathematical function that takes, as input (aside from the nodes of the first layer 1006), output from a previous layer and a weight. The weight is typically adjusted during ML model training and fixed after the ML model training. The specific mathematical function of the node can vary depending on ML model implementation. While the current example addresses three layers, in certain cases the ML model may include any number of layers. Generally, each layer transforms M number of input parameters to N number of output parameters. The parameter inputs to the first layer 1006 are output as input to the second layer 1008 with a set of connections. As each node of a layer (such as first layer 1006) outputs to each node in a subsequent layer (such as second layer 1008), ML model 1000 is a fully connected neural network. Other embodiments may utilize a partially connected neural network or another neural network design which may not connect each node of a layer to each node of a subsequent layer.[0102] In this example, first layer 1006 represents a function based on a set of weights that are applied to the input parameters (e.g., input parameters 1002 and 1004) to generate output from first layer 1006 that is input to the second layer 1008.
Different weights may be applied to the input received from each node of the previous layer by the subsequent layer. For example, for a node of the second layer 1008, the node applies weights to input received from nodes of the first layer 1006 and may apply a different weight to input received from each node of the first layer 1006. Nodes compute one or more functions based on the inputs received and the corresponding weights and output a number. For example, a node may use a linear combination function, which multiplies each input value from a node of the previous layer with a corresponding weight and sums across the results of the multiplications, coupled with a non-linear activation function, which acts as a floor for the resulting number for output. It may be understood that any known weighted function may be applied by the node within the scope of this disclosure. This output number may be input to subsequent layers, or, if the layer is a final layer, such as third layer 1010 in this example, the number may be output as a result (e.g., output parameter). In certain cases, the functions applied by the nodes of a layer may differ as between layers. The weights applied by a node may be adjusted during training based on a loss function, which is a function that describes how accurate the predictions of the neural network are as compared to the expected results; an optimization algorithm, which helps determine weight settings adjustments based on the loss function; and a backpropagation of error algorithm, which applies the weight adjustments back through the layers of the neural network.
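The per-node computation described above (weighted sum of the previous layer's outputs, then a non-linear floor) can be sketched as a tiny fully connected forward pass. The specific weights below are arbitrary illustrative values, not values from FIG. 10:

```python
def dense_layer(inputs, weights, biases):
    """One fully connected layer: each output node takes a weighted sum of
    every input (linear combination) followed by a ReLU activation, which
    acts as the non-linear floor described in the text."""
    outputs = []
    for node_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(node_weights, inputs)) + bias
        outputs.append(max(0.0, total))        # ReLU floor
    return outputs

def forward(x, layers):
    """Pass the inputs through each layer in turn; the final layer's
    output is the result (e.g., a predicted performance parameter)."""
    for weights, biases in layers:
        x = dense_layer(x, weights, biases)
    return x

# toy network: W and L as the two inputs, one 2-node layer, one output node
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # first layer: 2 -> 2
    ([[1.0, 1.0]], [0.1]),                      # final layer: 2 -> 1
]
prediction = forward([2.0, 1.0], layers)
```

Training would adjust the `weights` and `biases` entries via a loss function and backpropagation; only the fixed forward pass is shown here.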
Any optimization algorithm (e.g., gradient descent, mini-batch gradient descent, stochastic gradient descent, adaptive optimizers, momentum, etc.), loss function (e.g., mean-squared error, cross entropy, maximum likelihood, etc.), and backpropagation of error algorithm (e.g., static or recurrent backpropagation) may be used within the scope of this disclosure.[0103] Certain ML models, such as a neural network, may include hyperparameters. Hyperparameters of the ML model may refer to parameters that control the operation of the ML model which cannot be derived through training, such as a number of nodes in a layer, number of layers, learning rate, etc.

Enhanced ML Modeling Techniques

[0104] As indicated above, as there may be multiple topologies for multiple types of sub-circuits, enhancing ML modeling techniques to efficiently generate ML models for these topologies and sub-circuits may be helpful. Generating ML models for analog and hybrid circuits which accurately predict parameters of these circuits can be challenging, as analog and hybrid circuits can respond in highly non-linear ways as physical parameters of electrical components are varied. Additionally, modeling such behavior, for example as a neural network ML model, using current ML modeling techniques may require substantial training time and/or manual tuning of parameters of the model to achieve a desired accuracy. This in turn may make bulk generation of ML models challenging. To help streamline bulk ML model creation, ML model creation may be enhanced by including interaction parameters as input parameters in the ML model and performing dimensionality reduction on the input parameters using threshold stepwise selection.[0105] In certain cases, properties of the process technologies may influence the behavior of the analog circuits. To help address this, one or more process technology parameters which describe the behavior of the process technology may be included as input parameters to the ML model.
Examples of such process technology parameters may include oxide thickness, channel doping concentration, electron mobility, etc.[0106] To further help address the non-linearities, parameter interactions, such as interaction parameter 1012, may be added as input parameters to the ML model 1000. Interaction parameters represent, for example, one or more functions which describe how different input parameters may interact together. For example, a function A*B may have input parameters A and B. An interaction parameter C could be created, where C = A*B, which would represent how the model responds to changes based on the multiplication of parameters A and B. As another example, an interaction parameter D could be created, where D = √(A*B), which represents how the model responds to changes based on the square root of the multiplication of parameters A and B. In certain cases, these interactions may be based on circuit theory equations. As an example, an input parameter to a ML model may be based on the equation for determining transconductance of a CMOS transistor. In this example, a ML model, MLgm, may have input parameters such that MLgm = f(W, L, T, NCH, TOX, ID, VDS), where the input parameters respectively represent the electrical component width, electrical component length, temperature, N-channel doping concentration, oxide thickness, electrical component drain current bias, and the voltage across the drain-source terminals of the electrical component. One known nonlinear parameter interaction is the first order equation for transconductance, gm = √(2·μn·Cox·(W/L)·ID). The parameter interaction, √((W/L)·ID), may be determined from the input parameter data for W, L, and ID, and provided to the ML model as an additional input parameter, f1, such that MLgm = f(W, L, T, NCH, TOX, ID, VDS, f1). Adding the nonlinear interaction as an input to the ML model helps by preemptively providing known interactions, which may help reduce an amount of training needed by the ML model.
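As a concrete sketch of the interaction-parameter idea above, the following NumPy snippet precomputes the √((W/L)·ID) term and appends it as an extra input column, so the model receives the known non-linearity directly instead of having to learn it. The column ordering and sample values are assumptions for this illustration.

```python
import numpy as np

def add_transconductance_interaction(X, w_col=0, l_col=1, id_col=2):
    """Append the known first-order interaction sqrt((W/L) * I_D) as an
    extra input column. The column indices are assumptions for this sketch."""
    w, l, i_d = X[:, w_col], X[:, l_col], X[:, id_col]
    f1 = np.sqrt((w / l) * i_d)
    return np.column_stack([X, f1])

# Hypothetical rows of (W, L, I_D) values.
X = np.array([[2.0e-6, 0.5e-6, 1.0e-4],
              [1.0e-6, 1.0e-6, 4.0e-4]])
X_aug = add_transconductance_interaction(X)  # now four input columns per row
```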
[0107] In certain embodiments, attempting to characterize the non-linearities based on circuit theory, such as by including known circuit theory equations (e.g., the first order equation for transconductance), can introduce higher-order interaction terms and may increase the dimensionality (e.g., number of parameters input into the ML model). To help reduce the number of parameters for input into the ML model, dimensionality reduction may be performed. Dimensionality reduction removes parameters as inputs to the ML model if it is determined that those parameters do not impact the model behavior. Dimensionality reduction may help reduce the number of variables, identify an optimal set of parameters to be input into the ML model, and/or help reduce the processing time of the ML model. In certain cases, dimensionality reduction may be performed using threshold stepwise selection.[0108] Threshold Stepwise Selection[0109] Threshold stepwise selection helps build higher order parameter interactions by iterating over the input parameter interactions in a stepwise fashion, determining the significance the parameter interactions have on the model behavior, and removing any parameter interactions that do not meet a threshold. In this manner, higher order interactions can be determined while minimizing the dimensionality of the ML model. FIG. 11 illustrates a series of ML model parameters for threshold stepwise selection 1100, in accordance with aspects of the present disclosure. Threshold stepwise selection begins with an initial set 1102 of parameters. In this example, variables A, B, C, and D, of the initial set 1102, represent generic input parameters to a ML model (e.g., sub-circuit physical parameters, sub-circuit performance parameters, constants, parameters descriptive of a process technology, etc.), such that a result of the ML model, R, is a function of A, B, C, and D: R = f(A, B, C, D).
In certain cases, R may represent the expected results of the ML model (e.g., sub-circuit physical parameters or sub-circuit performance parameters) of a sub-circuit being modeled by the ML model. The initial set 1102 of parameters may also include interaction parameters 1012. In a first step, a first parameter may be interacted with each of the other parameters to generate a second set 1104 of parameters. This interaction as between parameters may be based on the mathematical function of nodes in the ML model. For example, assuming generic parameter A represents input parameter 1002 and B represents input parameter 1004, of FIG. 10, then AB may represent an interaction corresponding to a function as applied in the second layer 1008 of ML model 1000 to input parameter 1002 and input parameter 1004, without the weight. Thus, higher order parameter values represent interaction values obtained from the component parameters. For example, assuming the interaction is multiplication, if A = 2 and B = 3, AB = 6. If C = 4, then ABC = 24. If A is an equation, such as a known circuit theory equation, the equation may be evaluated to obtain a value and this value used for the interaction.[0110] In this example, the parameter A is interacted with parameters B, C, and D to generate parameters AB, AC, and AD, as shown in the second set 1104 of parameters, such that R for the second set 1104 of parameters would correspond to R = f(A, B, C, D, AB, AC, AD). While parameter A is interacted in this example, it may be understood that any parameter of the initial set 1102 may be interacted with the other parameters of the initial set 1102. In certain cases, the interaction may be based on mathematical functions applied as between parameters in a node of a neural network. A linear regression may then be performed on the parameters of the second set 1104.
The linear regression is a linear function which attempts to model a relationship between the parameters of the second set 1104, and results of the linear regression may be compared to expected results of the ML model (e.g., as determined by a circuit simulation of the sub-circuit topology being modeled by the ML model). A statistical significance test (e.g., a null hypothesis test) may be used to determine a statistical significance value (e.g., a null hypothesis p-value) to predict the contribution of each parameter of the second set 1104 of parameters to the linear regression results. The statistical significance value may then be compared against a defined threshold for the statistical significance value. The threshold for the statistical significance value may be determined as a fixed value as an input to the threshold stepwise selection algorithm, or the threshold may be determined through known techniques such as Bayesian hyperparameter optimization. Bayesian hyperparameter optimization is a technique for determining hyperparameters of a ML model. The hyperparameters of the ML model may refer to parameters that control the operation of the ML model which cannot be derived through training, such as a number of nodes in a layer, number of layers, learning rate, etc. In this example, the hyperparameter to be optimized may be the threshold for the statistical significance. In a third step, parameters that do not meet the threshold for statistical significance, in this example, parameters C, AB, and AC of the second set 1104 of parameters, may be discarded.[0111] In a fourth step, the first through third steps may be repeated with each parameter of the initial set 1102 of parameters to obtain a fourth set 1110 of parameters. For example, a second parameter of the initial set 1102 may be interacted with parameters of the second set 1104 (without the parameters that did not meet the threshold for statistical significance).
This interaction may be substantially similar to those interactions performed to generate the second set 1104 of parameters. A linear regression may be performed on parameters of a third set 1106 in substantially the same way as performed on parameters of the second set 1104, and parameters that do not meet the threshold for statistical significance, in this example, parameters BA and BAD of the third set 1106 of parameters, may be discarded. This interaction/linear regression/discarding of parameters may be repeated for each parameter of the initial set 1102 to obtain resulting parameters of a round of threshold stepwise selection, such as the fourth set 1110 of parameters. In certain cases, this step iterates over all of the initial set 1102 of parameters even if subsequent steps have determined the parameter to not be significant in the modeling problem. Even though the parameter alone may not contribute to the model result, the parameter's interaction with other parameters may have significance. Including all of the initial set 1102 of parameters regardless of individual significance when looping through the interactions helps ensure that significant interactions of all parameters are not lost.[0112] The resulting parameters in the fourth set 1110 may be compared to the expected results (e.g., obtained via circuit simulations) to determine an accuracy of the resulting parameters in the fourth set 1110. If the accuracy meets a threshold accuracy value, then the fourth set 1110 of parameters may be used as input parameters for the ML model for the sub-circuit. The threshold accuracy value may be determined in any way, for example, by experimentation, experience, etc.
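The interact, regress, and discard loop of a single round can be sketched as below. This is a simplified illustration assuming multiplication as the interaction, ordinary least squares as the regression, and SciPy's Student's t distribution for the per-coefficient p-values; the actual selection procedure of the disclosure may differ in its regression and significance test details.

```python
import numpy as np
from scipy import stats

def ols_p_values(X, y):
    # Ordinary least squares fit with a two-sided t-test p-value per coefficient.
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stats = beta / np.sqrt(np.diag(cov))
    return 2 * stats.t.sf(np.abs(t_stats), n - k)

def stepwise_round(features, y, base_name, threshold=0.05):
    # Interact `base_name` with every other parameter (multiplication here),
    # regress over originals plus interactions, and discard interaction terms
    # whose p-value does not meet the threshold. Original parameters are
    # always retained, mirroring the note above that a parameter's
    # interactions may matter even when the parameter alone does not.
    candidates = dict(features)
    for name, col in features.items():
        if name != base_name:
            candidates[base_name + name] = features[base_name] * col
    names = list(candidates)
    p_values = ols_p_values(np.column_stack([candidates[n] for n in names]), y)
    return {n: candidates[n] for n, p in zip(names, p_values)
            if n in features or p < threshold}

# Synthetic demo: R depends on the interaction A*B, so AB should survive.
rng = np.random.default_rng(1)
A, B, C = (rng.uniform(0.5, 1.5, 300) for _ in range(3))
R = 3.0 * A * B + 0.01 * rng.normal(size=300)
kept = stepwise_round({"A": A, "B": B, "C": C}, R, "A")
```

A full implementation would repeat this round for each base parameter of the initial set and compare the accuracy of the surviving parameters against the threshold accuracy value.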
[0113] In a fifth step, if the accuracy does not meet the threshold accuracy value, the first through fourth steps may be repeated by interacting the initial set 1102 of parameters and resulting parameters (such as parameters of the fourth set 1110) until the threshold accuracy value is met by resulting parameters from a round of threshold stepwise selection, such as a final set 1108 of parameters. Doing so may result in higher order interaction parameters such as interaction parameters CBD and ABCD of the final set 1108, in this example. In certain cases, a number of repetitions in this fifth step may be limited, for example based on a predetermined number of rounds, if accuracy of the resulting parameters stops increasing, if the parameters of the resulting parameters are unchanged, etc. Of note, in this example, higher order parameters may be represented by interaction parameters resulting from interacted parameters (e.g., parameters represented in FIG. 11 by multiple letters). As shown, threshold stepwise selection allows for higher order parameters to be developed while still limiting the total number of parameters through dimensionality reduction. The final set 1108 of parameters, as determined by the threshold stepwise selection step, represents the set of parameters that are most statistically significant for the sub-circuit being modeled. By using the parameters determined to be most statistically significant by the threshold stepwise selection step as a starting point (e.g., as initial parameters for input to the ML model), an amount of time needed to train the ML model to obtain a certain level of accuracy may be reduced.[0114] In certain cases, if the desired threshold accuracy value is not met by threshold stepwise selection, threshold stepwise selection may be applied in conjunction with stacked models to help improve accuracy.
A stacked model uses information derived from an initial model, such as the final set 1108 of parameters output from threshold stepwise selection, as inputs to help guide subsequent modeling techniques. For example, if after applying a predetermined number of rounds of threshold stepwise selection, the desired threshold accuracy value is not met, the parameters selected during the last round of threshold stepwise selection may be used as input to a ML model, such as a neural network trained on the sub-circuit physical parameters and simulated sub-circuit performance parameters. This ML model may then be further tuned using any known ML tuning technique. For example, Bayesian hyperparameter optimization may also be applied to the ML model to tune the hyperparameters of the ML model. Bayesian hyperparameter optimization is a technique for determining hyperparameters of a ML model based on a probability model of how a hyperparameter influences the accuracy of the ML model as different hyperparameters are adjusted based on a validation score. The validation score may be determined by adjusting the hyperparameter of the ML model, training the ML model to generate predictions of the ML model with the adjusted hyperparameter, and evaluating these predictions against expected results to calculate the validation score.[0115] FIG. 12 is a flow diagram illustrating an overview of a technique for designing analog circuits 1200, in accordance with aspects of the present disclosure. At block 1202, a data object representing a circuit for a first process technology may be received, the circuit including a first sub-circuit, the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology. For example, the analog circuit may be described as a netlist, which is a list of electrical components and connections of those electrical components.
At block 1204, the first sub-circuit may be identified in the data object by comparing the first topology to a stored topology, the stored topology associated with the first process technology. For example, the functional circuit block may be a portion of the analog circuit which represents a set of circuits that perform a function, such as amplifying a signal, comparing two signals, creating a clock signal, etc., and the functional circuit block may be located by the boundaries of a function in the netlist, such as the beginning and end of a function. The netlist may be parsed to locate these functional circuit blocks. The functional circuit blocks include one or more sub-circuits. Sub-circuits may be made of one or more electrical components which together perform a specific purpose in the functional circuit block. There may be a relatively limited number of arrangements of electrical components capable of practically performing the purpose of a sub-circuit. These arrangements of electrical components may be predetermined, for example based on chip design experience, as a set of predetermined sub-circuits. In certain cases, this set of predetermined sub-circuits may not be exhaustive and may contain sub-circuits determined to be more likely to be found in analog circuits. In certain cases, the first sub-circuit may be identified based on a set of rules. In certain cases, these rules may be based, at least in part, on connections of the first sub-circuit.[0116] At block 1206, sub-circuit physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit are identified. For example, the netlist may include physical parameters associated with electrical components of the circuit. Additionally, operating point simulations may be used to obtain operating parameters for the sub-circuit.
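A minimal sketch of the identification step of block 1204: matching device connectivity in a parsed netlist against one stored topology. The tuple-based netlist format and the current-mirror matching rule below are illustrative assumptions rather than the disclosure's actual stored topologies or rule set.

```python
def find_current_mirrors(netlist):
    """netlist: list of (name, type, drain, gate, source) tuples for MOSFETs.
    A pair matches the stored current-mirror topology when both gates share
    a net, both sources share a net, and at least one device is
    diode-connected (its gate tied to its own drain)."""
    matches = []
    mosfets = [dev for dev in netlist if dev[1] == "nmos"]
    for i, (n1, _, d1, g1, s1) in enumerate(mosfets):
        for n2, _, d2, g2, s2 in mosfets[i + 1:]:
            shared_gate = g1 == g2
            shared_source = s1 == s2
            diode_connected = d1 == g1 or d2 == g2
            if shared_gate and shared_source and diode_connected:
                matches.append((n1, n2))
    return matches

# Hypothetical parsed netlist fragment.
netlist = [
    ("M1", "nmos", "nbias", "nbias", "gnd"),  # diode-connected reference
    ("M2", "nmos", "nout", "nbias", "gnd"),   # mirror output device
    ("M3", "nmos", "nx", "ny", "gnd"),        # unrelated device
]
mirrors = find_current_mirrors(netlist)
```

A production identifier would iterate such rules over a library of stored topologies and, as noted above, expand the candidate set of components when multiple topologies could match.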
At block 1208, a set of sub-circuit performance parameter values for the first sub-circuit are determined based on a first machine learning (ML) model of the first sub-circuit and the identified sub-circuit physical parameters. For example, different types of sub-circuits may be associated with different sets of performance parameters. Examples of performance parameters include transconductance, channel conductance, minimum drain to source voltage, threshold voltage mismatch, etc. In certain cases, performance parameter values for a set of physical parameters associated with the identified first sub-circuit may be determined based on a first ML model of the identified sub-circuit. For example, physical parameters associated with the identified first sub-circuit may be input to a first trained ML model of the identified sub-circuit for the first process technology to determine performance parameter values for the identified first sub-circuit.[0117] At block 1210, the identified first sub-circuit is converted to a second sub-circuit for a second process technology based on the determined set of sub-circuit performance parameter values. For example, a second ML model may be selected based on the type of the identified first sub-circuit. The second ML model may be configured to determine a second set of sub-circuit physical parameters associated with a third electrical component and a fourth electrical component of the second sub-circuit based on the second ML model, for the second process technology, and the set of sub-circuit performance parameter values, and associate sub-circuit physical parameters of the second set of sub-circuit physical parameters with the third electrical component and the fourth electrical component of the second sub-circuit.
For example, performance parameters may be input to the second trained ML model of the identified sub-circuit for the second process technology to determine physical parameter values for electrical components of the second sub-circuit for the second process technology. In certain cases, the first and second trained ML models may be neural networks. A netlist for the second sub-circuit in the second process technology may then be determined based on the physical parameter values. At block 1212, the converted second sub-circuit may be output. For example, the netlist for the second sub-circuit may be output. In certain cases, the second process technology comprises a second semiconductor manufacturing process associated with smaller circuit electrical components as compared to a first process technology of the analog circuit. For example, the second process technology may be associated with smaller sized transistors, as compared to the first process technology. In certain cases, the second sub-circuit may be verified based on a circuit simulation of the second sub-circuit and performance parameters associated with the first sub-circuit. For example, the output netlist may be simulated on a circuit simulator to verify that performance parameters of the second sub-circuit are within a threshold amount of performance parameters associated with the first sub-circuit.[0118] FIG. 13 is a flow diagram illustrating an overview of a technique for designing analog circuits 1300, in accordance with aspects of the present disclosure. At block 1302, a data object representing a circuit is received, the circuit including a sub-circuit, the sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology. For example, the analog circuit may be described as a netlist, including one or more circuit blocks. 
These circuit blocks each include one or more electrical components, such as transistors, resistors, capacitors, inductors, diodes, etc. of the circuit block. At block 1304, a set of stored topologies are received. For example, a library of trained ML models, including trained ML models for known sub-circuits, may be stored and accessed from a memory storage. At block 1306, the first electrical component, second electrical component, and connections of the first electrical component and second electrical component may be identified. For example, a first electrical component of the functional circuit block may be identified based on a set of predefined electrical component types stored in a memory storage. For example, electrical components play a particular role within a functional circuit block and the role of a first electrical component may be determined based on what other electrical components the first electrical component is connected to. This role, along with a type of electrical component, may be used to identify the first electrical component from a set of predetermined electrical components. In certain cases, the first electrical component may be identified based on a set of rules. At block 1308, a coupling between the first electrical component and a second electrical component is determined, based on the connections of the first electrical component. For example, the netlist may include a description of connections as between electrical components and this description may be parsed to determine connections as between electrical components. Parsing may be performed using a set of rules. In certain cases, rules of the set of parsing rules may be based, at least in part, on an identified type of the first electrical component, connections of the first electrical component, and an identified type of the second electrical component.
As an example, this set of rules may describe the possible connections of the electrical component and map those connections to various sub-circuit types or topologies. In certain cases, rules of the set of parsing rules may be based, at least in part, on physical parameters of the first electrical component and second electrical component.[0119] At block 1310, the first topology is determined based on a comparison between the identified first electrical component, the identified second electrical component, the determined coupling between the first electrical component and the second electrical component, and topologies of the set of stored topologies. At block 1312, the identified first topology may be output. For example, the identified topology may be output for use by one or more ML models for predicting sub-circuit performance parameters or sub-circuit physical parameters. In certain cases, a determination, based on the comparison, is made that multiple topologies of the set of stored topologies could match. In such cases, a third electrical component and connections of the third electrical component may be identified and, based on the connections of the third electrical component, a coupling between the third electrical component and either the first electrical component or the second electrical component is determined. The topologies of the set of stored topologies are compared to the identified first electrical component, the identified second electrical component, the identified third electrical component, the determined coupling between the first electrical component and the second electrical component, and the identified coupling between the third electrical component and either the first electrical component or the second electrical component to identify the first topology.
For example, if multiple matches between a set of electrical components and topologies of the set of known topologies are found, the set of electrical components may be expanded to include additional electrical components coupled to the current electrical components of the set of electrical components. Matching against the set of known topologies may then be performed again with the expanded set of electrical components.[0120] FIG. 14 is a flow diagram illustrating a technique for identifying sub-circuits 1400, in accordance with aspects of the present disclosure. At block 1402, a data object representing a circuit for a process technology is received, the circuit including a first sub-circuit and the first sub-circuit including a first electrical component and a second electrical component, the first electrical component and the second electrical component arranged in a first topology. For example, the analog circuit may be described as a netlist, including one or more circuit blocks. The functional circuit blocks include one or more sub-circuits. The sub-circuits may be made of a set of electrical components which together perform a specific purpose in the functional circuit block. At block 1404, the first sub-circuit in the circuit is identified by comparing the first topology to a stored topology, the stored topology associated with the first process technology. For example, there may be a relatively limited number of arrangements of electrical components capable of practically performing the purpose of a sub-circuit. These arrangements of electrical components may be predetermined, for example based on chip design experience, as a set of predetermined sub-circuits. In certain cases, this set of predetermined sub-circuits may not be exhaustive and may contain sub-circuits determined to be more likely to be found in analog circuits.
The first sub-circuit may be compared to the set of predetermined sub-circuits.[0121] At block 1406, a first set of physical parameter values associated with the first electrical component and the second electrical component of the first sub-circuit is identified. For example, the netlist may include physical parameters associated with electrical components of the circuit. Additionally, operating point simulations may be used to obtain operating parameters for the sub-circuit. At block 1408, a set of performance parameter values for the first sub-circuit is determined based on a first machine learning (ML) model of the first sub-circuit and the identified set of physical parameter values. For example, different types of sub-circuits may be associated with different sets of performance parameters. Examples of performance parameters include transconductance, channel conductance, minimum drain to source voltage, threshold voltage mismatch, etc. In certain cases, performance parameter values for a set of physical parameters associated with the identified first sub-circuit may be determined based on a first ML model of the identified sub-circuit. For example, physical parameters associated with the identified first sub-circuit may be input to a first trained ML model of the identified sub-circuit for the first process technology to determine performance parameter values for the identified first sub-circuit. At block 1410, the identified first sub-circuit is converted to a second sub-circuit for the process technology based on the determined set of performance parameter values, the second sub-circuit having a third electrical component and a fourth electrical component arranged in a second topology. In certain cases, a type of the first sub-circuit is identified based on connections of the first electrical component and the second electrical component.
The determined set of performance parameter values are input to one or more ML models of the identified type of the first sub-circuit for the process technology, and one or more sets of physical parameter values corresponding to one or more topologies associated with the type of the first sub-circuit are received. The second topology is selected from the one or more topologies. In certain cases, selecting the second topology is based on an optimization function. This optimization function may be based on a number of electrical components of topologies of the one or more topologies. In certain cases, the optimization function is based on physical parameter values corresponding to the one or more topologies. Physical parameter values of a set of physical parameter values corresponding to the selected second topology are associated with the third electrical component and the fourth electrical component.[0122] FIG. 15 is a flow diagram illustrating a technique for designing circuits 1500, in accordance with aspects of the present disclosure. At block 1502, an indication of a sub-circuit type and a set of sub-circuit performance parameter values may be received. For example, a user may provide an indication of a type of sub-circuit and one or more sub-circuit performance parameter values for the sub-circuit type. At block 1504, a sub-circuit topology may be determined based on the sub-circuit type and the set of sub-circuit performance parameter values. For example, a specific sub-circuit topology for the sub-circuit type may be provided and a ML model for the sub-circuit type may be identified. As another example, the sub-circuit performance parameter values may be provided to multiple sub-circuit ML models corresponding to the sub-circuit type. This set of sub-circuit ML models, and corresponding sub-circuit topologies, may be obtained from a ML model library.
The sub-circuit performance parameter values may be input to sub-circuit ML models of the set of sub-circuit ML models to determine corresponding sub-circuit physical parameters for the sub-circuit topologies corresponding to the sub-circuit ML models. In certain cases, if sub-circuit physical parameters for a sub-circuit topology cannot be determined for the sub-circuit performance parameters, then the sub-circuit topology may be removed from the set of sub-circuit topologies. An optimization function may then be applied to the sub-circuit topologies of the set of sub-circuit topologies to select a sub-circuit topology. The optimization function may be any known optimization technique, such as a cost function, loss function, etc. As an example, the optimization function may select a sub-circuit topology based on a least number of electrical components with sub-circuit physical parameters of those electrical components within a certain range, the range selected for ease of manufacture based on the first process technology.[0123] At block 1506, a set of sub-circuit physical parameter values are determined based on a first machine learning (ML) model of the sub-circuit topology and the set of sub-circuit performance parameter values. In certain cases, the set of sub-circuit physical parameter values may be determined as a part of determining a sub-circuit topology. At block 1508, a data object representing a sub-circuit based on the determined set of sub-circuit physical parameter values and the determined sub-circuit topology is generated. For example, a netlist representation of the sub-circuit may be generated using the determined sub-circuit topology and the determined sub-circuit physical parameter values. At block 1510, the data object is output.[0124] FIG. 16 is a flow diagram illustrating a technique for designing circuits 1600, in accordance with aspects of the present disclosure.
At block 1602, a first set of sub-circuit physical parameters for electrical components of a sub-circuit, and an indication of a first process technology, is received. For example, physical parameters for electrical components of a first sub-circuit may be received, along with a description of how those electrical components are connected and information related to the process technology with which the first sub-circuit is associated. In certain cases, a set of performance parameters may also be received, the performance parameters indicating which performance parameters may be applicable for the first sub-circuit. At block 1604, a first variation of sub-circuit physical parameters for the electrical components of the structural sub-circuit is determined, the first variation including at least one sub-circuit physical parameter that varies from the sub-circuit physical parameters of the first set of sub-circuit physical parameters. In certain cases, determining sets of variations of physical parameters for the electrical components of the sub-circuit includes determining variations of physical parameters for the electrical components based on a practical range of physical parameter values for the first process technology. At block 1606, the first variation of sub-circuit physical parameters in the first process technology is simulated to generate a first set of sub-circuit performance parameter values associated with the first variation. For example, for a particular sub-circuit, sets of performance parameter values may be generated by simulating the particular sub-circuit with sets of physical parameter values. Physical parameter values of these sets of physical parameter values may vary across ranges of practical values associated with respective physical parameter values. In certain cases, the sets of variations of physical parameters are identified to show non-linear behavior of the sub-circuit. 
In certain cases, the sets of variations of physical parameters for the sub-circuit may be simulated using a simulation program with integrated circuit emphasis (SPICE) circuit model of the sub-circuit.[0125] At block 1608, a machine learning (ML) model of the structural sub-circuit is trained based on a set of variations, the set of variations including the first variation and the set of sub-circuit performance parameter values associated with the first variation, for the first process technology. In certain cases, the ML model of the sub-circuit comprises one of a linear regression, large margin classifier, principal component analysis, tree-based, or neural network machine learning model. In certain cases, training the ML model includes identifying a set of parameters for input to the ML model. In certain cases, the set of parameters for input to the ML model is based on one of: the sets of physical parameters or generated performance parameters; and one of: one or more parameters associated with the first process technology or one or more parameters associated with the second process technology. At block 1610, the trained ML model is stored. In certain cases, the library of trained ML models includes a trained ML model for each sub-circuit of a set of predetermined sub-circuits. In certain cases, in the library of trained ML models, each trained ML model is associated with a specific sub-circuit and each trained ML model may differ from other trained ML models in the library of trained ML models.[0126] FIGs. 17A-17B are a flow diagram illustrating a technique for circuit modeling 1700, in accordance with aspects of the present disclosure. At block 1702, an initial set of parameters is received, the initial set of parameters associated with a sub-circuit. For example, a set of sub-circuit performance parameters or sub-circuit physical parameters for a ML model of a sub-circuit may be received. 
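The sweep-simulate-train procedure of FIG. 16 (blocks 1602 through 1610) can be summarized with a short sketch. The simulate() function below is a stand-in for a SPICE run, and the nearest-neighbor "model" is a toy substitute for the trained ML models named in the text; every identifier is a hypothetical illustration.

```python
# Hedged sketch of FIG. 16: sweep physical parameters, "simulate" each
# variation, then fit a model mapping performance back to physical values.
import itertools

def simulate(width, length):
    # Stand-in for a SPICE simulation returning one performance value.
    return width / length  # crude proxy, e.g., for a gain figure

def build_training_set(width_range, length_range):
    variations, performances = [], []
    for w, l in itertools.product(width_range, length_range):  # block 1604
        variations.append((w, l))
        performances.append(simulate(w, l))                    # block 1606
    return performances, variations

def train(performances, variations):
    # Block 1608: a nearest-neighbor lookup plays the role of the ML model.
    def model(target_perf):
        best = min(zip(performances, variations),
                   key=lambda pv: abs(pv[0] - target_perf))
        return best[1]
    return model

perfs, vars_ = build_training_set([1.0, 2.0, 4.0], [0.5, 1.0])
model = train(perfs, vars_)       # block 1610: store/return the model
print(model(4.0))                 # -> (2.0, 0.5), nearest to the target
```

In practice the parameter ranges would come from the practical limits of the first process technology, and the regression or neural network described in the text would replace the lookup.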
At block 1704, a first parameter of the initial set of parameters is interacted with other parameters of the initial set of parameters to generate a set of interacted parameters. For example, the first parameter may be interacted with another parameter of the set of parameters to generate an interacted parameter. At block 1706, the interacted parameter is added to the initial set of parameters to generate a candidate set of parameters. For example, the interacted parameter may be added to the set of parameters. At block 1708, a linear regression may be performed on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters. For example, the linear regression attempts to model a relationship between the parameters as compared to expected results of the ML model, and a statistical significance test may be applied to the results of the linear regression to determine a statistical significance value of parameters of the set of parameters. In certain cases, this linear regression equation may be based on a Taylor series regression.[0127] At block 1710, parameters of the candidate set of parameters are removed based on a comparison between the predictive value and a predetermined predictive threshold. For example, the statistical significance value of parameters of the set of parameters may be compared to a predefined threshold and parameters which do not meet the predefined threshold may be removed from the set of parameters. In certain cases, statistical p-values may be compared against a maximum p-value and variables with p-values greater than the maximum p-value may be removed from the candidate set. Multiple variables may be removed from the candidate set of variables in each round. At block 1712, an accuracy of the candidate set of parameters may be determined based on the set of expected parameter values. 
For example, the candidate set of parameters may be compared to the expected results to determine the accuracy. Predicted values based on the candidate set of variables may be compared against the expected set of parameter values to determine the accuracy for the candidate set of variables. In certain cases, each parameter of the initial set of parameters may be interacted with the other parameters of the initial set of parameters prior to the accuracy determination. For example, each of the original variables may be interacted with candidate variables of the set of candidate variables, even if the original variable is removed from the set of candidate variables. At block 1714, the accuracy of the candidate set of parameters may be compared to a predetermined accuracy level. At block 1716, if the accuracy of the candidate set of parameters reaches the predetermined accuracy level, the candidate set of parameters is output at block 1718. If the accuracy of the candidate set of parameters has not reached the predetermined accuracy level, certain steps may be repeated.[0128] At block 1720, a second parameter of the initial set of parameters is interacted with other parameters of the candidate set of parameters. This interaction may be similar to the interaction discussed in conjunction with block 1704, where a parameter is interacted with another parameter of the set of parameters to generate the interacted parameter. At block 1722, the interacted parameter is added to the candidate set of parameters. For example, the interacted parameter may be added to the set of parameters. At block 1724, the linear regression may be performed on parameters of the candidate set of parameters against a set of expected parameter values to determine a predictive value for parameters of the candidate set of parameters. 
At block 1726, parameters of the candidate set of parameters are removed based on a comparison between the predictive value and a predetermined predictive threshold. At block 1728, the accuracy of the candidate set of parameters may be determined based on the set of expected parameter values. At block 1730, the accuracy of the candidate set of parameters may be compared to a predetermined accuracy level. At block 1732, if the accuracy of the second candidate set of parameters has reached the predetermined accuracy, the candidate set of parameters is output at block 1718. Also at block 1732, if each parameter of the initial set of parameters has been interacted with other parameters of the candidate set a predetermined number of times, the candidate set of parameters is output at block 1718. Otherwise, blocks 1720-1730 may be repeated with another parameter of the initial set of parameters.[0129] In certain cases, the initial set of parameters may include one or more parameter values based on properties of the process technology. In certain cases, the initial set of parameters may include one or more parameter values based on theoretical interactions between one or more parameter values of the first set of parameters.[0130] In certain cases where the accuracy has not reached the predetermined accuracy level, a second ML model may be trained based on the set of selected variables and parameter values of the second set of parameter values. For example, where the sufficient level of accuracy has not been met and the repeating ended after each variable in the original set of variables has been interacted a predetermined number of times, a final set of candidate variables may be used to train another ML model. If the other ML model is sufficiently accurate, the other ML model may be stored instead of the linear regression equation, for example, in a ML library. Additionally, an accuracy for the second ML model may be determined. 
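The interact-fit-prune loop of FIG. 17 (blocks 1704 through 1732) can be sketched in simplified form. In this toy version a one-term least-squares fit and its R-squared stand in for the full multivariate regression and p-value test described above; all names are hypothetical.

```python
# Toy sketch of the FIG. 17 interaction loop. The "predictive value" here is
# the R^2 of a one-term least-squares fit, a stand-in for significance tests.

def r_squared(term_vals, targets):
    # Least-squares coefficient for a single term, then R^2 of the fit.
    a = sum(t * y for t, y in zip(term_vals, targets)) / sum(t * t for t in term_vals)
    sse = sum((y - a * t) ** 2 for t, y in zip(term_vals, targets))
    sst = sum((y - sum(targets) / len(targets)) ** 2 for y in targets)
    return 1.0 - sse / sst

def select_terms(samples, targets, threshold=0.5, target_accuracy=0.99):
    names = list(samples[0])                      # initial parameter names
    candidates = {n: [s[n] for s in samples] for n in names}
    for name in names:                            # blocks 1704 / 1720
        for other in list(candidates):
            if other == name:
                continue
            inter = [a * b for a, b in zip(candidates[name], candidates[other])]
            candidates[f"{name}*{other}"] = inter  # blocks 1706 / 1722
        # Blocks 1708-1714: score terms, prune weak ones, check accuracy.
        scores = {n: r_squared(v, targets) for n, v in candidates.items()}
        candidates = {n: v for n, v in candidates.items()
                      if scores[n] >= threshold or n in names}
        if max(scores.values()) >= target_accuracy:
            return candidates                     # block 1718
    return candidates

samples = [{"x1": 1.0, "x2": 2.0}, {"x1": 2.0, "x2": 3.0},
           {"x1": 3.0, "x2": 1.0}, {"x1": 4.0, "x2": 5.0}]
targets = [s["x1"] * s["x2"] for s in samples]    # y depends on x1*x2
selected = select_terms(samples, targets)
print("x1*x2" in selected)  # the interaction term survives pruning
```

Because the target here is exactly the product x1*x2, the interaction term fits perfectly and the loop stops after the first round, mirroring the early-exit path through block 1716.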
Further, a determination may be made that the accuracy of the second ML model is greater than the predetermined accuracy level, and the set of selected variables and the second ML model may be stored as the first ML model for the sub-circuit for the process technology. In certain cases, the second ML model may be a neural network. In certain cases, Bayesian hyperparameter optimization may be applied to the second ML model. In certain cases, the hyperparameters being optimized by the Bayesian hyperparameter optimization include one of: a number of layers of neurons of the neural network, a number of neurons in each layer of the neural network, and a weight decay value.[0131] As illustrated in FIG. 18, device 1800 includes a processing element such as processor 1805 that contains one or more hardware processors, where each hardware processor may have a single or multiple processor cores. Examples of processors include, but are not limited to, a central processing unit (CPU) or a microprocessor. Although not illustrated in FIG. 18, the processing elements that make up processor 1805 may also include one or more other types of hardware processing components, such as graphics processing units (GPUs), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or digital signal processors (DSPs). In certain cases, processor 1805 may be configured to perform functions described in conjunction with FIGs 5, 6, 8, 11, and 12-17. It may also be understood that while described in conjunction with a single device, the functions described may be performed by any number of processing elements and that these processing elements may be associated with multiple devices that are communicatively coupled. For example, generation of ML models, ML libraries, netlists, etc. may be performed on a separate device as compared to the conversion or optimization of a circuit. 
In certain cases, these various devices may be networked by any known networking technology, examples of which include Ethernet, wireless fidelity (Wi-Fi), the internet, etc. In certain cases, data objects may be provided and/or received via a non-transitory computer readable storage medium.[0132] FIG. 18 illustrates that memory 1810 may be operatively and communicatively coupled to processor 1805. Memory 1810 may be a non-transitory computer readable storage medium configured to store various types of data. For example, memory 1810 may include one or more volatile devices such as random access memory (RAM). In certain cases, the SRAM and circuits as described in FIGs. 4-8 may be incorporated as part of the memory 1810. Non-volatile storage devices 1820 can include one or more disk drives, optical drives, solid-state drives (SSDs), tape drives, flash memory, electrically erasable programmable read-only memory (EEPROM), and/or any other type of memory designed to maintain data for a duration of time after a power loss or shut down operation. The non-volatile storage devices 1820 may also be used to store programs that are loaded into the RAM when such programs are executed.[0133] Persons of ordinary skill in the art are aware that software programs may be developed, encoded, and compiled in a variety of computing languages for a variety of software platforms and/or operating systems and subsequently loaded and executed by processor 1805. In one embodiment, the compiling process of the software program may transform program code written in a programming language to another computer language such that the processor 1805 is able to execute the programming code. 
For example, the compiling process of the software program may generate an executable program that provides encoded instructions (e.g., machine code instructions) for processor 1805 to accomplish specific, non-generic, particular computing functions.[0134] After the compiling process, the encoded instructions may then be loaded as computer executable instructions or process steps to processor 1805 from storage 1820, from memory 1810, and/or embedded within processor 1805 (e.g., via a cache or on-board ROM). Processor 1805 may be configured to execute the stored instructions or process steps in order to perform instructions or process steps to transform the computing device into a non-generic, particular, specially programmed machine or apparatus. Stored data, e.g., data stored by a storage device 1820, may be accessed by processor 1805 during the execution of computer executable instructions or process steps to instruct one or more components within the computing device 1800. Storage 1820 may be partitioned or split into multiple sections that may be accessed by different software programs. For example, storage 1820 may include a section designated for specific purposes, such as storing program instructions or data for updating software of the computing device 1800. In one embodiment, the software to be updated includes the ROM, or firmware, of the computing device. In certain cases, the computing device 1800 may include multiple operating systems. For example, the computing device 1800 may include a general-purpose operating system which is utilized for normal operations. The computing device 1800 may also include another operating system, such as a bootloader, for performing specific tasks, such as upgrading and recovering the general-purpose operating system, and allowing access to the computing device 1800 at a level generally not available through the general-purpose operating system. 
Both the general-purpose operating system and the other operating system may have access to the section of storage 1820 designated for specific purposes.[0135] The one or more communications interfaces may include a radio communications interface for interfacing with one or more radio communications devices. In certain cases, elements coupled to the processor may be included on hardware shared with the processor. For example, the communications interfaces 1825, storage 1820, and memory 1810 may be included, along with other elements such as the digital radio, in a single chip or package, such as in a system on a chip (SOC). The computing device may also include input and/or output devices, not shown, examples of which include sensors, cameras, human input devices such as a mouse, keyboard, or touchscreen, monitors, display screens, tactile or motion generators, speakers, lights, etc. Processed input, for example from the radar device 1830, may be output from the computing device 1800 via the communications interfaces 1825 to one or more other devices.[0136] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.[0137] For example, first process technology characterization module 810 may be implemented using any number of determination techniques, such as statistical regression analysis and statistical classifiers such as neural networks, decision trees, Bayesian classifiers, fuzzy logic-based classifiers, deep learning, and statistical pattern recognition.[0138] Likewise, and as another example, second process technology characterization module 820 may be implemented using any number of determination techniques, such as statistical regression analysis and statistical classifiers such as neural networks, decision trees, Bayesian classifiers, fuzzy logic-based classifiers, deep learning, and statistical pattern recognition.
PROBLEM TO BE SOLVED: To provide additional security measures that reduce or eliminate the risk that an unauthorized user of a credit card can conduct business on the Internet.
SOLUTION: An e-commerce transaction is conducted between a merchant system and a telecommunications device on a consumer's account. The merchant system obtains authorization from an authentication device of the consumer before completing the e-commerce transaction. A registry server, accessible by the merchant system, may be used to maintain a database of telecommunications devices authorized to conduct e-commerce transactions on the consumer's account.
1. A merchant system comprising a processor configured to conduct electronic commerce with a telecommunications device on a consumer's account, the processor further configured to obtain authorization from the consumer's authentication device prior to closing the electronic commerce transaction.
2. The merchant system of claim 1, wherein the authentication device is wireless.
3. The merchant system of claim 2, wherein the processor is further configured to obtain the authorization from the authentication device through SMS in a wireless network.
4. The merchant system of claim 2, wherein the authentication device is a mobile phone.
5. The merchant system of claim 1, wherein the processor is further configured to obtain, from a registry server, confirmation that the telecommunications device is an authorized device for conducting the electronic commerce before closing the electronic commerce transaction.
6. The merchant system of claim 5, wherein the authentication device is a mobile phone, and wherein the processor is further configured to receive information from the telecommunications device, including the IP address of the telecommunications device and the phone number of the mobile phone, to send the received information to the registry server, and to obtain confirmation that the telecommunications device is an authorized device for conducting the electronic commerce.
7. A registry server comprising a processor configured to maintain a database of telecommunications devices authorized to conduct electronic commerce on a consumer's account, wherein the processor maps each of the authorized telecommunications devices in the database to information identifying the consumer's authentication device.
8. The registry server of claim 7, wherein the authentication device is wireless.
9. The registry server of claim 8, wherein the authentication device is a mobile phone.
10. The registry server of claim 9, wherein the information for identifying the mobile phone is a mobile phone number.
11. The registry server of claim 7, wherein the processor is further configured to receive, from a telecommunications device, a request to add the telecommunications device to the database and, in response to the request, to communicate with the authentication device to obtain authorization to add the telecommunications device to the database.
12. The registry server of claim 11, wherein the authentication device is wireless and the processor is further configured to communicate with the authentication device via SMS in a wireless network.
13. The registry server of claim 7, wherein the processor is further configured to communicate with a merchant system to confirm that a telecommunications device attempting to conduct electronic commerce with the merchant system is mapped in the database to information identifying the authentication device.
14. The registry server of claim 13, wherein the authentication device comprises a mobile phone, and the information identifying the authentication device is a phone number of the mobile phone.
15. The registry server of claim 14, wherein the processor is further configured to map the mobile phone number to an IP address of each telecommunications device.
16. An authentication device for a consumer, comprising a processor configured to communicate with a merchant system to authorize an electronic commerce transaction between the merchant system and a telecommunications device on the consumer's account.
17. The authentication device of claim 16, wherein the authentication device is wireless.
18. The authentication device of claim 17, wherein the authentication device is a mobile phone.
19. The authentication device of claim 16, wherein the processor is further configured to communicate with a registry server to maintain a database including telecommunications devices authorized to conduct electronic commerce with the merchant system on the consumer's account.
20. A telecommunications device comprising a processor configured to send a request to a registry server to add the telecommunications device to a database of telecommunications devices authorized for electronic commerce with a merchant system on a consumer's account, wherein the request includes information identifying the consumer's authentication device.
21. The telecommunications device of claim 20, wherein the authentication device is a mobile phone and the information identifying the authentication device is a mobile phone number of the mobile phone.
22. The telecommunications device of claim 21, further comprising a user interface, wherein the processor is further configured to send the request to the registry server in response to entry of the mobile phone number on the user interface.
23. A method for conducting electronic commerce, comprising conducting an electronic commerce transaction between a merchant system and a telecommunications device on a consumer's account, and obtaining authorization from the consumer's authentication device prior to closing the electronic commerce transaction.
24. The method of claim 23, further comprising maintaining a database of telecommunications devices authorized to conduct electronic commerce on the consumer's account, and obtaining, from a registry server, confirmation that the telecommunications device is in the database before closing the electronic commerce transaction.
25. The method of claim 24, wherein the authentication device is a mobile phone.
26. The method of claim 25, wherein the database is maintained by mapping a mobile phone number of the mobile phone to each telecommunications device in the database.
27. The method of claim 26, wherein the electronic commerce includes sending the mobile phone number from the telecommunications device to the merchant system, the merchant system obtaining confirmation that the telecommunications device is in the database, and the mobile phone number being used to communicate with the mobile phone to authorize the electronic commerce transaction.
28. The method of claim 27, wherein the merchant system communicates with the mobile phone via SMS in a wireless network.
29. A merchant system comprising: means for conducting electronic commerce with a telecommunications device on a consumer's account; and means for obtaining authorization from the consumer's authentication device prior to closing the electronic commerce transaction.
30. A registry server comprising: means for interfacing with a database of telecommunications devices authorized to conduct electronic commerce on a consumer's account; and means for maintaining the database by mapping each of the authorized telecommunications devices to information identifying the consumer's authentication device.
31. An authentication device for a consumer, comprising: means for receiving a request from a merchant system to authorize an electronic commerce transaction between the merchant system and a telecommunications device on the consumer's account; and means for responding to the request.
32. A telecommunications device comprising: means for generating a request to a registry server to add the telecommunications device to a database of telecommunications devices authorized for electronic commerce with a merchant system on a consumer's account, the request including information identifying the consumer's authentication device; and means for sending the request to the registry server.
Authentication of electronic commerce using wireless telecommunications devices
The present disclosure relates generally to telecommunications, and more particularly to systems and techniques for authenticating electronic commerce using a wireless telecommunications device. Electronic commerce (e-commerce) on the Internet is expanding at a tremendous rate. Today, even the least experienced consumers can trade on the Internet with just a few keystrokes on a computer, making the Internet perhaps the most convenient sales medium in the world. Companies have been cultivating this new sales medium for many years, and retailers are following the major online shopping sites. As e-commerce continues to grow, there is an increasing need to address security issues. Electronic commerce typically involves a process by which a consumer on a computer navigates through a merchant's website to locate an item. These items are purchased by the consumer through a series of computer inputs in response to various screen displays, one of which may be a display of a range of payment options. The most common online payment option is payment by credit card, which requires the consumer to enter a card number along with the cardholder's name and card expiration date. Before the consumer enters such information, however, the merchant's website switches to a secure operating mode. In that secure mode, all communications with the merchant's website are encrypted in a way that protects against eavesdroppers seeking to steal credit card information. While cryptography has proven to be quite effective at protecting credit card information from being stolen over the Internet, it does not provide protection against theft of the credit card itself. Stolen credit cards may be used by criminals to purchase products from various merchants on the Internet without being detected. 
Accordingly, there is a need for additional security measures that reduce or eliminate the risk that unauthorized credit card users can conduct transactions on the Internet. One aspect of a merchant system is disclosed. The merchant system includes a processor configured to conduct electronic commerce with a telecommunications device on a consumer's account, the processor further configured to obtain authorization from the consumer's authentication device before closing the electronic commerce transaction. One aspect of a registry server is disclosed. The registry server includes a processor configured to maintain a database of telecommunications devices authorized to conduct electronic commerce on a consumer's account, the processor mapping each authorized telecommunications device in the database to information identifying the consumer's authentication device. One aspect of an authentication device is disclosed. The authentication device belongs to a consumer and includes a processor configured to communicate with the merchant system to authorize electronic commerce between the merchant system and the telecommunications device on the consumer's account. One aspect of a telecommunications device is disclosed. The telecommunications device includes a processor configured to send a request to the registry server to add the telecommunications device to a database of telecommunications devices authorized to conduct electronic commerce with the merchant system on the consumer's account, the request including information identifying the consumer's authentication device. A method for conducting electronic commerce is disclosed. The method includes conducting an electronic commerce transaction between the merchant system and the telecommunications device on the consumer's account and obtaining authorization from the consumer's authentication device prior to closing the electronic commerce transaction. Other aspects of the merchant system are disclosed. 
The merchant system includes means for conducting electronic commerce with a telecommunications device on the consumer's account and means for obtaining authorization from the consumer's authentication device before closing the electronic commerce transaction. Other aspects of the registry server are disclosed. The registry server includes means for interfacing with a database of telecommunications devices authorized to conduct electronic commerce on the consumer's account, and means for maintaining the database by mapping each authorized telecommunications device to information identifying the consumer's authentication device. Another aspect of a consumer authentication device is disclosed. The authentication device includes means for receiving a request from the merchant system for authorizing an electronic commerce transaction between the merchant system and the telecommunications device on the consumer's account, and means for responding to the request. Other aspects of telecommunications devices are disclosed. The telecommunications device includes means for generating a request to a registry server to add the telecommunications device to a database of telecommunications devices authorized to conduct electronic commerce with a merchant system on the consumer's account, the request including information identifying the consumer's authentication device, and means for sending the request to the registry server. It will be understood that other aspects will become readily apparent to those skilled in the art from the following detailed description, wherein various aspects of the invention are shown and described by way of illustration only. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the invention. 
Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive. Various aspects of a telecommunications system are illustrated in the accompanying drawings by way of illustration and not limitation.
FIG. 1 is a conceptual block diagram illustrating one example of electronic commerce.
FIG. 2 is a conceptual block diagram illustrating one example of an electronic commerce transaction that requires authorization from a wireless telecommunications device.
FIG. 3 is a conceptual block diagram illustrating the use of a registry server in electronic commerce that requires authorization from a wireless telecommunications device.
FIG. 4 is a conceptual block diagram illustrating one aspect of a merchant system.
FIG. 5 is a conceptual block diagram illustrating one aspect of a registry server.
FIG. 6 is a conceptual block diagram illustrating one aspect of a wireless telecommunications device.
FIG. 7 is a functional block diagram of one aspect of a merchant system.
FIG. 8 is a functional block diagram of one aspect of a registry server.
FIG. 9 is a functional block diagram of one aspect of the authentication device.
FIG. 10 is a functional block diagram of one aspect of a telecommunications device.
Detailed description
The detailed description disclosed below in connection with the appended drawings is intended as a description of various aspects of the invention and is not intended to represent the only embodiments in which the invention may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the invention. FIG. 
1 is a conceptual diagram illustrating one example of a telecommunications system that supports electronic commerce. In this example, a user on computer 102 can conduct electronic commerce with merchant system 104 over the Internet 106. A user initiates a transaction by launching a software application on the computer 102 or by some other authorization means. At or around the same time, the computer 102 establishes a network connection with an interworking function (IWF) (not shown) at the Internet Service Provider (ISP) 108 over a standard twisted pair telephone line, digital subscriber line (DSL), cable modem, or some other suitable medium. The computer 102 then uses its Internet Protocol (IP) address, together with higher level software applications on both systems, to communicate with the merchant system 104 over the Internet 106 and conduct electronic commerce. If computer 102 does not have a permanent IP address, ISP 108 assigns a temporary one to it. Electronic commerce is typically conducted in a secure manner using, for example, encryption techniques such as symmetric and asymmetric key signature systems. Additional security measures may be implemented by requiring an entity other than the computer or merchant system to authorize the transaction. In one aspect, the other entity, or “authentication device,” is a mobile phone or other wireless or wired telecommunications device. In this aspect, the mobile phone owner (or “consumer”) is the person who is financially responsible for the electronic commerce, and may or may not be the user on the computer 102. An example of this procedure will now be described with reference to FIG. 2. FIG. 2 illustrates a telecommunications system having a wireless network 202 that connects any number of wireless telecommunications devices to the Internet 106.
Wireless network 202 may be a code division multiple access (CDMA) network, a Global System for Mobile Communications (GSM®) network, a general packet radio service (GPRS) network, a universal mobile telecommunications system (UMTS) network, or any other suitable wide area network (WAN). Alternatively, the wireless network 202 may be a local area network (LAN) such as 802.11, HomeRF, Bluetooth, ultra wideband (UWB), etc. Those skilled in the art will be able to readily determine the specific wireless network that is best suited for any particular application based on the system parameters and overall design constraints imposed on the telecommunications system 100. The wireless network 202 shown in FIG. 2 provides a means for the mobile phone 204 to connect to the Internet 106 to authorize electronic commerce between the computer 102 and the merchant system 104. In this example, computer 102 establishes a connection with merchant system 104 over the Internet 106 using the same or similar procedures described above with respect to FIG. 1. At or around the same time, the user enters his or her mobile phone number into the computer 102 along with other information necessary to conduct an electronic commerce transaction. The mobile phone number allows the merchant system 104 to communicate with the mobile phone to authorize the transaction before it is charged to the consumer's account, i.e., the mobile phone owner's account. That communication may occur through SMS 206 or directly over the wireless network 202. The consumer can then use the mobile phone 204 to approve or reject the transaction and send a response to the merchant system 104. The response can be generated by hitting a designated key, entering a PIN, using biometrics, and/or by any other suitable method. The electronic commerce transaction is completed by the merchant system 104 only if the consumer authorizes it.
Once completed, the charges incurred by the user on the computer 102 can be billed to the mobile phone number, and in some cases can be included in the consumer's telephone bill. In another aspect of telecommunications system 100, a telecommunications device, such as computer 102, must first be registered with a registry server before conducting an electronic commerce transaction charged to a consumer's mobile phone account. An example of this aspect is described with reference to FIG. 3. FIG. 3 is similar to the telecommunications system 100 of FIG. 2 except for the addition of a registry server 302 connected to the Internet 106. Referring to FIG. 3, a consumer registers his or her computer 102 by performing a registration procedure from the computer 102 to the registry server 302. This registration procedure begins with launching a software application on the computer 102. The consumer's mobile phone number is then entered into the computer 102 along with the registration request. At or around the same time, computer 102 establishes an Internet connection with ISP 108. Computer 102 uses the Internet connection to send information to registry server 302. The information includes the IP address of the computer 102, the mobile phone number entered by the consumer, and the registration request. The registry server 302 may provide various functions including authorizing registration requests and maintaining a database 304 of telecommunications devices registered by the consumer. In the aspect of telecommunications system 100 shown in FIG. 3, registry server 302 obtains authorization for the registration request in much the same way that merchant system 104 obtains authorization for electronic commerce. That is, to request authorization to register the computer 102, the registry server 302 communicates with the mobile phone 204 by SMS 206 or directly through the wireless network 202.
The consumer can respond by tapping a designated key, entering a PIN, using biometrics, and/or by any other suitable method. The response is sent from the mobile phone 204 to the registry server 302. If the response authorizes the registration request, the registry server 302 maps the consumer's mobile phone number to the IP address of the computer 102 and stores the result in the database 304. As indicated above, not all computers have permanent IP addresses. In some cases, computers and other telecommunications devices are assigned temporary addresses from a pool of IP addresses maintained by their ISPs. A temporary address is generally assigned to a computer (or other telecommunications device) for the duration of an Internet session. When a computer with a temporary IP address ends its Internet session, that temporary IP address is returned by the ISP to the pool of IP addresses for assignment to other telecommunications devices. An ISP operating in this manner must update the database maintained by the registry server 302 each time a new temporary IP address is assigned to a registered telecommunications device. Returning to FIG. 3, a user on the computer 102 (who may or may not be the consumer) initiates electronic commerce with the merchant system 104 by launching a software application or by some other authorization means. The computer 102 then establishes a network connection with the IWF at the ISP 108. If the computer 102 does not have a permanent IP address, the ISP 108 assigns a temporary IP address to the computer 102 and updates the database 304 maintained by the registry server 302. The IP address is used by computer 102 to establish a connection with merchant system 104 over the Internet 106. At or around the same time, the user enters into the computer 102 certain information necessary to conduct an electronic commerce transaction, including the consumer's mobile phone number.
This information, along with the computer's IP address, is sent by the computer 102 to the merchant system 104 over the Internet 106. The merchant system 104 establishes an Internet connection with the registry server 302 and sends a query to determine whether the computer 102 has been registered by the consumer, i.e., whether the database 304 contains an entry mapping the computer's IP address to the consumer's mobile phone number. Once the registry server 302 confirms that the computer 102 is registered, the merchant system 104 uses the mobile phone number to send an authorization request over the wireless network 202 to the mobile phone 204. The electronic commerce transaction is completed by the merchant system 104 only if the consumer on the mobile phone 204 authorizes it. Once completed, the charges incurred by the user on the computer 102 may be billed to the consumer's mobile phone number, and in some cases may be included in the consumer's telephone bill. FIG. 4 is a simplified block diagram illustrating the functionality of the merchant system 104. In at least one aspect, the merchant system 104 includes at least one processor 402 that communicates with a number of peripheral devices via a system bus 404. The processor 402 may be implemented in hardware, software, firmware, or any combination thereof. Typically, the processor 402 will be implemented with a microprocessor that supports various software applications. These software applications provide many features that support electronic commerce, including obtaining appropriate authorization for such transactions. Peripheral devices may include, by way of example, computer readable media 406 including volatile and non-volatile memory. Volatile memory can be dynamic random access memory (DRAM), static random access memory (SRAM), or any other suitable high speed memory device.
Non-volatile memory may include magnetic hard drives, optical disks, and/or any other type of storage device for large amounts of data and software applications. To increase the speed of memory access by the processor 402, software applications and data from non-volatile memory may be written to volatile memory. One skilled in the art will recognize that the term “computer-readable medium” includes any type of storage device accessible by the processor 402 and also encompasses a carrier wave that encodes a data signal. Peripherals may also include various interfaces including a network interface or modem 408. A network interface or modem 408 may be used to provide protocol conversions to support telecommunications by the merchant system 104 over the Internet. FIG. 5 is a simplified block diagram illustrating the functionality of the registry server 302. The architecture of the registry server 302 is similar to that of the merchant system 104. System bus 504 is used to connect one or more processors 502 to any number of peripheral devices. The processor 502 may be implemented in hardware, software, firmware, or any combination thereof, but typically will include a microprocessor that supports various software applications. The software applications may reside on computer readable media 506 coupled to the system bus 504. Computer readable media 506 may include volatile and non-volatile memory similar to that described with respect to merchant system 104 (see FIG. 4). These software applications provide, among other things, a number of functions for maintaining a database of telecommunications devices registered by their consumer owners. A database interface 508 connected to the system bus 504 allows the processor 502 to access the database 304 (see FIG. 3). In at least one aspect of the registry server 302, the database is used to map the consumer's mobile phone number to the IP address of his or her telecommunications device.
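As a rough illustration of the mapping just described, the database 304 might be modeled as follows. This is a sketch under assumed semantics; the class and method names are hypothetical and are not taken from the disclosure:

```python
# Hypothetical model of database 304: each consumer's mobile phone number
# keys the set of device IP addresses that consumer has registered.
class RegistryDatabase:
    def __init__(self):
        self._entries = {}  # phone number -> set of registered IP addresses

    def register(self, phone_number, ip_address):
        # Store the mapping after the consumer has authorized the
        # registration request from his or her authentication device.
        self._entries.setdefault(phone_number, set()).add(ip_address)

    def update_temporary_ip(self, phone_number, old_ip, new_ip):
        # Called when an ISP assigns a registered device a new temporary
        # IP address for a fresh Internet session.
        devices = self._entries.setdefault(phone_number, set())
        devices.discard(old_ip)
        devices.add(new_ip)

    def is_registered(self, phone_number, ip_address):
        # Answer a merchant query: does an entry map this IP address to
        # this consumer's mobile phone number?
        return ip_address in self._entries.get(phone_number, set())
```

Under this model, a lookup failure would lead the merchant system to refuse the transaction rather than send an authorization request to the consumer's phone.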
The database may be external to the registry server 302, connected by a wireless or wireline T1 or T3, fiber optic, Ethernet, or other IP connection. Alternatively, the database may be fully or partially integrated within the registry server 302 on a hard drive or some other suitable non-volatile memory. A network interface or modem 510 may be used to provide protocol conversion to support communication between the registry server 302 and the Internet. FIG. 6 is a simplified block diagram illustrating the functionality of a telecommunications device. The telecommunications device can serve as an authentication device, such as the cell phone 204 shown in FIGS. 2-3. Alternatively, the telecommunications device may be an electronic commerce terminal, such as the computer 102 shown in FIGS. 1-3, or any other suitable access terminal that can support electronic commerce. The telecommunications device includes at least one processor 602 that communicates with a number of peripherals via a system bus 604, similar to the servers described above. The processor 602 will typically be implemented with a microprocessor that supports various software applications, but may be implemented in hardware, software, firmware, or any combination thereof. In the case of an electronic commerce terminal (and in some aspects of the authentication device), a software application provides a means for conducting electronic commerce over the Internet. Software applications running on the authentication device also allow consumers to authorize electronic commerce conducted by other devices. The software applications may reside on computer readable media 606 coupled to the system bus 604. Computer readable media 606 may include volatile and non-volatile memory similar to that described with respect to merchant system 104 (see FIG. 4). Peripherals may also include a transceiver 608 to support the physical interface between the telecommunications device and the network.
Transceiver 608 may be a radio transceiver, or it may be capable of driving a wired connection such as a standard twisted pair telephone line modem, DSL modem, cable modem, fiber optic modem, Ethernet modem, T1 or T3 modem, or any other modem suitable to support a physical interface to the network. The remaining peripheral device shown in FIG. 6 is a user interface 610. The user interface may include any number of devices including, for example, a keypad, display, mouse, joystick, and the like. These devices allow telecommunications device users to perform various tasks, such as conducting electronic commerce over the Internet and, in the case of authentication devices, authorizing electronic commerce conducted by other devices. The way in which the merchant system 104, registry server 302, and telecommunications device are actually implemented will vary depending on the particular application and design constraints imposed on the overall system. Those skilled in the art will recognize the interchangeability of hardware, firmware, and software configurations in these situations, and how best to implement the described functionality for each particular application. FIG. 7 is a functional block diagram of one aspect of a merchant system. Merchant system 104 includes a module 704 for conducting electronic commerce with a telecommunications device on the consumer's account and a module 702 for obtaining authorization from the consumer's authenticator prior to completing the electronic commerce transaction. FIG. 8 is a functional block diagram of one aspect of a registry server. The registry server 302 includes a module 804 for interfacing with a database of telecommunications devices authorized to conduct electronic commerce on the consumer's account, and a module 802 for maintaining the database by mapping each authorized telecommunications device to information identifying the consumer's authenticator. FIG.
9 is a functional block diagram of one aspect of the authentication device. The authentication device 204 includes a module 902 for receiving a request from the merchant system to authorize electronic commerce between the merchant system and the telecommunications device on the consumer's account, and a module 904 for responding to the request. FIG. 10 is a functional block diagram of one aspect of a telecommunications device. The telecommunications device 102 includes a module 1002 for generating a request to a registry server to add the telecommunications device to a database authorizing the telecommunications device to conduct electronic commerce with a merchant system on the consumer's account, the request including information identifying the consumer's authentication device, and a module for sending the request to the registry server. The various illustrative logic blocks, modules, circuits, elements, and/or components described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gates, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. The processor may also be implemented as a combination of computing components, e.g., a DSP and microprocessor combination, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The methods or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in software modules executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Accordingly, the claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those skilled in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”
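The end-to-end flow of FIG. 3 — registration check first, consumer approval second, billing last — can be summarized as a short sketch. The function and its parameters are illustrative assumptions only, not part of the disclosed systems:

```python
# Illustrative merchant-side flow (all names hypothetical): complete the
# transaction only if the device is registered AND the consumer approves.
def process_transaction(is_registered, request_approval,
                        phone_number, device_ip, order):
    # Step 1: query the registry server for an entry mapping the device's
    # IP address to the consumer's mobile phone number.
    if not is_registered(phone_number, device_ip):
        return "rejected: device not registered"
    # Step 2: send an authorization request (e.g., via SMS) to the
    # consumer's mobile phone and await the response.
    if not request_approval(phone_number, order):
        return "rejected: consumer declined"
    # Step 3: complete the transaction; the charges are billed to the
    # consumer's mobile phone account.
    return "completed: charged to " + phone_number
```

The two callables stand in for the registry query and the wireless authorization exchange, which the disclosure leaves to the particular network implementation.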
A graphics processor and method for executing a graphics program as a plurality of threads where each sample to be processed by the program is assigned to a thread. Although threads share processing resources within the programmable graphics processor, the execution of each thread can proceed independent of any other threads. For example, instructions in a second thread are scheduled for execution while execution of instructions in a first thread are stalled waiting for source data. Consequently, a first received sample (assigned to the first thread) may be processed after a second received sample (assigned to the second thread). A benefit of independently executing each thread is improved performance because a stalled thread does not prevent the execution of other threads.
The invention claimed is:1. A method of rendering a scene using a graphics processor comprising:configuring a multithreaded processing unit within the graphics processor to enable processing of samples independent of an order in which the samples are received;processing the samples independent of the order in which the samples are received by the multithreaded processing unit to render at least a portion of the scene; andconfiguring the multithreaded processing unit to disable processing of samples independent of an order in which the samples are received to minimize visual artifacts in the rendered scene by preventing the occurrence of position hazards.2. The method of claim 1, further comprising:processing a portion of the samples in the order in which the samples are received by the multithreaded processing unit.3. The method of claim 2, wherein the portion of the samples includes samples within intersecting objects.4. The method of claim 2, wherein the portion of the samples includes samples within coincident objects.5. A graphics processor for multithreaded execution of program instructions comprising:at least one multithreaded processing unit configured to receive samples in a first order to be processed by program instructions associated with at least one thread including:a scheduler configured to receive the program instructions, determine availability of source data, and schedule the program instructions for execution to process the samples in a second order independent of the first order;a resource tracking unit configured to track the availability of the source data; anda dispatcher configured to output the program instructions in the second order to be executed by the at least one multithreaded processing unit.6. The graphics processor of claim 5, wherein the samples include at least one of vertices, primitives, surfaces, fragments and pixels.7. 
The graphics processor of claim 5, further comprising a thread control buffer configured to store program counters, each program counter associated with one of the at least one thread.8. The graphics processor of claim 5, further comprising an instruction cache configured to store the program instructions.9. The graphics processor of claim 5, wherein the scheduler is configured to schedule the program instructions for execution to process the samples in a second order where the second order is the same as the first order.10. The graphics processor of claim 5, wherein the scheduler is configured to schedule the program instructions for execution after ordering the program instructions.11. The graphics processor of claim 10, wherein the ordering is based on the number of cycles each of the program instructions has been in an instruction window unit.12. The graphics processor of claim 5, wherein the scheduler is configured to determine availability of computation resources within the at least one multithreaded processing unit.13. The graphics processor of claim 12, wherein the resource tracking unit is configured to track the availability of the computation resources.14. The graphics processor of claim 5, wherein the at least one multithreaded processing unit is configured to allocate storage resources to the at least one thread.15. The graphics processor of claim 5, wherein the at least one multithreaded processing unit is configured to maintain thread state data for the at least one thread.16. The graphics processor of claim 15, wherein a portion of the thread state data indicates whether the thread is either assigned to a sample or is available to be assigned to a sample.17. 
A computing system comprising:a host processor;a host memory, the host memory storing programs for the host processor;a system interface configured to interface with the host processor; anda graphics processor for multithreaded execution of program instructions including:at least one multithreaded processing unit configured to receive samples in a first order to be processed by program instructions associated with at least one thread including:a scheduler configured to receive the program instructions, determine availability of source data, and schedule the program instructions for execution in a second order independent of the first order;a resource tracking unit configured to track the availability of the source data; anda dispatcher configured to output the program instructions in the second order to be executed by the at least one multithreaded processing unit.18. The computing system of claim 17, wherein the host memory is configured to interface with the system interface.19. The computing system of claim 17, wherein the host memory is configured to directly interface with the host processor.20. A method of processing a first program instruction associated with a first thread and a second program instruction associated with a second thread comprising:receiving a first sample to be processed by the first program instruction associated with the first thread before receiving a second sample to be processed by the second program instruction associated with the second thread;determining that first source data required to process the first program instruction are not available;determining that second source data required to process the second program instruction are available; anddispatching the second program instruction to process the second sample in an execution unit prior to dispatching the first program instruction to process the first sample in the execution unit.21.
The method of claim 20, further comprising determining that a position hazard does not exist between a position of the first sample and a position of any other sample being processed by a program instruction in the execution unit.22. The method of claim 21, further comprising, prior to the determining that a position hazard does not exist, disabling processing of samples independent of an order in which the samples are received using a programmable mode.23. The method of claim 20, further comprising determining that a position hazard does not exist between a position of the second sample and a position of any other sample being processed by a program instruction in the execution unit.24. The method of claim 20, further comprising:retaining as state information the position of the first sample received to be processed by the first program instruction associated with the first thread; andupdating the state information when the first thread has completed execution.25. The method of claim 20, further comprising:retaining as state information the position of the second sample received to be processed by the second program instruction associated with the second thread; andupdating the state information when the second thread has completed execution.26. The method of claim 20, further comprising allocating storage resources to the second thread.27. The method of claim 20, further comprising allocating storage resources to the first thread.28. 
A method of using a function call to configure a graphics processor comprising:detecting that a multithreaded processing unit within the graphics processor supports processing of samples independent of an order in which the samples are received for at least one sample type;issuing a function call to configure the multithreaded processing unit to enable processing of samples independent of an order in which the samples are received for the at least one sample type; andending the function call during rendering of an output image to disable processing of samples independent of the order in which they are received to prevent image artifacts due to position hazards.29. A method as claimed in claim 28 wherein the multithreaded processing unit is configured to process several output pixel locations distributed across the output image.30. A method as claimed in claim 28 wherein the multithreaded processing unit is configured to process several adjacent output pixel locations within the output image.31. A method as claimed in claim 28 wherein the multithreaded processing unit is configured to process regions of four adjacent pixels arranged in a square with the squares located within the output image.32. A method as claimed in claim 28 wherein a separate one of the function calls may be issued for each of the multithreaded processing units.33. A method as claimed in claim 1 wherein each thread of the multithreaded processing unit may be separately configured to selectively enable or disable processing of samples independent of the order in which the samples are received.
FIELD OF THE INVENTIONOne or more aspects of the invention generally relate to multithreaded processing, and more particularly to processing graphics data in a programmable graphics processor.BACKGROUNDCurrent graphics data processing is exemplified by systems and methods developed to perform a specific operation on several graphics data elements, e.g., linear interpolation, tessellation, texture mapping, depth testing. Traditionally graphics processing systems were implemented as fixed function computation units and more recently the computation units are programmable to perform a limited set of operations. In either system, the graphics data elements are processed in the order in which they are received by the graphics processing system. Within the graphics processing system, when a resource, e.g., computation unit or data, required to process a graphics data element is unavailable, the processing of the element stalls, i.e., does not proceed, until the resource becomes available. Because the system is pipelined, the stall propagates back through the pipeline, stalling the processing of later received elements that may not require the resource and reducing the throughput of the system.For the foregoing reasons, there is a need for improved approaches to processing graphics data elements.SUMMARYThe present invention is directed to a system and method that satisfies the need for a programmable graphics processor that supports processing of graphics data elements in an order independent from the order in which the graphics data elements are received by the programmable graphics processing pipeline within the programmable graphics processor.Various embodiments of the invention include a computing system comprising a host processor, a host memory, a system interface configured to interface with the host processor, and the programmable graphics processor for multithreaded execution of program instructions. 
The graphics processor includes at least one multithreaded processing unit configured to receive samples in a first order to be processed by program instructions associated with at least one thread. Each multithreaded processing unit includes a scheduler configured to receive the program instructions, determine availability of source data, and schedule the program instructions for execution in a second order independent of the first order. Each multithreaded processing unit further includes a resource tracking unit configured to track the availability of the source data, and a dispatcher configured to output the program instructions in the second order to be executed by the at least one multithreaded processing unit.Further embodiments of the invention include an application programming interface for a programmable graphics processor comprising a function call to configure a multithreaded processing unit within the programmable graphics processor to enable processing of samples independent of an order in which the samples are received.Yet further embodiments of the invention include an application programming interface for a programmable graphics processor comprising a function call to configure a multithreaded processing unit within the programmable graphics processor to disable processing of samples independent of an order in which the samples are received.Various embodiments of a method of the invention include processing a first program instruction associated with a first thread and a second program instruction associated with a second thread. A first sample to be processed by a program instruction associated with a first thread is received before a second sample to be processed by a program instruction associated with a second thread is received. First source data required to process the program instruction associated with the first thread are determined to be not available. 
Second source data required to process the program instruction associated with the second thread are determined to be available. The program instruction associated with the second thread to process the second sample in the execution unit is dispatched prior to dispatching the program instruction associated with the first thread to process the first sample in the execution unit.

Further embodiments of a method of the invention include using a function call to configure the graphics processor. Support for processing samples of at least one sample type independent of an order in which the samples are received by a multithreaded processing unit within the graphics processor is detected. The function call to configure the multithreaded processing unit within the graphics processor to enable processing of the samples independent of an order in which the samples are received is issued for the at least one sample type.

Yet further embodiments of a method of the invention include rendering a scene using the graphics processor. The multithreaded processing unit within the graphics processor is configured to enable processing of samples independent of an order in which the samples are received. The multithreaded processing unit within the graphics processor processes the samples independent of the order in which the samples are received to render at least a portion of the scene.

BRIEF DESCRIPTION OF THE VARIOUS VIEWS OF THE DRAWINGS

Accompanying drawing(s) show exemplary embodiment(s) in accordance with one or more aspects of the present invention; however, the accompanying drawing(s) should not be taken to limit the present invention to the embodiment(s) shown, but are for explanation and understanding only.

FIG. 1 illustrates one embodiment of a computing system according to the invention including a host computer and a graphics subsystem;

FIG. 2 is a block diagram of an embodiment of the Programmable Graphics Processing Pipeline of FIG. 1;

FIG.
3 is a conceptual diagram of the relationship between a program and threads;

FIG. 4 is a block diagram of an embodiment of the Execution Pipeline of FIG. 2;

FIGS. 5A and 5B illustrate embodiments of methods utilizing the Execution Pipeline illustrated in FIG. 4;

FIG. 6 illustrates an embodiment of a method utilizing the Execution Pipeline illustrated in FIG. 4;

FIGS. 7A, 7B, and 7C illustrate embodiments of methods utilizing the Computing System illustrated in FIG. 1.

DISCLOSURE OF THE INVENTION

The current invention involves new systems and methods for processing graphics data elements in an order independent from the order in which the graphics data elements are received by a multithreaded processing unit within a graphics processor.

FIG. 1 is an illustration of a Computing System generally designated 100 and including a Host Computer 110 and a Graphics Subsystem 170. Computing System 100 may be a desktop computer, server, laptop computer, palm-sized computer, tablet computer, game console, cellular telephone, computer based simulator, or the like. Host Computer 110 includes Host Processor 114, which may include a system memory controller to interface directly to Host Memory 112 or may communicate with Host Memory 112 through a System Interface 115. System Interface 115 may be an I/O (input/output) interface or a bridge device including the system memory controller to interface directly to Host Memory 112. Examples of System Interface 115 known in the art include Intel(R) Northbridge and Intel(R) Southbridge.

Host Computer 110 communicates with Graphics Subsystem 170 via System Interface 115 and a Graphics Interface 117 within a Graphics Processor 105. Data received at Graphics Interface 117 can be passed to a Front End 130 or written to a Local Memory 140 through Memory Controller 120.
Graphics Processor 105 uses graphics memory to store graphics data and program instructions, where graphics data is any data that is input to or output from components within the graphics processor. Graphics memory can include portions of Host Memory 112, Local Memory 140, register files coupled to the components within Graphics Processor 105, and the like.

Graphics Processor 105 includes, among other components, Front End 130 that receives commands from Host Computer 110 via Graphics Interface 117. Front End 130 interprets and formats the commands and outputs the formatted commands and data to an IDX (Index Processor) 135. Some of the formatted commands are used by Programmable Graphics Processing Pipeline 150 to initiate processing of data by providing the location of program instructions or graphics data stored in memory. IDX 135, Programmable Graphics Processing Pipeline 150 and a Raster Analyzer 160 each include an interface to Memory Controller 120 through which program instructions and data can be read from memory, e.g., any combination of Local Memory 140 and Host Memory 112. When a portion of Host Memory 112 is used to store program instructions and data, the portion of Host Memory 112 can be uncached so as to increase performance of access by Graphics Processor 105.

IDX 135 optionally reads processed data, e.g., data written by Raster Analyzer 160, from memory and outputs the data, processed data and formatted commands to Programmable Graphics Processing Pipeline 150. Programmable Graphics Processing Pipeline 150 and Raster Analyzer 160 each contain one or more programmable processing units to perform a variety of specialized functions. Some of these functions are table lookup, scalar and vector addition, multiplication, division, coordinate-system mapping, calculation of vector normals, tessellation, calculation of derivatives, interpolation, and the like.
Programmable Graphics Processing Pipeline 150 and Raster Analyzer 160 are each optionally configured such that data processing operations are performed in multiple passes through those units or in multiple passes within Programmable Graphics Processing Pipeline 150. Programmable Graphics Processing Pipeline 150 and a Raster Analyzer 160 also each include a write interface to Memory Controller 120 through which data can be written to memory.

In a typical implementation Programmable Graphics Processing Pipeline 150 performs geometry computations, rasterization, and pixel computations. Therefore Programmable Graphics Processing Pipeline 150 is programmed to operate on surface, primitive, vertex, fragment, pixel, sample or any other data. A fragment is at least a portion of a pixel, i.e., a pixel includes at least one fragment. For simplicity, the remainder of this description will use the term "samples" to refer to surfaces, primitives, vertices, pixels, or fragments.

Samples output by Programmable Graphics Processing Pipeline 150 are passed to a Raster Analyzer 160, which optionally performs near and far plane clipping and raster operations, such as stencil, z test, and the like, and saves the results or the samples output by Programmable Graphics Processing Pipeline 150 in Local Memory 140. When the data received by Graphics Subsystem 170 has been completely processed by Graphics Processor 105, an Output 185 of Graphics Subsystem 170 is provided using an Output Controller 180. Output Controller 180 is optionally configured to deliver data to a display device, network, electronic control system, other Computing System 100, other Graphics Subsystem 170, or the like.

FIG. 2 is an illustration of Programmable Graphics Processing Pipeline 150 of FIG. 1.
At least one set of samples is output by IDX 135 and received by Programmable Graphics Processing Pipeline 150 and the at least one set of samples is processed according to at least one program, the at least one program including graphics program instructions. A program can process one or more sets of samples. Conversely, a set of samples can be processed by a sequence of one or more programs.

Samples, such as surfaces, primitives, or the like, are received from IDX 135 by Programmable Graphics Processing Pipeline 150 and stored in a Vertex Input Buffer 220 in a register file, FIFO (first in first out), cache, or the like (not shown). The samples are broadcast to Execution Pipelines 240, four of which are shown in the figure. Each Execution Pipeline 240 includes at least one multithreaded processing unit, to be described further herein. The samples output by Vertex Input Buffer 220 can be processed by any one of the Execution Pipelines 240. A sample is accepted by an Execution Pipeline 240 when a processing thread within the Execution Pipeline 240 is available, as described further herein. Each Execution Pipeline 240 signals to Vertex Input Buffer 220 when a sample can be accepted or when a sample cannot be accepted. In one embodiment Programmable Graphics Processing Pipeline 150 includes a single Execution Pipeline 240 containing one multithreaded processing unit. In an alternative embodiment, Programmable Graphics Processing Pipeline 150 includes a plurality of Execution Pipelines 240.

Execution Pipelines 240 can receive first samples, such as higher-order surface data, and tessellate the first samples to generate second samples, such as vertices. Execution Pipelines 240 can be configured to transform the second samples from an object-based coordinate representation (object space) to an alternatively based coordinate system such as world space or normalized device coordinates (NDC) space.
Each Execution Pipeline 240 communicates with Texture Unit 225 using a read interface (not shown in FIG. 2) to read program instructions and graphics data such as texture maps from Local Memory 140 or Host Memory 112 via Memory Controller 120 and a Texture Cache 230. Texture Cache 230 is used to improve memory read performance by reducing read latency. In an alternate embodiment Texture Cache 230 is omitted. In another alternate embodiment, a Texture Unit 225 is included in each Execution Pipeline 240. In yet another alternate embodiment program instructions are stored within Programmable Graphics Processing Pipeline 150.

Execution Pipelines 240 output processed samples, such as vertices, that are stored in a Vertex Output Buffer 260 in a register file, FIFO, cache, or the like (not shown). Processed vertices output by Vertex Output Buffer 260 are received by a Primitive Assembly/Setup 205. This unit calculates parameters, such as deltas and slopes, to rasterize the processed vertices. Primitive Assembly/Setup 205 outputs parameters and samples, such as vertices, to Raster Unit 210. The Raster Unit 210 performs scan conversion on samples, such as vertices, and outputs samples, such as fragments, to a Pixel Input Buffer 215. Alternatively, Raster Unit 210 resamples processed vertices and outputs additional vertices to Pixel Input Buffer 215.

Pixel Input Buffer 215 outputs the samples to each Execution Pipeline 240. Samples, such as pixels and fragments, output by Pixel Input Buffer 215 are each processed by only one of the Execution Pipelines 240. Pixel Input Buffer 215 determines which one of the Execution Pipelines 240 to output each sample to depending on an output pixel position, e.g., (x,y), associated with each sample. In this manner, each sample is output to the Execution Pipeline 240 designated to process samples associated with the output pixel position.
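The position-based routing described above can be sketched as follows. This is an illustrative sketch only; the particular interleave pattern and the function name are assumptions, since the description requires only that every sample for a given output pixel position always reaches the same designated Execution Pipeline 240.

```python
# Illustrative sketch: route a sample to the Execution Pipeline 240 designated
# for its output pixel position (x, y). The modulo interleave is an assumed
# mapping; any fixed position-to-pipeline mapping satisfies the description.

NUM_PIPELINES = 4  # four Execution Pipelines 240 are shown in FIG. 2


def select_pipeline(x: int, y: int) -> int:
    """Return the index of the pipeline designated for output position (x, y).

    The mapping is fixed, so repeated samples for the same position always
    go to the same pipeline.
    """
    return (x + y * 2) % NUM_PIPELINES
```

Because the mapping is a pure function of the output pixel position, all pixel type samples for one position are serialized through one pipeline, which is what allows per-position ordering to be maintained.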
In an alternate embodiment, each sample output by Pixel Input Buffer 215 is processed by an available Execution Pipeline 240.

A sample is accepted by an Execution Pipeline 240 when a processing thread within the Execution Pipeline 240 is available, as described further herein. Each Execution Pipeline 240 signals to Pixel Input Buffer 215 when a sample can be accepted or when a sample cannot be accepted. Program instructions associated with a thread configure programmable computation units within an Execution Pipeline 240 to perform operations such as texture mapping, shading, blending, and the like. Processed samples are output from each Execution Pipeline 240 to a Pixel Output Buffer 270. Pixel Output Buffer 270 optionally stores the processed samples in a register file, FIFO, cache, or the like (not shown). The processed samples are output from Pixel Output Buffer 270 to Raster Analyzer 160.

Execution Pipelines 240 are optionally configured using program instructions read by Texture Unit 225 such that data processing operations are performed in multiple passes through at least one multithreaded processing unit, to be described further herein, within Execution Pipelines 240. Intermediate data generated during multiple passes can be stored in graphics memory.

FIG. 3 is a conceptual diagram illustrating the relationship between a program and threads. A single program is used to process several sets of samples. Each program, such as a vertex program or shader program, includes a sequence of program instructions such as a Sequence 330 of program instructions 331 to 344. The at least one multithreaded processing unit within an Execution Pipeline 240 supports multithreaded execution.
Therefore the program instructions in instruction Sequence 330 can be used by the at least one multithreaded processing unit to process each sample or each group of samples independently, i.e., the at least one multithreaded processing unit may process each sample asynchronously relative to other samples. For example, each fragment or group of fragments within a primitive can be processed independently from the other fragments or from the other groups of fragments within the primitive. Likewise, each vertex within a surface can be processed independently from the other vertices within the surface. For a set of samples being processed using the same program, the sequence of program instructions associated with each thread used to process each sample within the set will be identical. However, it is possible that, during execution, the threads processing some of the samples within a set will diverge following the execution of a conditional branch instruction. After the execution of a conditional branch instruction, the sequence of executed instructions associated with each thread processing samples within the set may differ.

In FIG. 3 program instructions within instruction Sequence 330 are stored in graphics memory, i.e., Host Memory 112, Local Memory 140, register files coupled to the components within Graphics Processor 105, and the like. Each program counter (0 through 13) in instruction Sequence 330 corresponds to a program instruction within instruction Sequence 330. The program counters are conventionally numbered sequentially and can be used as an index to locate a specific program instruction within Sequence 330. The first instruction 331 in the sequence 330 is the program instruction corresponding to program counter 0.
A base address, corresponding to the graphics memory location where the first instruction 331 in a program is stored, can be used in conjunction with a program counter to determine the location where a program instruction corresponding to the program counter is stored.

In this example, program instructions within Sequence 330 are associated with three threads. A Thread 350, a Thread 360 and a Thread 370 are each assigned to a different sample and each thread is uniquely identified by a thread identification code. A program instruction within Sequence 330 is associated with a thread using a program counter that is stored as a portion of thread state data, as described further herein. Thread 350 thread state data includes a program counter of 1 as shown in Sequence 330. The program counter associated with Thread 350 is a pointer to the program instruction in Sequence 330 corresponding to program counter 1 and stored at location 332. The instruction stored at location 332 is the next instruction to be used to process the sample assigned to Thread 350. Alternatively, an instruction stored at location 332 is the most recently executed instruction to process the sample assigned to Thread 350.

The thread state data for Thread 360 and Thread 370 each include a program counter of 11, as shown in FIG. 3, referencing the program instruction corresponding to program counter 11 in Sequence 330 and stored at location 342. Program counters associated with threads to process samples within a primitive, surface, or the like, are not necessarily identical because the threads can be executed independently.
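The base-address arithmetic described above can be sketched as follows. The instruction size and flat byte addressing are assumptions for illustration; the description states only that a base address combined with a program counter identifies where an instruction is stored.

```python
# Illustrative sketch: locate a program instruction in graphics memory from a
# thread's program counter. INSTRUCTION_SIZE (bytes per instruction) is an
# assumed value, not taken from the description.

INSTRUCTION_SIZE = 8  # assumed fixed instruction width in bytes


def instruction_address(base_address: int, program_counter: int) -> int:
    """Return the graphics-memory address of the instruction identified by
    program_counter, given the base address of the program's first instruction."""
    return base_address + program_counter * INSTRUCTION_SIZE
```

For example, under these assumptions the instruction at program counter 11 (location 342 in Sequence 330) would lie 11 instruction widths past the base address of instruction 331.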
When branch instructions are not used, Thread 350, Thread 360 and Thread 370 each execute all of the program instructions in Sequence 330.

The number of threads that can be executed simultaneously is limited to a predetermined number in each embodiment and is related to the number of Execution Pipelines 240, the amount of storage required for thread state data, the latency of Execution Pipelines 240, and the like. Each sample is a specific type, e.g., primitive, vertex, or pixel, corresponding to a program type. A primitive type sample, e.g., primitive, is processed by a primitive program, a vertex type sample, e.g., surface or vertex, is processed by a vertex program, and a pixel type sample, e.g., fragment or pixel, is processed by a shader program. Likewise, a primitive thread is associated with program instructions within a primitive program, a vertex thread is associated with program instructions within a vertex program, and a pixel thread is associated with program instructions within a shader program.

A number of threads of each thread type that may be executed simultaneously is predetermined in each embodiment. Therefore, not all samples within a set of samples of a type can be processed simultaneously when the number of threads of the type is less than the number of samples. Conversely, when the number of threads of a type exceeds the number of samples of the type within a set, more than one set can be processed simultaneously. Furthermore, when the number of threads of a type exceeds the number of samples of the type within one or more sets, more than one program of the type can be executed on the one or more sets and the thread state data can include data indicating the program associated with each thread.

FIG. 4 is an illustration of an Execution Pipeline 240 containing at least one Multithreaded Processing Unit 400. An Execution Pipeline 240 can contain a plurality of Multithreaded Processing Units 400.
Within each Multithreaded Processing Unit 400, a Thread Control Buffer 420 receives samples from Pixel Input Buffer 215 or Vertex Input Buffer 220. Thread Control Buffer 420 includes storage resources to retain thread state data for a subset of the predetermined number of threads. In one embodiment Thread Control Buffer 420 includes storage resources for each of at least two thread types, where the at least two thread types can include pixel, primitive, and vertex. At least a portion of Thread Control Buffer 420 is a register file, FIFO, circular buffer, or the like. Thread state data for a thread can include, among other things, a program counter, a busy flag that indicates if the thread is either assigned to a sample or available to be assigned to a sample, a pointer to the source sample to be processed by the instructions associated with the thread or the output pixel position and output buffer ID of the sample to be processed, and a pointer specifying a destination location in Vertex Output Buffer 260 or Pixel Output Buffer 270. Additionally, thread state data for a thread assigned to a sample can include the sample type, e.g., pixel, vertex, primitive, or the like.

The source sample is stored in either Pixel Input Buffer 215 or Vertex Input Buffer 220. When a thread is assigned to a sample, the thread is allocated storage resources to retain intermediate data generated during execution of program instructions associated with the thread. The thread identification code for a thread may be the address of a location in Thread Control Buffer 420 in which the thread state data for the thread is stored. In one embodiment, priority is specified for each thread type and Thread Control Buffer 420 is configured to assign threads to samples or allocate storage resources based on the priority assigned to each thread type.
In an alternate embodiment, Thread Control Buffer 420 is configured to assign threads to samples or allocate storage resources based on an amount of sample data in Pixel Input Buffer 215 and another amount of sample data in Vertex Input Buffer 220.

An Instruction Cache 410 reads one or more thread entries, each containing thread state data, from Thread Control Buffer 420. Instruction Cache 410 may read thread entries to process a group of samples. For example, in one embodiment a group of samples, e.g., a number of vertices defining a primitive, four adjacent fragments arranged in a square, or the like, are processed simultaneously. In the one embodiment computed values such as derivatives are shared within the group of samples, thereby reducing the number of computations needed to process the group of samples compared with processing the group of samples without sharing the computed values.

In an embodiment of Multithreaded Processing Unit 400, priority is specified for each thread type and Instruction Cache 410 is configured to read thread entries based on the priority assigned to each thread type. In another embodiment, Instruction Cache 410 is configured to read thread entries based on the amount of sample data in Pixel Input Buffer 215 and the amount of sample data in Vertex Input Buffer 220. Instruction Cache 410 determines if the program instructions corresponding to the program counters and sample type included in the thread state data for each thread entry are available in Instruction Cache 410. When a requested program instruction is not available in Instruction Cache 410 it is read (possibly along with other program instructions stored in adjacent memory locations) from graphics memory.
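The cache lookup just described can be sketched as follows. The line granularity (fetching a fixed-width group of neighboring instructions on a miss) is an assumption; the description says only that adjacent instructions may possibly be read along with the missing one.

```python
# Illustrative sketch of the Instruction Cache 410 behavior described above:
# a hit returns the cached instruction; a miss reads the instruction from
# graphics memory, possibly along with instructions stored in adjacent
# memory locations (the fetch_width grouping here is an assumption).

class InstructionCache:
    def __init__(self, graphics_memory, fetch_width=4):
        self.graphics_memory = graphics_memory  # maps program counter -> instruction
        self.fetch_width = fetch_width          # neighbors fetched per miss (assumed)
        self.lines = {}                         # cached program counter -> instruction

    def fetch(self, program_counter):
        if program_counter not in self.lines:
            # Cache miss: read the instruction and its aligned neighbors
            # from graphics memory into the cache.
            start = program_counter - (program_counter % self.fetch_width)
            for pc in range(start, start + self.fetch_width):
                if pc in self.graphics_memory:
                    self.lines[pc] = self.graphics_memory[pc]
        return self.lines[program_counter]
```

Fetching neighbors on a miss models why an instruction for a later-received sample may already be resident (a hit) while an earlier sample's instruction is still being read from memory, which is the source of the out-of-order output described next.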
In an alternate embodiment Instruction Cache 410 can be shared between Multithreaded Processing Units 400 within Execution Pipeline 240.

The program instructions corresponding to the program counters from the one or more thread entries are output by Instruction Cache 410 to a scheduler, Instruction Scheduler 430. A cache miss in Instruction Cache 410 can result in instructions being output by Instruction Cache 410 in an order which is different than the order in which the samples to be processed by the instructions were received by Thread Control Buffer 420. For example, when an instruction to process a first received sample is not stored in Instruction Cache 410 and an instruction to process a second received sample is stored in Instruction Cache 410, the instruction to process the second received sample will be output by Instruction Cache 410 to Instruction Scheduler 430 while the instruction to process the first received sample is read from graphics memory.

The number of instructions output each clock cycle from Instruction Cache 410 to Instruction Scheduler 430 can vary depending on whether or not the instructions are available in the cache. The number of instructions that can be output each clock cycle from Instruction Cache 410 to Instruction Scheduler 430 may also vary between different embodiments. In one embodiment, Instruction Cache 410 outputs one instruction per clock cycle to Instruction Scheduler 430. In an alternate embodiment, Instruction Cache 410 outputs a predetermined number of instructions per clock cycle to Instruction Scheduler 430.

Instruction Scheduler 430 contains storage resources to store a predetermined number of instructions in an IWU (instruction window unit) 435. Each clock cycle, Instruction Scheduler 430 evaluates whether any instruction within the IWU 435 can be executed based on the availability of computation resources in an Execution Unit 470 and source data stored in a Register File 450.
An instruction specifies the location of source data needed to execute the instruction. In addition to Register File 450, other locations of source data include Pixel Input Buffer 215, Vertex Input Buffer 220, locations in Local Memory 140, locations in Host Memory 112, and the like. A resource tracking unit, Resource Scoreboard 460, tracks the status of source data stored in registers in Register File 450. Specifically, registers scheduled to be written during processing, i.e., destination registers, are marked as "write pending". When a destination register is written, its status is updated and the "write pending" mark is removed. In one embodiment a destination register is marked as "write pending" by setting a bit in Resource Scoreboard 460 corresponding to the destination register. The bit is cleared when the destination register is written, indicating that data stored in the register is available to be used as source data. Similarly, Resource Scoreboard 460 may also track the availability of the computation resources in an Execution Unit 470.

During the evaluation process, in one embodiment Instruction Scheduler 430 is configured to give priority to threads based on thread age (lowest program counter or greatest number of clock cycles resident in IWU 435). A CU (Comparison Unit) 433 is used to compare program counters. In an alternate embodiment, in addition to program counters, thread state data such as stack depths, nesting levels, subroutine calls, or the like are used to determine thread age. An STU (Scheduling Timeout Unit) 437 is used to count the number of consecutive clock cycles each instruction in IWU 435 is resident in IWU 435. In one embodiment, priority is specified for each thread type and Instruction Cache 410 is configured to read thread entries based on the priority assigned to each thread type.
In another embodiment, Instruction Cache 410 is configured to read thread entries based on the amount of sample data in Pixel Input Buffer 215 and the amount of sample data in Vertex Input Buffer 220.

When Instruction Scheduler 430 determines which instructions and associated threads will be executed, Instruction Scheduler 430 outputs at least one instruction to a dispatcher, Instruction Dispatcher 440, updates destination register status and computation resource availability in Resource Scoreboard 460, and increments each program counter in Thread Control Buffer 420 associated with the threads associated with the at least one instruction output. In this manner, Instruction Scheduler 430 is able to schedule the execution of the instructions associated with each thread such that the processing of a sample is one or more instructions ahead of the processing of another sample. As a result of Instruction Scheduler 430 not being constrained to schedule instructions for execution on each sample within a set of data synchronously, the samples are not necessarily processed or output in the order in which they were received.

Instruction Dispatcher 440 gathers the source data specified in an instruction and outputs the instruction and source data to Execution Unit 470. Execution Unit 470 is configured by the program instruction to process samples using programmable computation units to perform operations such as linear interpolation, derivative calculation, blending, and the like, and output the processed sample to a destination specified by the instruction. The destination can be Vertex Output Buffer 260, Pixel Output Buffer 270, or Register File 450. When execution of an instruction is complete, Execution Unit 470 updates Resource Scoreboard 460 to indicate that destination registers are written and the computation resources used to process the instruction are available.
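The scoreboard bookkeeping described above can be sketched as follows. The tuple encoding of an instruction's source and destination registers is an assumption for illustration; the "write pending" marking and clearing follow the description directly.

```python
# Illustrative sketch of Resource Scoreboard 460: a destination register is
# marked "write pending" when an instruction targeting it is dispatched, and
# the mark is cleared when Execution Unit 470 writes the register. An
# instruction is ready only when none of its source registers is write pending.
# The (sources, destination) instruction encoding is an assumption.

class ResourceScoreboard:
    def __init__(self):
        self.write_pending = set()  # register numbers with writes outstanding

    def dispatch(self, instruction):
        """Dispatch (sources, destination) if all source registers are ready.

        Returns True when dispatched; False when a source is write pending.
        """
        sources, destination = instruction
        if any(reg in self.write_pending for reg in sources):
            return False                     # source data not yet available
        self.write_pending.add(destination)  # mark destination "write pending"
        return True

    def register_written(self, destination):
        """Called when the destination register is written; its data may now
        be used as source data by later instructions."""
        self.write_pending.discard(destination)
```

An instruction whose sources are all ready can be dispatched ahead of an earlier instruction that is still waiting on a pending write, which is how the samples come to be processed in an order independent of the order received.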
Likewise, Execution Unit 470 updates each program counter associated with the threads in Thread Control Buffer 420 following the execution of a loop or branch instruction. In an alternate embodiment, Resource Scoreboard 460 snoops an interface between Execution Unit 470 and Register File 450 to update register status.

When the program instructions associated with a thread have completed execution, the storage resources allocated to retain intermediate data generated during execution of the thread become available for allocation to another thread, i.e., the storage resources are deallocated and the thread is flagged as available in Thread Control Buffer 420. When a program instruction stored in Instruction Cache 410 has completed execution on each sample within the one or more sets that the program instruction is programmed to process, the program instruction is retired from Instruction Cache 410 (by being overwritten).

The occurrence of image artifacts caused by failing to maintain sample processing order for each output pixel position between frames or within a frame can be significantly reduced or eliminated by processing pixel type samples, e.g., pixels, fragments, and the like, for each output pixel location, in the order in which the pixel type samples are received. Processing the pixel type samples for each output pixel location in the order in which the pixel type samples are received can be achieved by permitting pixel type samples corresponding to each output pixel location to be processed by a dedicated Multithreaded Processing Unit 400 and by preventing the occurrence of position hazards. A position hazard exists when more than one pixel type sample corresponding to an output pixel position within an output buffer is being processed by any Multithreaded Processing Unit 400, because the order in which samples will be processed is not deterministic, i.e., is not necessarily the same as the order in which the samples are received.
In one embodiment each Multithreaded Processing Unit 400 is configured to process several output pixel locations distributed across an output image. In an alternate embodiment each Multithreaded Processing Unit 400 is configured to process several adjacent output pixel locations within the output image. In another embodiment each Multithreaded Processing Unit 400 is configured to process regions of four adjacent pixels arranged in a square, with each square distributed within the output image.

Thread Control Buffer 420 can be configured to accept only one fragment or pixel from Pixel Input Buffer 215 corresponding to each output pixel position within an output buffer and wait until the one fragment or pixel is processed before accepting another fragment or pixel corresponding to the same output pixel position within the output buffer. The output pixel position is stored as a portion of thread state data in Thread Control Buffer 420. An output buffer ID specifying a unique output buffer containing output pixel positions is also optionally stored as a portion of thread state data in Thread Control Buffer 420. A process independent of order received (PIOR) flag is used to disable the prevention of position hazards. Disabling the PIOR flag during rendering eliminates image artifacts that can be introduced when fragment or pixel processing order for each output pixel location within an output buffer is not maintained between frames or within a frame. Enabling the PIOR flag during rendering can improve performance. Furthermore, a PIOR flag may be dedicated for each thread type to selectively enable or disable PIOR for each thread type.

In an alternate embodiment each Multithreaded Processing Unit 400 is configured to process fragments and pixels corresponding to any output pixel position and Pixel Input Buffer 215 can be configured to output only one fragment or pixel corresponding to each output pixel position within an output buffer.
In the alternate embodiment Pixel Input Buffer 215 waits until the one fragment or pixel corresponding to an output pixel position within an output buffer is processed before outputting another fragment or pixel corresponding to the same output pixel position within the output buffer.

FIG. 5A illustrates an embodiment of a method utilizing Multithreaded Processing Unit 400 to dispatch program instructions to process two samples. In step 501, Thread Control Buffer 420 receives a sample to be processed by a program instruction associated with a thread from Vertex Input Buffer 220 or Pixel Input Buffer 215. In step 503, Thread Control Buffer 420 receives another sample to be processed by a program instruction associated with another thread from Vertex Input Buffer 220 or Pixel Input Buffer 215, after receiving the sample. In step 505, Instruction Scheduler 430 determines if source data required to process the program instruction associated with the thread are available, and, if so, in step 515 Instruction Scheduler 430 outputs the program instruction associated with the thread to Instruction Dispatcher 440. In step 515 Instruction Dispatcher 440 also dispatches the program instruction associated with the thread and updates the register status for destination registers. In an alternate embodiment, in step 505 Instruction Scheduler 430 also determines if a computation resource within Execution Unit 470 required to process the program instruction associated with the thread is available.

In step 517 Instruction Scheduler 430 determines if source data required to process the program instruction associated with the other thread are available, and, if so, in step 519 Instruction Scheduler 430 outputs the program instruction associated with the other thread to Instruction Dispatcher 440. In step 519 Instruction Dispatcher 440 also dispatches the program instruction associated with the other thread and updates the register status for destination registers.
If in step 517 Instruction Scheduler 430 determines source data required to process the program instruction associated with the other thread are not available, Instruction Scheduler 430 remains in step 517. In an alternate embodiment, in step 517 Instruction Scheduler also determines if a computation resource within Execution Unit 470 required to process the program instruction associated with the other thread is available.

If in step 505 Instruction Scheduler 430 determines source data required to process the program instruction associated with the thread are not available, in step 507 Instruction Scheduler 430 determines if source data required to process the program instruction associated with the other thread are available. If in step 507 Instruction Scheduler 430 determines source data required to process the program instruction associated with the other thread are not available, Instruction Scheduler 430 returns to step 505. If in step 507 Instruction Scheduler 430 determines source data required to process the program instruction associated with the other thread are available, in step 509 Instruction Scheduler 430 outputs the program instruction associated with the other thread to Instruction Dispatcher 440 prior to outputting the program instruction associated with the thread. In step 509 Instruction Dispatcher 440 also dispatches the program instruction associated with the other thread and updates the register status for destination registers.

In step 511, Instruction Scheduler 430 determines if source data required to process the program instruction associated with the thread are available, and, if so, in step 513 Instruction Scheduler 430 outputs the program instruction associated with the thread to Instruction Dispatcher 440. In step 513 Instruction Dispatcher 440 also dispatches the program instruction associated with the thread and updates the register status for destination registers.
If in step 511 Instruction Scheduler 430 determines source data required to process the program instruction associated with the thread are not available, Instruction Scheduler 430 remains in step 511. In an alternate embodiment, in step 507 Instruction Scheduler also determines if a computation resource within Execution Unit 470 required to process the program instruction associated with the other thread is available and in step 511 Instruction Scheduler also determines if a computation resource within Execution Unit 470 required to process the program instruction associated with the thread is available.

FIG. 5B illustrates an embodiment of a method utilizing Multithreaded Processing Unit 400 to process one sample. In step 520, Thread Control Buffer 420 receives the sample from Vertex Input Buffer 220 or Pixel Input Buffer 215. In step 521, Thread Control Buffer 420 determines a thread type needed to process the sample. In step 523 Thread Control Buffer 420 determines if the PIOR flag is disabled for pixel threads (used to process pixels or fragments), and, if so, in step 525 Thread Control Buffer 420 determines if a position hazard exists for the sample. If in step 525 Thread Control Buffer 420 determines a position hazard exists for the sample, Thread Control Buffer 420 remains in step 525. A position hazard exists when an output pixel position associated with a sample is equal to an output pixel position associated with another sample and an output buffer ID associated with the sample is equal to an output buffer ID associated with the other sample.

If in step 525 Thread Control Buffer 420 determines a position hazard does not exist for the sample, Thread Control Buffer 420 stores at least a portion of the output pixel position of the sample as state information.
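The position-hazard test of step 525 can be sketched in a few lines. This is an illustrative model only, not the hardware; the field names (`thread_type`, `pixel_pos`, `buffer_id`) and the dictionary-based PIOR flags are assumptions made for the sketch.

```python
# Hypothetical sketch of the step 525 hazard test: a new sample conflicts with
# an in-flight sample only when both its output pixel position and its output
# buffer ID match, and PIOR is disabled for its thread type.

def position_hazard(sample, in_flight, pior_enabled):
    """Return True if `sample` must wait before a thread is assigned."""
    if pior_enabled.get(sample["thread_type"], False):
        return False  # PIOR enabled: hazard prevention is disabled
    return any(
        s["pixel_pos"] == sample["pixel_pos"]
        and s["buffer_id"] == sample["buffer_id"]
        for s in in_flight
    )

in_flight = [{"thread_type": "pixel", "pixel_pos": (4, 7), "buffer_id": 0}]
pior = {"pixel": False, "vertex": True}

# Same position and same output buffer, PIOR disabled for pixels -> hazard.
assert position_hazard(
    {"thread_type": "pixel", "pixel_pos": (4, 7), "buffer_id": 0}, in_flight, pior)
# Same position but a different output buffer ID -> no hazard.
assert not position_hazard(
    {"thread_type": "pixel", "pixel_pos": (4, 7), "buffer_id": 1}, in_flight, pior)
```

Note that both the position and the buffer ID must match, which is why samples targeting different output buffers never block one another.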
In step 527, Thread Control Buffer 420 determines if a thread is available to process the sample in Multithreaded Processing Unit 400, and, if so, in step 530 Thread Control Buffer 420 assigns a thread to the sample. When a thread is not available in step 527, Thread Control Buffer 420 does not proceed to step 530 until a thread becomes available. In step 530 the busy flag portion of the thread state data is marked unavailable and the program counter corresponding to the first instruction to process the sample is stored in the thread state data. In step 530 Thread Control Buffer 420 also stores the position corresponding to the sample in the thread state data. In step 533 Thread Control Buffer 420 allocates storage resources for storing intermediate data generated during execution of the thread. The storage resources may be in graphics memory.

In step 535 Instruction Cache 410 fetches one or more instructions referenced by the program counter by reading the thread state data for the thread in Thread Control Buffer 420 with a busy flag indicating the thread is assigned to a sample. The one or more instructions can be located in Instruction Cache 410, a local storage resource, Local Memory 140, or Host Memory 112. Instruction Cache 410 outputs the one or more instructions to Instruction Scheduler 430. In step 537, Instruction Scheduler 430 determines if the one or more instructions can be scheduled based on source data availability, and, if not, remains in step 537. If in step 537 Instruction Scheduler 430 determines the one or more instructions can be scheduled based on source data availability, in step 540 Instruction Scheduler 430 updates the program counter stored in Thread Control Buffer 420, updates destination register status and outputs the one or more instructions to Instruction Dispatcher 440.
The program counter can be updated by outputting a modified program counter to Thread Control Buffer 420 or by outputting a value, indicating the number of the one or more scheduled instructions, to be added to the program counter. The one or more instructions are output either in parallel or serially to Instruction Dispatcher 440 as specified by Instruction Scheduler 430. Instructions within a program can be scheduled for parallel execution by Instruction Scheduler 430 when the instructions are independent from each other and parallel execution will not modify the function of the program.

In step 543 Instruction Dispatcher 440 gathers the source data specified by each of the one or more instructions and outputs the instruction and the source data to Execution Unit 470. In step 545 Execution Unit 470 executes the one or more instructions associated with the thread to process the sample. Execution Unit 470 writes processed sample data to each destination specified by the one or more instructions and updates destination register status in Resource Scoreboard 460. In step 545 Execution Unit 470 also updates the program counter associated with the thread when a branch or loop instruction is executed and the program counter is different than the program counter updated in step 540. In step 547 Execution Unit 470 determines if there are more instructions in the thread, and, if so, returns to step 535. If Execution Unit 470 determines there are no more instructions in the thread and there are no pending destination register writes associated with the thread, in step 550 the thread busy flag is marked as available in Thread Control Buffer 420 and the storage resources are effectively deallocated.

In an alternate embodiment steps 523 and 525 are completed by Instruction Scheduler 430 instead of being completed by Thread Control Buffer 420.
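The fetch/schedule/execute loop of FIG. 5B (steps 535 through 550) can be sketched as a small behavioral model. This is a sketch under simplifying assumptions: the dispatch and source-gather stages are collapsed into a single call, the scoreboard is a plain dictionary of register readiness, and all field names are hypothetical.

```python
# Behavioral sketch (not the hardware) of the FIG. 5B loop: fetch the
# instruction at the thread's program counter, schedule it only when its
# source registers are ready in the scoreboard, "execute" it by marking the
# destination register ready, and free the thread when the program ends.

def step(threads, scoreboard):
    """Advance each busy thread by at most one instruction per call."""
    for t in threads:
        if not t["busy"]:
            continue
        op, srcs, dst = t["program"][t["pc"]]          # step 535: fetch via pc
        if all(scoreboard.get(r, False) for r in srcs):  # step 537: sources ready?
            t["pc"] += 1                               # step 540: update pc
            scoreboard[dst] = True                     # step 545: result written
            if t["pc"] == len(t["program"]):           # steps 547/550: done,
                t["busy"] = False                      # thread freed

# Thread 0 stalls on r1 until thread 1 produces it.
threads = [
    {"busy": True, "pc": 0, "program": [("mul", ["r1"], "r2")]},
    {"busy": True, "pc": 0, "program": [("mov", [], "r1")]},
]
sb = {}
step(threads, sb)   # thread 0 waits on r1; thread 1 runs, sets r1, and frees
step(threads, sb)   # thread 0 now has r1, runs, and frees
assert not threads[0]["busy"] and not threads[1]["busy"]
```

The sketch shows why a later-assigned thread can complete before an earlier one: scheduling is driven purely by source-data availability, not by arrival order.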
In yet another alternate embodiment steps 523 and 525 are completed by Instruction Dispatcher 440 prior to gathering source data instead of being completed by Thread Control Buffer 420.

When, rather than processing one sample as shown in FIG. 5B, Multithreaded Processing Unit 400 receives a stream of samples, additional threads are assigned to each sample and instructions are fetched for each thread. Instruction Scheduler 430 determines which instructions can be scheduled, choosing amongst instructions that process different samples. In this manner Multithreaded Processing Unit 400 can simultaneously process one or more samples using at least one program, where each sample may be processed in an order that is independent of the order in which the samples were received by Multithreaded Processing Unit 400. Likewise, each Multithreaded Processing Unit 400 can simultaneously process one or more samples using at least one program, where each sample may be processed in an order that is independent of the order in which the samples were received by Execution Pipeline 240.

FIG. 6 illustrates an embodiment of a method utilizing Instruction Scheduler 430 to schedule the execution of program instructions to process several samples. In step 605 Instruction Scheduler 430 determines if there is an instruction in IWU 435, and, if not, Instruction Scheduler 430 waits for an instruction. If Instruction Scheduler 430 determines there is at least one instruction in IWU 435, in step 610 Instruction Scheduler 430 uses STU 437 to determine which, if any, of the instructions in IWU 435 have remained in IWU 435 for a time longer than a predetermined scheduling timeout limit. The scheduling timeout limit can be fixed or programmable. If in step 610 Instruction Scheduler 430 determines at least one of the instructions in IWU 435 has remained in IWU 435 for a time longer than the scheduling timeout limit, in step 615 the at least one instruction is removed from IWU 435.
Each location in IWU 435 that stored a removed instruction is available to receive an instruction from Instruction Cache 410. Removing an instruction to process a sample from IWU 435 will delay the processing of the sample and can result in the sample being processed after other samples that were received by Thread Control Buffer 420 after the sample.

If in step 610 Instruction Scheduler 430 determines none of the instructions in IWU 435 has remained in IWU 435 for a time longer than the scheduling timeout limit, in step 620 Instruction Scheduler 430 determines if a synchronization mode is enabled. If in step 620 Instruction Scheduler 430 determines a synchronization mode is enabled, in step 625 Instruction Scheduler 430 checks for synchronization and proceeds to step 630. In one embodiment, instructions with equal program counters are considered synchronized. In another embodiment, in addition to program counters, thread state data such as stack depths, nesting levels, subroutine calls, or the like are used to determine two or more threads are synchronized.

In step 630 Instruction Scheduler 430 determines if any of the instructions are synchronized, and, if not, in step 635 those instructions are removed from IWU 435. If in step 630 Instruction Scheduler 430 determines the instructions in IWU 435 are synchronized, Instruction Scheduler 430 proceeds to step 640. In an alternate embodiment, the instruction synchronization can be included in either Thread Control Buffer 420 or Instruction Cache 410 and instructions that are not synchronized are not output from Instruction Cache 410 to Instruction Scheduler 430.

In step 640 Instruction Scheduler 430 sorts the instructions remaining in IWU 435 by thread age, e.g., from oldest to newest. In step 645 Instruction Scheduler 430 reads from Resource Scoreboard 460 to determine source data availability.
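The candidate-selection steps of FIG. 6 described so far (605 through 645) can be sketched as a filter chain. The field names (`arrival`, `pc`, `age`) are hypothetical, and using the first remaining instruction's program counter as the synchronization reference is an assumption; the text only says that instructions with equal program counters are considered synchronized.

```python
# Sketch of the FIG. 6 selection steps: drop instructions that have waited
# past the scheduling timeout (steps 610-615), keep only synchronized
# instructions when synchronization mode is enabled (steps 620-635, modeled
# as equal program counters), then sort the survivors by thread age, oldest
# first (step 640). Larger `age` means an older thread here.

def select_candidates(iwu, now, timeout, sync_mode):
    kept = [i for i in iwu if now - i["arrival"] <= timeout]
    if sync_mode and kept:
        ref_pc = kept[0]["pc"]            # assumed sync reference
        kept = [i for i in kept if i["pc"] == ref_pc]
    return sorted(kept, key=lambda i: i["age"], reverse=True)

iwu = [
    {"arrival": 6, "pc": 8, "age": 2},
    {"arrival": 0, "pc": 12, "age": 9},   # waited 10 cycles: timed out
    {"arrival": 5, "pc": 8, "age": 4},
]
out = select_candidates(iwu, now=10, timeout=8, sync_mode=True)
assert [i["age"] for i in out] == [4, 2]  # oldest synchronized survivor first
```

The survivors are then compared against the scoreboard (steps 645 through 660), which this sketch leaves out.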
In step 650 Instruction Scheduler 430 compares the source data availability with the source data requirements of the sorted instructions. In step 655 Instruction Scheduler 430 determines which instructions can be scheduled for execution and in step 660 Instruction Scheduler 430 writes Resource Scoreboard 460 as needed to update destination register status. Unavailability of source data required to process a received sample can result in a later received sample being processed before the received sample.

In step 670 Instruction Scheduler 430 writes to Thread Control Buffer 420 to update the program counter for each thread corresponding to an instruction that was scheduled for execution. In step 680 Instruction Scheduler 430 outputs the scheduled instructions to Instruction Dispatcher 440.

Conventional graphics processing systems have not permitted the scheduling of instructions for execution on each sample within a set of samples in an order independent from the order in which the samples were received because doing so can result in image artifacts. For example, image artifacts can be introduced when fragment or pixel processing order is not maintained for each output pixel location between frames or within a frame. Specifically, intersecting or coincident primitives or surfaces can yield different results for a fragment or pixel where the computed depth values for the intersecting or coincident primitives or surfaces are equal. For example, along a line of intersection between two primitives, a fragment can be "reordered," causing an artifact when an earlier transmitted fragment is determined to be "behind" a later transmitted fragment because reordering results in the earlier transmitted fragment being processed after the later transmitted fragment. As sequential frames of the scene are viewed, the line of intersection can seem to wiggle, sparkle, or crawl.
Likewise, when two primitives are coincident and different colors, pixels within sequential frames can change color from frame to frame when fragments are "reordered". Furthermore, the color of each pixel within the two primitives is dependent on processing order such that within a frame the two primitives may appear speckled. It is possible to reduce visual artifacts by enabling and disabling the PIOR for pixel type sample processing during rendering.

FIG. 7A illustrates an embodiment of a method utilizing a function call to configure Programmable Graphics Processing Pipeline 150 to process samples independent of the order in which the samples are received for at least one sample type. In step 701 a device driver executed by Host Processor 114 detects if Programmable Graphics Processing Pipeline 150 supports the PIOR and communicates that information to an application programming interface (API). If the device driver detects that Programmable Graphics Processing Pipeline 150 supports the PIOR, in step 703 a graphics application executed by Host Processor 114 issues the function call to configure Programmable Graphics Processing Pipeline 150 within Graphics Processor 105 to process pixel and fragment samples ignoring position hazards for pixel threads, by enabling the PIOR for pixel type samples. If the device driver detects that Programmable Graphics Processing Pipeline 150 does not support the PIOR, the graphics application proceeds to step 706. In step 706 the PIOR configuration is complete. In one embodiment the function call enables the PIOR for pixel type samples. In an alternate embodiment the function call disables the PIOR for higher-order surface and vertex type samples.

FIG. 7B illustrates an embodiment of a method utilizing the PIOR to render images. In step 710 Programmable Graphics Processing Pipeline 150 is configured, as described further herein, to process pixel and fragment samples with the PIOR enabled.
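The per-sample-type function call of FIG. 7A can be sketched as a bitmask interface. Everything here is hypothetical: the names, the bit positions, and the class are illustrative stand-ins for whatever API and register layout an implementation actually uses.

```python
# Hypothetical sketch of a FIG. 7A-style API: one bit per sample type, bits
# asserted in the call enable PIOR for that type, bits negated disable it.

PIOR_PIXEL  = 1 << 0
PIOR_VERTEX = 1 << 1
PIOR_HOS    = 1 << 2   # higher-order surface samples

class PipelineConfig:
    def __init__(self):
        self.pior_mask = 0               # all PIOR disabled by default

    def set_pior(self, enable_bits, disable_bits=0):
        """Model of the function call: assert bits to enable, negate to disable."""
        self.pior_mask = (self.pior_mask | enable_bits) & ~disable_bits

    def pior_enabled(self, type_bit):
        return bool(self.pior_mask & type_bit)

cfg = PipelineConfig()
cfg.set_pior(PIOR_PIXEL)                  # before rendering opaque/intersecting objects
assert cfg.pior_enabled(PIOR_PIXEL)
cfg.set_pior(0, disable_bits=PIOR_PIXEL)  # disable before the non-opaque pass
assert not cfg.pior_enabled(PIOR_PIXEL)
```

The enable-then-disable sequence mirrors the two-pass rendering flows of FIGS. 7B and 7C, where PIOR is on for the first program and off for the second.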
In step 720 a first program is used to render intersecting objects in a scene. In step 730 Programmable Graphics Processing Pipeline 150 is configured to disable the PIOR. In step 740 a second program is used to render non-intersecting objects in the scene. An API can be used by an application to control the state of the PIOR. The API is executed by Host Processor 114 and includes a function call that is used to configure Programmable Graphics Pipeline 150 to either enable or disable the PIOR for each sample type. In one embodiment the function call can be issued with one or more bits asserted where each bit is used to enable the PIOR for a sample type. Conversely, the function call may be issued with one or more of the bits negated to disable the PIOR for one or more of the sample types. In an alternate embodiment, the function call can be issued with one or more bits asserted to toggle the state of the PIOR for one or more sample types.

A device driver executed by Host Processor 114 detects that Programmable Graphics Processing Pipeline 150 supports the PIOR and communicates that information to the API. A graphics application executed by Host Processor 114 can issue the function call to configure Programmable Graphics Processing Pipeline 150 within Graphics Processor 105 to process pixel and fragment samples ignoring position hazards, by enabling the PIOR. In one embodiment the function call communicates with Graphics Processor 105 via the device driver to modify a flag or bits in a register that is readable by Programmable Graphics Pipeline 150, and the flag or bits control the state of the PIOR.

When images are rendered with the PIOR enabled, artifacts can be introduced during the rendering of non-opaque primitives. Correct rendering of transparent primitives requires rendering all of the opaque primitives and then rendering depth sorted non-opaque primitives.
Because the non-opaque primitives are sorted prior to being received by the graphics processor, any reordering can result in blending artifacts. It is possible to reduce the occurrence of artifacts by enabling and disabling the PIOR during rendering.

FIG. 7C illustrates an embodiment of a method utilizing the PIOR to render images. In step 710 Programmable Graphics Processing Pipeline 150 is configured, as described further herein, to process pixel type samples with the PIOR enabled. In step 725 a first program is used to render opaque objects in a scene. In step 730 Programmable Graphics Processing Pipeline 150 is configured to disable the PIOR. In step 745 a second program is used to render non-opaque objects in the scene. In another example, the first program only renders non-blended opaque objects. In yet another example, the first program only renders non-intersecting opaque objects. In a further example, the first program only renders non-blended and non-intersecting opaque objects.

The invention has been described above with reference to specific embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The listing of steps in method claims does not imply performing the steps in any particular order, unless explicitly stated in the claim. Within the claims, element lettering (e.g., "a)", "b)", "i)", "ii)", etc.) does not indicate any specific order for carrying out steps or other operations; the lettering is included to simplify referring to those elements.
In described examples, a processor system includes a processor core that generates memory write requests, a cache memory (304), and a memory pipeline of the cache memory (304). The memory pipeline has a holding buffer (306), an anchor stage (302), and an RMW pipeline (300). The anchor stage (302) determines whether a data payload of a write request corresponds to a partial write. If so, the data payload is written to the holding buffer (306) and conforming data is read from a corresponding cache memory (304) address to merge with the data payload. The RMW pipeline (300) has a merge stage (312) and a syndrome generation stage (314). The merge stage (312) merges the data payload in the holding buffer (306) with the conforming data to make merged data. The syndrome generation stage (314) generates an ECC syndrome using the merged data. The memory pipeline writes the data payload and ECC syndrome to the cache memory (304).
CLAIMS

What is claimed is:

1. A processor system comprising:
a processor core configured to generate memory write requests;
a cache memory;
a memory controller having a holding buffer, and having a memory pipeline for processing the memory write requests to write data payloads of respective memory write requests to the cache memory, the memory pipeline including:
an anchor stage configured to determine whether a first write request corresponds to a partial write, and if so, to write the data payload of the first write request to the holding buffer and read a set of conforming data from an address in the cache memory targeted by the first write request; and
a read-modify-write (RMW) pipeline configured to operate on the first write request when the first write request corresponds to the partial write, the RMW pipeline including:
a merge stage configured to read the data payload from the holding buffer, and to merge the set of conforming data with the data payload such that the data payload is retained to form merged data; and
a syndrome generation stage configured to generate an error correction code (ECC) syndrome in response to the merged data;
wherein the memory controller is configured to write the data payload and the ECC syndrome to the cache memory.

2. The processor system of claim 1,
wherein the anchor stage is configured to read, with the conforming data, a conforming data ECC syndrome corresponding to the conforming data;
the RMW pipeline further including:
an error detection stage configured to determine, in response to the conforming data ECC syndrome and the conforming data, whether there are error bits in the conforming data; and
an error correction stage configured to use the conforming data ECC syndrome to correct the error bits.

3. The processor system of claim 1, wherein a depth of the holding buffer depends on a depth of the RMW pipeline.

4.
The processor system of claim 1, wherein the anchor stage is configured to, directly after the syndrome generation stage generates the ECC syndrome, write the data payload and the ECC syndrome to the cache memory.

5. The processor system of claim 1, the memory pipeline further including a previous stage that is previous to the anchor stage in the memory pipeline, the previous stage configured to:
first write determine whether the first write request corresponds to the partial write;
same target determine whether same-targeted data corresponding to a previous write request targeting the address in the cache memory is stored in the holding buffer; and
if the first write determine and same target determine actions both determine an affirmative, then write the same-targeted data to a portion of the holding buffer corresponding to the first write request, so that when the data payload is written to the holding buffer it will be written to the portion of the holding buffer and will overwrite any overlapping portions of the same-targeted data.

6. The processor system of claim 5, wherein the partial write is a type of write request in which one or more, but less than all, bytes in a portion of a data payload of the type of write request that is error corrected by a particular ECC syndrome are configured to be written to a destination memory address of the write request.

7. The processor system of claim 1, the memory pipeline further including a previous stage that is previous to the anchor stage in the memory pipeline, the previous stage configured to:
full line write determine whether the data payload corresponds to a full line write of the cache memory;
same target determine whether same-targeted data corresponding to a previous write request targeting a same address as the write request is stored in the holding buffer; and
if the full line write determine and same target determine actions both determine an affirmative, then invalidate the previous write request.

8.
The processor system of claim 1, wherein the cache memory is a level 2 cache (L2 cache), and the memory controller is an L2 cache controller.

9. The processor system of claim 1, wherein the memory pipeline includes multiple pipeline banks, each of the pipeline banks including an anchor stage and an RMW pipeline.

10. The processor system of claim 1,
wherein the data payload is comprised of multiple chunks and each of the chunks is error corrected by a corresponding ECC syndrome;
wherein determining whether the data payload corresponds to a partial write comprises determining whether one or more of the chunks corresponds to the partial write; and
wherein the memory pipeline is configured to perform RMW operations on as few chunks in the data payload corresponding to the partial write as possible while maintaining updated error correction.

11. The processor system of claim 1, wherein the memory pipeline is configured to expire a content of the holding buffer that corresponds to a data payload that the syndrome generation stage has finished processing.

12. A method of operating a processor system, the method comprising:
receiving a write request in a memory pipeline, the write request corresponding to a partial write; and
performing a read-modify-write (RMW) operation on the write request in the memory pipeline by:
writing a data payload of the write request to a holding buffer;
reading a set of conforming data from an address of the cache memory targeted by the write request;
combining the data payload with the conforming data to generate a merged data;
generating an error correction code (ECC) syndrome in response to the merged data; and
writing the data payload and the ECC syndrome to the cache memory.

13.
The method of claim 12, further comprising:
reading from the cache memory, with the conforming data, a conforming data ECC syndrome corresponding to the conforming data;
determining, in response to the conforming data and the conforming data ECC syndrome, whether there are error bits in the conforming data; and
correcting the error bits using the conforming data ECC syndrome.

14. The method of claim 12,
wherein, after the generating step, the write request is returned to a same memory pipeline stage that performed the writing to a holding buffer step and the reading a set of conforming data step; and
wherein the same memory pipeline stage performs the writing to the cache memory step.

15. The method of claim 12, further comprising:
before the writing to a holding buffer step:
determining whether the data payload requires an RMW operation;
determining whether the write request targets a same address in the cache memory as a data stored in the holding buffer that corresponds to a previous write request; and
if the determining steps both determine an affirmative, then writing the data stored in the holding buffer, as an updated data, to a portion of the holding buffer corresponding to the write request, so that when the data payload is written to the holding buffer, it will be written to the portion of the holding buffer and will overwrite any overlapping portions of the updated data.

16. The method of claim 15, wherein the partial write is a type of write request in which one or more, but less than all, bytes in a portion of a data payload of the type of write request that is error corrected by an ECC syndrome are configured to be written to a destination memory address of the write request.

17.
The method of claim 12, further comprising:
before the writing to a holding buffer step:
determining whether the data payload corresponds to writing a full line of the cache memory;
determining whether the write request targets a same address in the cache memory as a data stored in the holding buffer that corresponds to a previous write request; and
if the determining steps both determine an affirmative, invalidating the previous write request.

18. The method of claim 12, wherein the cache memory is a level 2 cache (L2 cache), and the memory pipeline is part of an L2 cache controller.

19. The method of claim 12, wherein the memory pipeline includes multiple pipeline banks, and the method is performed independently by each of the pipeline banks.

20. The method of claim 12,
wherein the data payload is comprised of multiple chunks and each of the chunks is error corrected by a corresponding ECC syndrome;
wherein the write request corresponds to the partial write if one or more of the chunks corresponds to the partial write; and
wherein the writing, reading, combining, and generating steps are performed on as few of the chunks as possible while maintaining updated error correction.

21. The method of claim 12, further comprising expiring contents of the holding buffer corresponding to the data payload after the generating step.

22. The method of claim 12, further comprising:
receiving another write request in a memory pipeline, the another write request not corresponding to the partial write; and
not performing on the another write request the writing the data payload to the holding buffer step, the reading step, the combining step, the generating step, and the writing the data payload and the ECC syndrome to the cache memory step.
PIPELINED READ-MODIFY-WRITE OPERATIONS IN CACHE MEMORY

[0001] This description relates generally to a processing device that can be formed as part of an integrated circuit, such as a system on a chip (SoC). More specifically, this description relates to improvements in management of read-modify-write operations in a memory system of such a processing device.

BACKGROUND

[0002] An SoC is an integrated circuit with multiple functional blocks on a single die, such as one or more processor cores, memory, and input and output.

[0003] Memory write requests are generated by ongoing system processes running on a processor connected to the bus fabric, such as a central processing unit (CPU) or a digital signal processor (DSP), and are directed towards a particular system memory, such as a cache memory or a main memory. Memory can be, for example, a static random-access memory (SRAM). Memory write requests include a data payload to be written, and may include a code used to correct errors in the data payload (the data payload can be considered to include the ECC syndrome). This code is referred to herein as an error correction code (ECC) syndrome. The amount of data corresponding to an ECC syndrome, which can be corrected using the ECC syndrome, is referred to herein as a chunk. A chunk can be, for example, a single word, such as a 32 byte word, or another data length.

[0004] Hierarchical memory moves data and instructions between memory blocks with different read/write response times for respective processor cores (such as a CPU or a DSP). For example, memories which are more local to respective processor cores will often have lower response times. Hierarchical memories include cache memory systems with multiple levels (such as L1, L2, and L3), in which different levels describe different degrees of locality or different average response times of the cache memories to respective processor cores.
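The chunk bookkeeping described in [0003] can be sketched as follows. The chunk size and the byte-enable representation are assumptions for illustration; the text gives a 32 byte word only as one example of a chunk.

```python
# Sketch of the "chunk" notion from [0003]: a write's byte-enable mask is
# split into ECC chunks (assumed 32 bytes each here), and a chunk needs a
# read-modify-write when some, but not all, of its bytes are enabled.

CHUNK_BYTES = 32  # assumed chunk size; one example given in the text

def partial_chunks(byte_enable):
    """Return indices of chunks that would need read-modify-write."""
    partial = []
    for c in range(0, len(byte_enable), CHUNK_BYTES):
        chunk = byte_enable[c:c + CHUNK_BYTES]
        if any(chunk) and not all(chunk):
            partial.append(c // CHUNK_BYTES)
    return partial

# 64-byte payload: chunk 0 fully written, chunk 1 only half written.
mask = [True] * 32 + [True] * 16 + [False] * 16
assert partial_chunks(mask) == [1]
```

A fully enabled chunk can be written with a freshly computed syndrome and never needs the read-modify-write path; only the mixed chunks do.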
Herein, the more local or lower response time cache memory (such as an L1 cache) is referred to as being a higher level cache memory than a less local or higher response time lower level cache memory (such as an L2 cache or L3 cache).

SUMMARY

[0005] In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory pipeline of the cache memory. The memory pipeline has a holding buffer, an anchor stage, and a Read-Modify-Write (RMW) pipeline. The anchor stage determines whether a data payload of a write request corresponds to a partial write. If so, the data payload is written to the holding buffer and conforming data is read from a corresponding cache memory address to merge with the data payload. The RMW pipeline has a merge stage and a syndrome generation stage. The merge stage merges the data payload in the holding buffer with the conforming data to make merged data. The syndrome generation stage generates an ECC syndrome using the merged data. The memory pipeline writes the data payload and ECC syndrome to the cache memory.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 is a block diagram of an example processor that is a portion of a system on a chip (SoC).

[0007] FIG. 2 is a block diagram of an example memory pipeline included within or associated with a memory controller of FIG. 1.

[0008] FIG. 3 is a block diagram of an example RMW memory pipeline, as part of a processor, for processing RMW memory transactions.

[0009] FIG. 4A shows a table providing an example list of different combinations of chunk contents in a data payload of a write request that targets a write at an address in a cache memory.

[0010] FIG. 4B shows a table providing an example list of operations to use (RMW, full write, etc.) for corresponding cases of FIG. 4A.

[0011] FIG. 4C shows a table providing an alternative example list of whether an RMW is utilized for corresponding cases of FIG.
4A.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0012] FIG. 1 is a block diagram of an example processor 100 that is a portion of an SoC 10. SoC 10 includes a processor core 102, such as a CPU or DSP, that generates new data. Processor 100 can include a clock 103, which can be part of processor core 102 or separate therefrom (separate clock not shown). Processor core 102 also generates memory read requests that request reads from, as well as memory write requests that request writes to, a data memory controller 104 (DMC) and a streaming engine 106. In some embodiments, processor core 102 generates one read request or write request per cycle of clock 103 of processor core 102. Processor core 102 is also coupled to receive instructions from a program memory controller 108 (PMC), which retrieves those instructions from program memory, such as an L1P cache 112. Streaming engine 106 facilitates processor core 102 by sending certain memory transactions and other memory-related messages that bypass DMC 104 and PMC 108.

[0013] SoC 10 has a hierarchical memory system. Each cache at each level may be unified or divided into separate data and program caches. For example, the DMC 104 may be coupled to a level 1 data cache 110 (L1D cache) to control data writes to and data reads from the L1D cache 110. Similarly, the PMC 108 may be coupled to a level 1 program cache 112 (L1P cache) to read instructions for execution by processor core 102 from the L1P cache 112. (In this example, processor core 102 does not generate writes to L1P cache 112.) A unified memory controller 114 (UMC) for a level 2 cache (L2 cache 116, such as L2 SRAM) is communicatively coupled to receive read and write memory access requests from DMC 104 and PMC 108, and to receive read requests from streaming engine 106, PMC 108, and a memory management unit 117 (MMU).
UMC 114 is communicatively coupled to pass read data (from beyond level 1 caching) to DMC 104, streaming engine 106, and PMC 108, which is then passed on to processor core 102. UMC 114 is also coupled to control writes to, and reads from, L2 cache 116, and to pass memory access requests to a level 3 cache controller 118 (L3 controller). L3 controller 118 is coupled to control writes to, and reads from, L3 cache 119. UMC 114 is coupled to receive data read from L2 cache 116 and L3 cache 119 (via L3 controller 118). UMC 114 is configured to control pipelining of memory transactions (read and write requests) for instructions and data. L3 controller 118 is coupled to control writes to, and reads from, L3 cache 119, and to mediate transactions with exterior functions 120 that are exterior to processor 100, such as other processor cores, peripheral functions of the SoC 10, and/or other SoCs. Accordingly, L3 controller 118 is a shared memory controller of the SoC 10, and L3 cache 119 is a shared cache memory of the SoC 10. Accordingly, memory transactions relating to processor 100 and exterior functions 120 pass through L3 controller 118. Memory transactions are generated by processor core 102 and are communicated towards lower level cache memory, or are generated by exterior functions 120 and communicated towards higher level cache memory.

[0014] MMU 117 provides address translation and memory attribute information to the processor core 102. It does this by looking up information in tables that are stored in memory (the connection between MMU 117 and UMC 114 enables MMU 117 to use read requests to access the memory containing the tables).

[0015] FIG. 2 is a block diagram including an example memory pipeline 200 for receiving and servicing memory transaction requests, included within or associated with the FIG. 1 UMC 114; for illustration, FIG. 2 also repeats various blocks from FIG. 1 that communicate with UMC 114.
Memory pipeline 200 includes an initial scheduling block 202 coupled to an integer number M of pipeline banks 206. Each pipeline bank 206 includes an integer number P of stages 208 and is illustrated as a vertical column below initial scheduling block 202. Different ones of the stages 208 can perform different functions, such as (without limitation) translation between a CPU address and a cache address, cache hit detection, checking for errors such as addressing or out-of-range errors, and writing to the corresponding cache memory.

[0016] DMC 104 is coupled to initial scheduling block 202 by a bus 204-1 that is a number N1 lines wide, enabling DMC 104 to provide a read or write request transferring a number N1 bits of data at a time. Streaming engine 106 is coupled to initial scheduling block 202 by a bus 204-2 that is a number N2 lines wide, enabling streaming engine 106 to provide a read request transferring a number N2 bits of data at a time. PMC 108 is coupled to initial scheduling block 202 by a bus 204-3 that is a number N3 lines wide, enabling PMC 108 to provide a read request transferring a number N3 bits of data at a time. L3 controller 118 is coupled to initial scheduling block 202 by a bus 204-4 that is a number N4 lines wide, enabling L3 controller 118 to provide a read or write request transferring a number N4 bits of data at a time. MMU 117 is coupled to initial scheduling block 202 by a bus 204-5 that is a number N5 lines wide, enabling MMU 117 to provide a read request transferring a number N5 bits of data at a time.

[0017] When a memory controller of processor 100 (such as DMC 104, streaming engine 106, PMC 108, or L3 controller 118) communicates to UMC 114 a request for a read from, or a write to, a memory intermediated by UMC 114 (such as L2 cache 116, L3 cache 119, or a memory in exterior functions 120), initial scheduling block 202 schedules the request to be handled by an appropriate pipeline bank 206 for the particular request.
Accordingly, initial scheduling block 202 performs arbitration on read and write requests. Arbitration determines which pipeline bank 206 will receive which of the memory transactions queued at initial scheduling block 202, and in what order. In some examples, a read or write request can only be scheduled into a corresponding one of pipeline banks 206, depending on, for example, the memory address of the data being written or requested, the request load of pipeline banks 206, or a pseudo-random function. Initial scheduling block 202 schedules read and write requests received from DMC 104, streaming engine 106, PMC 108, and L3 controller 118, by selecting among the first stages of pipeline banks 206. Memory transactions requested to be performed on L3 cache 119 (or exterior functions 120) are arbitrated and scheduled into an L3 cache pipeline by an L3 cache scheduling block (not shown) in L3 controller 118, after passing through the memory pipeline 200 corresponding to L2 cache 116 (pipeline banks 206, and potentially bus snooping-related stages, which are not shown).

[0018] Request scheduling prevents conflicts between read or write requests that are to be handled by the same pipeline bank 206, and preserves memory coherence (further described below). For example, request scheduling maintains order among memory transactions that are placed into a memory transaction queue (memory access request queue) of initial scheduling block 202 by different memory controllers of processor 100, or by different bus lines of a same memory controller.

[0019] Further, a pipelined memory transaction (a read or write request) sent by DMC 104 or PMC 108 is requested because the memory transaction has already passed through a corresponding level 1 cache pipeline (in DMC 104 for L1D cache 110, and in PMC 108 for L1P cache 112), and is either targeted to a lower level cache (or exterior functions 120) or has produced a miss in the respective level 1 cache.
Accordingly, memory transactions that produce level 1 cache hits generally do not require access to pipeline banks 206 shown in FIG. 2, which control or intermediate memory access to L2 cache 116, L3 cache 119, and exterior functions 120 (see FIG. 1).

[0020] Pipeline banks 206 shown in FIG. 2 are part of UMC 114. L1D cache 110 can hold data generated by processor core 102. L2 cache 116 or L3 cache 119 can make data generated by processor core 102 available to exterior functions 120 by, for example, the data being written to L2 cache 116 or L3 cache 119, or via snoop transactions from L2 cache controller 114 or L3 cache controller 118.

[0021] Memory coherence exists when memory contents at logically the same address throughout the memory system (or at least contents deemed or indicated as valid) are the same contents expected by the one or more processors in the system based on an ordered stream of read and write requests. Writes affecting a particular data, or a particular logical memory address, are prevented from bypassing earlier-issued writes or reads affecting the same data or the same memory address. Also, certain types of transactions take priority, such as victim cache transactions (no victim cache is shown) and snoop transactions.

[0022] A victim cache is a fully associative cache associated with a particular cache memory, and may be configured so that: if there is a cache hit, no action is taken with respect to the corresponding victim cache; if there is a cache miss and a victim cache hit, the corresponding memory lines are swapped between the cache and the victim cache; and if there is a cache miss and a victim cache miss, data corresponding to the location in main memory producing the cache miss is written in a corresponding cache line, and the previous contents of the cache line are written in the victim cache.
Fully associative means that data corresponding to any location in main memory can be written into any line of the victim cache.

[0023] Bus snooping is a scheme by which a coherence controller (snooper) in a cache monitors or snoops bus transactions to maintain memory coherence in distributed shared memory systems (such as in SoC 10). If a transaction modifying a shared cache block appears on a bus, the snoopers check whether their respective caches have a copy of data corresponding to the same logical address of the shared block. If a cache has a copy of the shared block, the corresponding snooper performs an action to ensure memory coherence in the cache. This action can be, for example, flushing, invalidating, or updating the shared block, according to the transaction detected on the bus.

[0024] At the first level of arbitration performed by initial scheduling block 202, UMC 114 (the L2 cache 116 controller, which includes initial scheduling block 202) determines whether to allow a memory transaction to proceed in memory pipeline 200, and in which pipeline bank 206 to proceed. Generally, each pipeline bank 206 is independent, such that read and write transactions on each pipeline bank 206 (for example, writes of data from L1D cache 110 to L2 cache 116) do not have ordering or coherency requirements with respect to write transactions on other pipeline banks 206. Within each pipeline bank, writes to L2 cache 116 proceed in the order they are scheduled. If a memory transaction causes an addressing hazard or violates an ordering requirement, the transaction stalls and is not issued to a pipeline bank 206.

[0025] A partial write request (also referred to herein as a partial write) is a write request with a data payload that includes a chunk (or more than one chunk) in which one or more, but less than all, bytes in the chunk will be written to the destination memory address.
For example, in some systems, a write request data payload can be shorter than a destination memory’s addressable location write length, but still equal to or larger than the location’s minimum write length. Minimum write length refers to the amount of data that can be read from or written to a memory in a single clock cycle, which is generally determined by the physical width of the memory. Generally, a memory’s minimum write length will be a multiple of the chunk length. For example, a memory with a 128 byte line length may have a 64 byte minimum write length, corresponding to writing to a first physical bank of a line of the memory (bytes 0 to 63 of the line) or a second physical bank of the memory line (bytes 64 to 127 of the line). An example partial write request can be to write a data payload from bytes 0 to 110 of the line, meaning that in one of the chunks of the data payload (the chunk corresponding to bytes 96 to 127 of the line), only 15 out of 32 bytes will be written (corresponding to bytes 96 to 110 of the line). Also, in some systems, a write request data payload can be sparse (sparse is a special case of partial write). A sparse data payload is configured to write a non-continuous set of bytes within a destination memory. For example, a data payload may be targeted to write to bytes 0 through 24 and 42 through 63 (or only the even-numbered bytes, or bytes 1, 15, and 27, or some other arbitrary arrangement) of a destination memory addressable location. If a write request data payload is configured to fill complete chunks in a complete destination memory addressable location, such as bytes 0 to 63 in the example above (or the full line corresponding to bytes 0 to 127), the write request will generally not be considered a partial write.

[0026] Partial writes trigger read-modify-write (RMW) operations.
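The per-chunk classification that decides whether a payload is a partial write can be sketched in Python as follows. This is an illustration only, not the hardware logic: the 32 byte chunk and 128 byte line are the example sizes from this description, and the function name and set-based byte enables are invented for clarity.

```python
CHUNK_BYTES = 32   # ECC chunk size from the example in the text
LINE_BYTES = 128   # example cache line length

def classify_chunks(byte_enables):
    """Classify each chunk of a write payload as 'full', 'partial', or 'empty'.

    byte_enables: iterable of byte offsets (within the line) the payload writes.
    Any 'partial' chunk makes the request a partial write, triggering an RMW.
    """
    enabled = set(byte_enables)
    statuses = []
    for chunk in range(LINE_BYTES // CHUNK_BYTES):
        lo = chunk * CHUNK_BYTES
        count = sum(1 for b in range(lo, lo + CHUNK_BYTES) if b in enabled)
        statuses.append("empty" if count == 0 else
                        "full" if count == CHUNK_BYTES else "partial")
    return statuses

# The example from the text: writing bytes 0 through 110 of a 128 byte line
# leaves chunk [127:96] partial (only bytes 96 through 110 of it are written).
assert classify_chunks(range(0, 111)) == ["full", "full", "full", "partial"]
```

Under the same rule, the sparse example from the text (bytes 0 through 24 and 42 through 63) classifies both of the first two chunks as partial.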
In an RMW operation, data is read from the destination cache memory in a read portion of the operation and used to supply those values not specified by the RMW operation and that are not to be changed by the operation. In this way, the data from the read portion conforms the data payload of the write portion of the operation to be continuous and full (not a partial write) to the destination cache memory’s minimum write length. After this, an updated error correction code (ECC) syndrome is generated from, and appended to, the resulting conformed data payload to preserve data integrity of the unwritten data. The data payload is written to the destination cache memory with the updated ECC syndrome, with or without the conforming data. For example, in the example above in which the data payload includes bytes 0 through 24 and 42 through 63 (so that the chunks corresponding to bytes 0 through 31 and 32 through 63 both correspond to partial writes), bytes 25 through 41 are read to conform the data payload to the 64 byte minimum write length.

[0027] FIG. 3 is a block diagram of an example RMW memory sub-pipeline 300, as part of processor 100, for processing RMW memory transactions. RMW memory sub-pipeline 300 conditionally processes a memory read request as part of a cache memory pipeline, such as part of a selected stage (for example, a stage selected from Stage 1 through Stage P) of memory pipeline 200 for UMC 114 (the L2 cache controller), if a write request being processed by the selected stage involves an RMW operation. Accordingly, RMW memory sub-pipeline 300 processes a write request if a stage of a corresponding cache memory pipeline determines that an RMW operation is required by the write request. This is generally equivalent to determining whether the write request is a partial write.

[0028] Previous stage 316 is an ordinary-processing stage in a cache memory pipeline such as memory pipeline 200.
“Ordinary-processing” refers to a pipeline stage that has functions that are performed in processing a memory transaction regardless of whether the memory transaction is a write request that is a partial write and that will be processed using an RMW operation. Previous stage 316 is connected to read from and write to a holding buffer 306. Holding buffer 306 can be, for example, a dedicated set of registers that is part of the memory controller that includes the RMW memory sub-pipeline 300, such as UMC 114.

[0029] Pipeline stage 302 is part of the RMW memory sub-pipeline 300, and is also an ordinary-processing stage in the cache memory pipeline. Pipeline stage 302 anchors (connects) RMW memory sub-pipeline 300 to the cache memory pipeline; accordingly, RMW memory sub-pipeline 300 branches off from the cache memory pipeline at pipeline stage 302, and in some systems, returns to (terminates at) the cache memory pipeline at pipeline stage 302. The connection between previous stage 316 and pipeline stage 302 is a dotted arrow to indicate that there may be additional pipeline stages executed between previous stage 316 and pipeline stage 302. Pipeline stage 302 receives a memory read request in a cache memory pipeline (including functions performed regardless of whether an RMW operation is required), such as the fourth stage 208 (Stage 4 of a pipeline bank 206 in FIG. 2, not separately shown) in memory pipeline 200. (The fourth stage 208 can be, for example, hit and miss control.) Pipeline stage 302 is connected to read from, and write to, cache memory 304 (the cache memory to which the write request’s data payload is to be committed). Pipeline stage 302 is also connected to write to the holding buffer 306. The data payload of a write request being processed by RMW memory sub-pipeline 300 is held in the holding buffer 306 during RMW processing.

[0030] Pipeline stage 302 is followed by an error detection stage 308.
Error detection stage 308 is followed by an error correction stage 310. Error correction stage 310 is followed by a merge stage 312, which is connected to read from holding buffer 306. Merge stage 312 is followed by a syndrome generation stage 314. Syndrome generation stage 314 is followed by a return to pipeline stage 302.

[0031] Referring to FIG. 2, a separate RMW memory sub-pipeline 300 can be connected to a pipeline stage 302 in each separate pipeline bank 206 of a memory pipeline 200.

[0032] Returning to FIG. 3, when pipeline stage 302 receives a memory write request to process, pipeline stage 302 determines whether an RMW operation is required to make ECC syndromes in a data payload of the write request properly correspond to (and accordingly, properly enable error correction of) respective chunks in the data payload. This determination can be made by, for example, determining whether the memory write request is a partial write, or by checking a flag set by a previous stage 208 in the corresponding pipeline bank 206 that determined whether the memory write request is a partial write. If an RMW operation is required, pipeline stage 302 issues a read request to the address in cache memory 304, and writes (commits) the write request’s data payload to a holding buffer 306. If the read request from the pipeline stage 302 results in a cache miss, the data requested by the read request is retrieved from a lower level memory than the cache memory 304 to enable the read request to proceed. For example, if cache memory 304 is an L2 cache 116, then the requested data is retrieved from L3 cache 119 or other lower level memory.

[0033] The read request issued by pipeline stage 302 requests retrieval of each chunk in cache memory 304 corresponding to a same memory address as any portion of the write request’s data payload. (The resulting data read from cache memory 304 is referred to herein, for convenience, as conforming data.)
The conforming data’s ECC syndrome is read along with the conforming data. In an example, a write request’s data payload is configured to write bytes 0 to 38 in a line of cache memory 304, and chunks are 32 bytes long. Bytes 0 to 38 correspond to a first 32 byte chunk at bytes 0 to 31, and a second 32 byte chunk at bytes 32 to 63. An RMW operation will be indicated, and pipeline stage 302 will issue a read to cache memory 304 for bytes 0 to 63 of the corresponding line of memory, and the two corresponding ECC syndromes. Pipeline stage 302 also writes to holding buffer 306 the write request’s data payload, including data, destination memory address, byte enables (indicating which bytes following a destination memory address the data corresponds to, such as, in the example above, bytes 0 to 38), and other control information.

[0034] After pipeline stage 302, error detection stage 308 determines whether there are any errors in the conforming data in light of the conforming data’s ECC syndrome(s), and determines the type(s) and number(s) of bits of the errors in the conforming data (if any). After the error detection stage 308, the error correction stage 310 corrects the conforming data if necessary (as detected by the error detection stage 308) and possible. For example, in some systems, the conforming data can be corrected using a 10 bit ECC syndrome per 32 byte chunk if the conforming data contains a single one-bit error (or less) in each chunk. If the data cannot be corrected, an appropriate action is taken - for example, the write request may be dropped (discarded), and an exception may be taken.

[0035] After error correction stage 310, in merge stage 312, the conforming data is merged with the corresponding data in holding buffer 306 (with exceptions described below, the data payload from the corresponding write request). Accordingly, data from holding buffer 306 replaces (overwrites) corresponding bytes in the conforming data to form new, merged data.
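The merge just described amounts to a byte-enable-controlled overwrite. A minimal Python sketch follows; the function and variable names are invented for illustration, and a real merge stage operates on hardware byte lanes rather than Python loops.

```python
def merge_stage(conforming, payload, byte_enables):
    """Merge-stage sketch: each enabled byte of the payload overwrites the
    corresponding byte of the conforming data read from the cache; all other
    bytes keep the values read from the cache, so the result is a continuous,
    full minimum-write-length unit over which an ECC syndrome can be generated."""
    merged = bytearray(conforming)
    for offset, enabled in enumerate(byte_enables):
        if enabled:
            merged[offset] = payload[offset]
    return bytes(merged)

# The example from the text: a payload covering bytes 0-38 of a 64 byte unit.
conforming = bytes([0xFF] * 64)            # data read from cache memory 304
payload = bytes([0xAA] * 39) + bytes(25)   # only bytes 0-38 are meaningful
enables = [True] * 39 + [False] * 25       # byte enables for bytes 0-38
merged = merge_stage(conforming, payload, enables)
assert merged[:39] == bytes([0xAA] * 39)   # payload bytes written
assert merged[39:] == bytes([0xFF] * 25)   # bytes 39 through 63 unchanged
```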
In the example above in which the data payload corresponds to bytes 0 to 38 of a cache memory line, and the conforming data corresponds to bytes 0 to 63 of the cache memory line, the data payload replaces bytes 0 to 38 of the conforming data to form the merged data, thereby also leaving bytes 39 through 63 unchanged.

[0036] After merge stage 312, syndrome generation stage 314 uses the merged data to generate one or more new ECC syndromes (as required) corresponding to the merged data. In the example above, the data payload corresponds to bytes 0 to 38 of a cache memory line, and chunks are 32 bytes in length. Bytes 0 to 31 of the merged data do not require an ECC syndrome to be generated using an RMW operation, because the corresponding data payload portion was a full chunk before merging (an ECC syndrome corresponding to the full chunk - bytes 0 to 31 - could have been previously generated). A new ECC syndrome is calculated for bytes 32 to 63 of the merged data because the corresponding data payload portion (bytes 32 to 38) overwrote only a portion of those bytes; accordingly, the written data was not a full chunk before merging. The resulting ECC syndrome, which is up-to-date with respect to the write request’s data payload, is referred to as being in synchronization with the merged data. In some systems, ECC syndromes for chunks that the processor core 102 produces as full and continuous chunks can be generated at any time before the data payload is written to memory, such as before the write request is transmitted from DMC 104 (the L1 cache controller) to UMC 114 (the L2 cache controller).

[0037] After syndrome generation stage 314, the write request is returned to pipeline stage 302, and the write request’s data payload, along with the new ECC syndrome, is written to cache memory 304.
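The selective regeneration performed by the syndrome generation stage can be sketched as follows. This is illustrative only: the function names are invented, and a simple XOR of bytes stands in for the real per-chunk ECC syndrome (the text mentions a 10 bit syndrome per 32 byte chunk); the point is only that a syndrome is a pure function of a chunk's final merged contents, and that fully written chunks can reuse syndromes generated earlier.

```python
CHUNK = 32  # bytes per ECC chunk, per the example in the text

def toy_syndrome(chunk):
    # Stand-in for a real ECC syndrome generator: XOR of all bytes in the chunk.
    s = 0
    for b in chunk:
        s ^= b
    return s

def regenerate_syndromes(merged, payload_full_chunks, old_syndromes):
    """Syndrome-generation-stage sketch: recompute a syndrome only for chunks
    the payload partially overwrote; a chunk the payload filled completely can
    keep a syndrome generated earlier (e.g., before reaching the L2 controller).

    payload_full_chunks[i] is True if the payload wrote all bytes of chunk i.
    """
    new = list(old_syndromes)
    for i in range(len(merged) // CHUNK):
        if not payload_full_chunks[i]:
            new[i] = toy_syndrome(merged[i * CHUNK:(i + 1) * CHUNK])
    return new
```

In the bytes 0 to 38 example, `payload_full_chunks` would be `[True, False]`: the syndrome for bytes 0 to 31 is kept, and only the syndrome for bytes 32 to 63 is recomputed from the merged data.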
If the read request performed by the pipeline stage 302 resulted in a cache hit, then the data that is written can be only the write request’s data payload (and the ECC syndromes corresponding to the chunks included in the data payload), or it can include the merged portion of the conforming data. The conforming data is required to generate a new ECC syndrome corresponding to the data payload, but can be optional when writing to the cache memory: the conforming data, having been read from the cache memory, should already be present in the cache memory if the read request performed by the pipeline stage 302 resulted in a cache hit. However, if the read request performed by the pipeline stage 302 resulted in a cache miss, then the data that is written includes the merged portion of the conforming data. An entry in holding buffer 306 corresponding to an RMW operation expires when the RMW operation completes (ends), after the data payload of the write request is written into the corresponding target cache memory. In some systems in which a cache write is completed a clock cycle after the syndrome generation stage is completed, holding buffer entry expiration can occur after generation of the new ECC syndrome by the syndrome generation stage 314.

[0038] In some systems, holding buffer 306 can have additional functionality to facilitate pipeline throughput and avoid stalls. The depth of holding buffer 306 is dependent on the total depth of RMW memory sub-pipeline 300. For this purpose, pipeline stage 302 reading from the cache memory 304 is considered the beginning of RMW memory sub-pipeline 300, and syndrome generation stage 314 completing generation of the new ECC syndrome is considered the end of RMW memory sub-pipeline 300. Holding buffer 306 contains information on all RMW operations that have begun and have not ended (or been terminated by an error, such as at error correction stage 310).

[0039] Previous stage 316 checks whether a write request requires an RMW operation.
If so, previous stage 316 also checks in holding buffer 306 to find any pending RMW operation to the same address in cache memory 304 as the write request (a same-targeted write request). If there is such a pending RMW operation, then the current holding buffer 306 contents targeting that address, with corresponding byte enables, are combined with the most recent data targeting that address (generally, the contents of the data payload of the write request at previous stage 316). Accordingly, non-overlapping bytes of both the newer data and the older data are retained; if there is any overlap, the most recent data supersedes the specific overlapped bytes of the current holding buffer 306 contents; and the resulting combined data is written into an entry in the holding buffer 306 corresponding to the newer write request. (In some systems, this can be performed by writing the older data into the entry in the holding buffer 306 corresponding to the newer write request, and then performing an RMW operation on the newer write request, so that the desired overwriting and resulting combined data is a consequence of the order of operations.) The pending RMW operation on the older data payload continues unaffected (corresponding holding buffer 306 contents are left unchanged), while the newer write request enters the RMW memory sub-pipeline 300 for RMW processing of the combined data. (This is distinct from the case of a full line write - a write request that will write to a full line of the cache memory, and accordingly is not a partial write and does not require an RMW operation - as described below.) If the older write request has not yet finished processing and had its data payload written into the cache memory 304, then the address in the cache memory 304 targeted by the newer write request and the older write request contains data that is stale with respect to the newer write request.
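This combining rule (non-overlapping bytes of both requests retained; newest bytes win on overlap) can be sketched in Python. The sketch is illustrative only: byte-offset-to-value maps stand in for holding-buffer entries with byte enables, and the function name is invented.

```python
def combine_entries(older, newer):
    """Holding-buffer combine sketch: 'older' and 'newer' map byte offsets to
    values for write requests targeting the same cache address. Bytes written
    by only one request are retained; where both wrote, the newer request's
    bytes supersede the older entry's bytes."""
    combined = dict(older)   # start from the pending (older) entry's bytes
    combined.update(newer)   # newer data overwrites any overlapped offsets
    return combined

# Older pending RMW wrote bytes 0-2; newer request writes bytes 2-4.
older = {0: 0x11, 1: 0x22, 2: 0x33}
newer = {2: 0x99, 3: 0x44, 4: 0x55}
assert combine_entries(older, newer) == {0: 0x11, 1: 0x22, 2: 0x99, 3: 0x44, 4: 0x55}
```

As the text notes, the combined bytes populate the entry for the newer request, while the older entry (and its in-flight RMW) is left unchanged.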
Stale data is data scheduled to be updated by a write request preceding the newer write request in a memory pipeline, such as the memory pipeline 200. Accordingly, the data-combining process described with respect to previous stage 316 prevents merge stage 312 from merging the newer write request with stale data. This additional holding buffer 306 functionality can be used, for example, in systems in which a write request can immediately follow another write request within a pipeline bank 206 (referring to FIG. 2), such as systems in which write requests can be issued in each cycle (for example, in systems with “write streaming” behavior).

[0040] Subject to an intervening read as described below, if previous stage 316 determines that the data payload of a write request corresponds to a full line write, and there is a pending RMW operation targeting the same address, the pending RMW operation is invalidated and the write request at the previous stage 316 proceeds (accordingly, is not stalled). (In some embodiments, this determination could be performed at pipeline stage 302, or at a pipeline stage between previous stage 316 and pipeline stage 302.) Also, if a read request that originated outside the RMW memory sub-pipeline 300 is received at a stage of an ordinary-processing cache memory pipeline (such as memory pipeline 200) at an intervening time between previous stage 316 and pipeline stage 302, then pending RMW operations are allowed to complete without allowing write requests at previous stage 316 to overwrite holding buffer 306 contents corresponding to the pending RMW operations.

[0041] FIG. 4A shows a table 400 providing an example list of different combinations of chunk contents in a data payload of a write request that targets a write at an address in a cache memory.
The body 402 of table 400 is described by a title 404, “bytes in data payload to be written in memory line at target address.” Body 402 is divided into four titled columns, corresponding to chunks - in this example, 32 byte ranges 406 - of data in a 128 byte maximum data payload of a write request. The four columns have the following titles: [31:0] (byte 0 to byte 31), [63:32] (byte 32 to byte 63), [95:64] (byte 64 to byte 95), and [127:96] (byte 96 to byte 127). The rows of body 402 are indexed in a column 412 with a title 414, “case.” Individual cells 416 in body 402 can correspond to a byte range for one of two scenarios: either a data payload that contains data that will write all bytes (not a partial write) in a corresponding chunk at the target address - accordingly, labeled “all bytes”; or a data payload that contains data that will write less than all bytes in the corresponding chunk, resulting in a partial write - accordingly, labeled “partial.”

[0042] FIG. 4B shows a table 418 providing an example list of operations to use (e.g., RMW, full write, etc.) for corresponding cases of FIG. 4A. In a memory system corresponding to FIGS. 4A and 4B, a cache memory is comprised of cache memory lines. In the example, each cache memory line is 128 bytes in length. Each cache memory line has two physical banks (physical bank 0 and physical bank 1), each of length 64 bytes, and each physical bank has two virtual banks (virtual bank 0 and virtual bank 1), each of length 32 bytes. (Byte lengths of the cache line, physical banks, and virtual banks do not include additional related memory - not shown - storing corresponding ECC syndromes and other control information.) Each virtual bank 420, such as virtual bank 0 in physical bank 1, heads a column of a body 422 of table 418 of FIG. 4B.
In this example cache memory, each physical bank (64 bytes each) can respectively be written in a single cycle 424 (cycle 1 or cycle 2) of a system clock (such as a clock 103 of a processor 100) by writing both of the physical bank’s virtual banks 420 (32 bytes each). (Physical banks 0 and 1 cannot be written at the same time.) This means that 64 bytes in the 128 byte cache line can be written in a clock cycle, and the example cache memory has a 64 byte minimum write length. (In some systems, the example cache memory may also be able to write a full cache line in a single cycle.)

[0043] The cells 426 in body 422 are indexed by a column 428 titled 430 “case.” Entries in cells 426 are either “RMW,” meaning that a corresponding byte range for a corresponding case number (indexed in column 412 in FIG. 4A and column 428 in FIG. 4B) utilizes an RMW operation, or “write,” meaning that a corresponding byte range for a corresponding case number can be written to the cache memory without performing an RMW operation.

[0044] Physical bank 0, virtual bank 0 corresponds to (is written by) byte range 406 [31:0] of the write request data payload shown in table 400 of FIG. 4A. Physical bank 0, virtual bank 1 corresponds to (is written by) byte range 406 [63:32] of the write request data payload shown in table 400. Physical bank 1, virtual bank 0 corresponds to (is written by) byte range 406 [95:64] of the write request data payload shown in table 400. Physical bank 1, virtual bank 1 corresponds to (is written by) byte range 406 [127:96] of the write request data payload shown in table 400. The example 128 byte cache memory line is four 32 byte chunks in length.

[0045] Accordingly, bytes in a data payload of a write request in a byte range [63:0] are written together, and bytes in the data payload of the write request in a byte range [127:64] are written together (writes are aligned at physical bank boundaries).
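The per-bank decision encoded in tables 4A and 4B can be sketched as follows. This is an illustration under the example geometry only (two 32 byte chunks per 64 byte physical bank); chunk statuses use the "all bytes"/"partial" labels of table 400, and the function name is invented.

```python
def bank_operations(chunk_status):
    """Tables 4A/4B sketch: a 64 byte physical bank (two 32 byte chunks) is
    read or written as a unit, so if either chunk written to it is a partial
    write, both of its virtual banks use an RMW operation; if both chunks
    are written in full ('all bytes'), a plain write suffices."""
    ops = []
    for bank in range(len(chunk_status) // 2):
        pair = chunk_status[2 * bank:2 * bank + 2]
        ops.append(("RMW", "RMW") if "partial" in pair else ("write", "write"))
    return ops

# One of cases 5-8 of table 400: only chunk [127:96] is partial, yet all of
# physical bank 1 still requires RMW because of the 64 byte minimum write length.
assert bank_operations(["all bytes", "all bytes", "all bytes", "partial"]) == \
    [("write", "write"), ("RMW", "RMW")]
```

Statuses are ordered [31:0], [63:32], [95:64], [127:96], matching the column order of table 400; the two tuples returned correspond to physical banks 0 and 1.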
This also means that the byte range [63:0] is written separately (and in a different clock cycle) from the byte range [127:64].

[0046] As shown in FIGS. 4A and 4B, for 64 byte writes aligned at physical bank boundaries (and that do not overlap with each other), when a chunk (a 32 byte range 406) in a particular case corresponds to a partial write, a write to the corresponding physical bank utilizes an RMW operation (performed in an RMW memory pipeline, such as RMW memory pipeline 300 of FIG. 3) on both chunks written to that physical bank. This is because an entire physical bank - 64 bytes, corresponding to two chunks - is read or written together. For example, in cases 1, 2, 3, and 4 in table 400 of FIG. 4A, chunks corresponding to byte ranges 406 [127:96] and [95:64] correspond to partial writes (the corresponding entry is “partial”), meaning that an RMW operation is required. Accordingly, the entries for virtual banks 0 and 1 of physical bank 1 for cases 1, 2, 3, and 4 in table 418 of FIG. 4B are “RMW.” In another example, in cases 5, 6, 7, 8, 9, 10, 11, and 12, only one of the two chunks in byte ranges 406 [127:96] and [95:64] in table 400 corresponds to a partial write. (Byte range 406 [127:96] corresponds to a partial write in cases 5, 6, 7, and 8, and byte range 406 [95:64] corresponds to a partial write in cases 9, 10, 11, and 12.) However, because of the 64 byte minimum write length, as shown in table 418, both virtual banks 0 and 1 of physical bank 1 (respectively corresponding to byte ranges [95:64] and [127:96]) require an RMW operation. In another example, cases 13, 14, 15, and 16 show that “all bytes” in byte ranges 406 [127:96] and [95:64] will be written to the cache memory line by the data payload of the write request.
Accordingly, table 418 shows that in cases 13, 14, 15, and 16, virtual banks 0 and 1 of physical bank 1 will be written to the cache memory line without performing an RMW operation (the corresponding table entries are “write”).

[0047] A minimum write length shorter than a cache memory line length can save multiple clock cycles, and corresponding power expenditure, in completing processing of a write request in some cases. For example, in some systems (such as some systems using an RMW memory pipeline 300 of FIG. 3), completing an RMW operation through committing the corresponding data payload to memory from the RMW / no RMW decision point (for example, pipeline stage 302) may take six cycles, while completing a write operation (without RMW) through committing the corresponding data payload to memory from the RMW / no RMW decision point takes one cycle. Further, an RMW operation requires two memory transactions (a read and a write), while a write operation without RMW requires one memory transaction (a write). Also, there can be hazards that prevent full pipelining while an RMW operation is in progress. Savings realized by a minimum write length that is shorter (such as an integer factor N shorter) than a memory cache line length are illustrated by cases 4, 8, 12, 13, 14, and 15 of FIG. 4B, in which an RMW operation is not required for chunks written to one of the physical banks, despite an RMW operation being required for chunks written to the other physical bank.

[0048] FIG. 4C shows a table 432 providing an alternative example list of whether an RMW is utilized for corresponding cases of FIG. 4A. Portions of table 432 of FIG. 4C with the same content types as corresponding portions of table 418 of FIG. 4B have the same identifying numbers. An example cache memory corresponding to FIG. 4C is similar to the example cache memory of FIGS. 4A and 4B, except that the example cache memory corresponding to FIG.
4C has a minimum cache memory write length of 32 bytes, the same length as the chunk length (and the virtual bank length). Writes according to table 432 do not overlap with each other (are non-overlapping) and are aligned with cache line boundaries. Accordingly, in table 432, only chunks corresponding to partial writes require RMW operations. For example, returning to table 400 of FIG. 4A, in case 6, a chunk to be written to physical bank 1, virtual bank 1 (byte range 406 [127:96]) is a partial write; a chunk to be written to physical bank 1, virtual bank 0 (byte range 406 [95:64]) is not a partial write (“all bytes”); a chunk to be written to physical bank 0, virtual bank 1 (byte range 406 [63:32]) is a partial write; and a chunk to be written to physical bank 0, virtual bank 0 (byte range 406 [31:0]) is not a partial write. Applying the case 6 example to the 64 byte (double the chunk length) minimum write length of FIG. 4B, table 418 (case 6) shows that all chunks of the data payload require RMW operations. However, in the example of table 432 of FIG. 4C (case 6), with a 32 byte minimum write length (equal to the chunk length), only the partial write chunks - the chunks to be written to physical bank 1, virtual bank 1 and physical bank 0, virtual bank 1 - require RMW operations.

[0049] Consider a cache memory with cache memory lines of length L bytes, a minimum write length of M bytes (aligned with memory bank boundaries) such that L / M > 1 is an integer (the number of writes to fully write a cache memory line), and a chunk length of P bytes such that there is an integer number L / P > 1 of chunks in a cache memory line. Generally, under these conditions, a chunk in a data payload corresponding to a partial write will require an RMW operation to be performed on an integer M / P ≥ 1 chunks in the data payload.
If L / M ≥ 2, then a chunk in a data payload corresponding to a partial write may not require all chunks in the data payload written to the cache memory line to receive RMW operations. If M equals P, then chunks in the data payload that do not correspond to partial writes may not require RMW operations. (In some systems, only chunks in the data payload that correspond to partial writes will require RMW operations.)

[0050] Except where otherwise explicitly indicated (such as the number of bits in an ECC syndrome), memory lengths provided herein refer specifically to data, and not to control data.

[0051] In some embodiments, the streaming engine only receives and passes on read requests from the processor core, and returns read data to the processor core, rather than passing on and returning responses for both read and write requests.

[0052] In some embodiments, the processor can include multiple processor cores (embodiments with multiple processor cores are not shown), with similar and similarly-functioning couplings to DMC, the streaming engine, and PMC to those shown in and described with respect to FIG. 1. In some embodiments, the processor includes different functional blocks.

[0053] In some embodiments, bus lines enabling parallel read or write requests can correspond to different types of read or write requests, such as directed at different blocks of memory or made for different purposes.

[0054] In some embodiments, the streaming engine enables the processor core to communicate directly with higher level cache (such as L2 cache), skipping lower level cache (such as L1 cache), to avoid data synchronization issues. This can be used to help maintain memory coherence.
In some such embodiments, the streaming engine can be configured to transmit only read requests, rather than both read and write requests.

[0055] In some embodiments, different memory access pipeline banks can have different numbers of stages.

[0056] In some embodiments, syndrome generation stage 314 is followed by a return to a stage in the cache memory pipeline before pipeline stage 302, after arbitration and scheduling and cache hit detection.

[0057] In some embodiments, pipeline stage 302 can perform the functions of previous stage 316. In some embodiments, RMW memory sub-pipeline 300 can be anchored to the cache memory pipeline by a stage performing the functions of previous stage 316. In some embodiments, previous stage 316 can be considered part of RMW memory sub-pipeline 300.

[0058] Modifications are possible in the described embodiments, and other embodiments are possible within the scope of the claims.
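The per-bank RMW decision of tables 418 and 432, and the general L, M, P condition of paragraph [0049], can be sketched as follows. This is an illustrative model only; the function name and structure are assumptions, not taken from the disclosure:

```python
def rmw_plan(partial_chunks, chunk_len=32, min_write_len=64, line_len=128):
    """Given which chunks of a write payload are partial writes, return,
    per chunk, whether committing that chunk requires an RMW operation.

    partial_chunks: list of bools, one per chunk_len-byte chunk
                    (True = partial write, False = "all bytes" written).
    """
    assert len(partial_chunks) * chunk_len == line_len
    chunks_per_write = min_write_len // chunk_len  # M / P chunks per write unit
    plan = []
    for i in range(len(partial_chunks)):
        # A chunk shares its minimum-write unit with neighboring chunks; if
        # any chunk in the unit is partial, the whole unit needs RMW.
        unit = i // chunks_per_write
        group = partial_chunks[unit * chunks_per_write:(unit + 1) * chunks_per_write]
        plan.append(any(group))
    return plan

# Case 6 of FIG. 4A: [31:0] all bytes, [63:32] partial, [95:64] all bytes,
# [127:96] partial.
case6 = [False, True, False, True]
# With the 64 byte minimum write length of FIG. 4B, every chunk needs RMW:
print(rmw_plan(case6, min_write_len=64))  # [True, True, True, True]
# With the 32 byte minimum write length of FIG. 4C, only partial chunks do:
print(rmw_plan(case6, min_write_len=32))  # [False, True, False, True]
```

With `min_write_len=64` (M / P = 2), one partial chunk drags its bank partner into the RMW path, matching table 418; with `min_write_len=32` (M = P), only partial chunks require RMW, matching table 432.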
An integrated circuit ("IC") package (10) comprises a first semiconductor die (12) and a second semiconductor die (14). The second semiconductor die (14) is coupled to the first semiconductor die (12) within the same IC package. The first semiconductor die (12) includes an interface (13) to memory (22) and the first and second semiconductor dies share said memory. The memory (22) may be located outside or inside the IC package (10) containing the first and second semiconductor dies. In another embodiment, a system comprises a first IC package containing a memory die and a second IC package coupled to the first IC package. The second IC package contains a die stack comprising first and second dies coupled together. The first die includes an interface to the memory die and both of the dies in the die stack share access to the memory die. The system may comprise a communication system such as a cellular telephone.
1. An integrated circuit ("IC") package, comprising:
a first semiconductor die; and
a second semiconductor die coupled to the first semiconductor die within the same IC package;
wherein the first semiconductor die includes an interface to memory and the first and second semiconductor dies share said memory.
2. The IC package of claim 1 wherein the memory is located outside the IC package containing the first and second semiconductor dies.
3. The IC package of claim 1 wherein the memory is located inside the IC package containing the first and second semiconductor dies.
4. The IC package of any of claims 1 to 3, wherein the interface implements double data rate cycles to be run to said memory.
5. The IC package of any of claims 1 to 4, wherein the first semiconductor die is fabricated according to a different manufacturing process than the second semiconductor die.
6. The IC package of any of claims 1 to 4, wherein the first semiconductor die is fabricated according to the same manufacturing process as the second semiconductor die.
7. The IC package of any of claims 1 to 6, wherein the first semiconductor die comprises an application engine and the second semiconductor die comprises a modem.
8. A system, comprising:
a first integrated circuit ("IC") package containing a memory die; and
a second IC package coupled to the first IC package, wherein the second IC package contains a die stack comprising first and second dies coupled together;
wherein the first die includes an interface to the memory die and both of said dies in the die stack share access to said memory die.
9. The system of claim 8 wherein the first die comprises an application engine die and the second die comprises a modem.
10. The system of claim 9 wherein the system comprises a cellular telephone.
11. The system of any of claims 8 to 10, wherein the first die is fabricated according to a different manufacturing process than the second die.
FIELD OF THE INVENTION

The present subject matter relates generally to integrated circuits ("ICs"). More particularly, the present subject matter relates to an IC package comprising at least two stacked dies having shared access to a memory.

BACKGROUND OF THE INVENTION

In many electronic systems, it is desirable for multiple devices (e.g., processors) to have access to memory for code and data storage. Determining an optimal configuration and packaging for multiple processor semiconductor dies and memory accessible to the dies is often difficult, costly, and may consume valuable circuit board space.

SUMMARY OF INVENTION

In accordance with at least one embodiment of the invention, an integrated circuit ("IC") package comprises a first semiconductor die and a second semiconductor die. The first and second semiconductor dies are coupled together (e.g., "stacked") within the same IC package. The first semiconductor die includes an interface to a memory die and the first and second semiconductor dies share said memory formed on said memory die. The memory die can either be located outside or inside the IC package containing the first and second semiconductor dies.

In another embodiment, a system comprises a first IC package containing a memory die and a second IC package coupled to the first IC package. The second IC package contains a die stack comprising first and second dies coupled together. The first semiconductor die includes an interface to the memory die and both of the dies in the die stack share access to the memory die. In some embodiments, the system may comprise a communication system such as a cellular telephone or Personal Digital Assistant ("PDA").

NOTATION AND NOMENCLATURE

Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, various companies may refer to a component by different names.
This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to." Also, the term "couple" or "couples" is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more detailed description of the preferred embodiments of the present invention, reference will now be made to the accompanying drawings, wherein:

Figure 1 shows a system in accordance with a preferred embodiment of the invention that comprises a die stack having shared access to memory.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims, unless otherwise specified. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure is limited to that embodiment.

Figure 1 shows two IC packages 10 and 20. The IC package 10 preferably contains a plurality of semiconductor dies. In the exemplary embodiment of Figure 1, IC package 10 comprises two dies 12 and 14 coupled together as a "die stack" as will be explained below. Die 12 comprises an application engine 12 and die 14 comprises a modem engine 14. More than two dies may be included in IC package 10 if desired.
The application engine 12 preferably comprises a central processing unit ("CPU") core and memory. One or more applications may be executed by the application engine 12. Examples of applications executed by the application engine comprise basic user services such as email, personal assistant and video conferencing. The modem engine 14 functions to receive and transmit information via an antenna 18 coupled to the modem. The modem engine 14 performs various modulation and demodulation functions. The IC packages 10 and 20 may form at least a part of a communication system such as a wireless device, for example, a cellular telephone. Application engine 12 and modem 14 are interconnected via an interconnect link 16.

The IC package 20 comprises preferably one semiconductor die 22, although additional dies can be included as well. The die 22 preferably comprises a memory 22 that is accessible via a memory interface 13 on the application engine 12. Both dies 12 and 14 can share access to the memory 22 via the application engine's interface 13. The memory 22 may comprise any suitable type of memory. Examples of memory comprise memory capable of single data rate or double data rate cycles, non-volatile memory (NOR, NAND Flash memory) or volatile memory such as dynamic random access memory ("DRAM") or static RAM ("SRAM"). The application engine's interface 13 is thus configured to be compatible with the type of memory implemented in IC package 20.

The dies 12 and 14 may be fabricated per the same manufacturing process or different processes. For example, die 12 may be fabricated according to a high performance complementary metal oxide semiconductor ("CMOS") process such as Texas Instruments Incorporated's 90nm CMOS technology, while die 14 may be fabricated according to a lower performance process such as Texas Instruments Incorporated's 130nm CMOS technology.
The high performance CMOS process permits the application engine 12 to function at relatively high speed, albeit at the potential expense of higher leakage current than would otherwise be the case. The lower performance process used for the modem 14 may achieve lower leakage current than for the application engine 12, but modem 14 may function at a lower performance level. In general, the application engine 12 is designed for higher performance which is desirable for its functionality, whereas the modem 14 need not operate at such high performance and thus can be designed for lower leakage current to save battery (not specifically shown) life.

As noted above, the dies 12 and 14 in the IC package 10 may be coupled together to form a die stack. Any commonly known or later developed manufacturing technique for fabricating the die stack is acceptable. Exemplary die stacking techniques are provided in the following U.S. Patent Nos.: 6,621,155; 6,674,161; and 6,682,955.

While the preferred embodiments of the present invention have been shown and described, modifications thereof can be made by one skilled in the art without departing from the scope of the claimed invention. For example, although the memory 22 is shown in Figure 1 in a separate IC package, memory 22 may be included in the same IC package as the stacked dies. The embodiments described herein are exemplary only, and are not intended to be limiting. Accordingly, the scope of protection is not limited by the description set out above.
This disclosure describes techniques for context switching. In one example, a graphics processing unit may be configured to generate one or more signatures for context information stored in on-chip memory of the graphics processing unit, determine whether the one or more signatures match any previously generated signatures for context information stored in one or more memories accessible by the graphics processing unit, store, to at least one of the one or more memories, any signature of the one or more signatures that is determined not to match any previously generated signature stored in at least one of the one or more memories, and store, to at least one of the one or more memories, the context information respectively corresponding to the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories.
1. A method of context switching by a graphics processing unit, the method comprising:
generating one or more signatures for current context information stored in on-chip memory of the graphics processing unit;
determining whether the one or more signatures match any previously generated signature for previous context information stored in one or more memories accessible by the graphics processing unit;
storing, to at least one of the one or more memories, any signature of the one or more signatures that is determined not to match any previously generated signature stored in at least one of the one or more memories; and
storing, to at least one of the one or more memories, the current context information respectively corresponding to the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories.
2. The method of claim 1, wherein the one or more memories accessible by the graphics processing unit comprise at least one of: the on-chip memory of the graphics processing unit and a memory external to the graphics processing unit.
3. The method of claim 1, wherein the one or more memories accessible by the graphics processing unit include only the memory external to the graphics processing unit, wherein the memory external to the graphics processing unit accessible by the graphics processing unit is a system memory, or wherein the one or more memories accessible by the graphics processing unit do not include the on-chip memory of the graphics processing unit.
4. The method of claim 1, wherein the current context information corresponds to a preempted process, and wherein the previous context information corresponds to one or more previously preempted processes.
5. The method of claim 1, further comprising not storing any signature of the one or more signatures that is determined to match any previously generated signature stored in at least one of the one or more memories.
6. The method of claim 5, further comprising not storing the current context information respectively corresponding to the one or more signatures determined to match any previously generated signature stored in at least one of the one or more memories.
7. The method of claim 1, further comprising not restoring, from the memory external to the on-chip memory, previous context information respectively corresponding to any signature of the one or more signatures that is determined to match any previously generated signature stored in at least one of the one or more memories.
8. The method of claim 1, wherein generating one or more signatures for current context information comprises applying one or more signature algorithms to one or more of: the current context information, one or more groups of the current context information, and one or more types of the current context information.
9. The method of claim 1, wherein determining whether the one or more signatures match any previously generated signature comprises determining that each of the one or more signatures matches any of the previously generated signatures, or that each of the one or more signatures does not match any of the previously generated signatures.
10. The method of claim 1, wherein determining whether the one or more signatures match any previously generated signature comprises determining that at least one of the one or more signatures matches any of the previously generated signatures, and that at least one of the one or more signatures does not match any of the previously generated signatures.
11. A device comprising:
a graphics processing unit configured to perform context switching, wherein the graphics processing unit has on-chip memory; and
a memory external to the graphics processing unit, wherein the graphics processing unit is configured to:
generate one or more signatures for current context information stored in the on-chip memory of the graphics processing unit;
determine whether the one or more signatures match any previously generated signature for previous context information stored in one or more memories accessible by the graphics processing unit;
store, to at least one of the one or more memories, any signature of the one or more signatures that is determined not to match any previously generated signature stored in at least one of the one or more memories; and
store, to at least one of the one or more memories, the current context information respectively corresponding to the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories.
12. The device of claim 11, wherein the one or more memories accessible by the graphics processing unit comprise at least one of: the on-chip memory of the graphics processing unit and the memory external to the graphics processing unit.
13. The device of claim 11, wherein the one or more memories accessible by the graphics processing unit include only the memory external to the graphics processing unit, wherein the memory external to the graphics processing unit accessible by the graphics processing unit is a system memory, or wherein the one or more memories accessible by the graphics processing unit do not include the on-chip memory of the graphics processing unit.
14. The device of claim 11, wherein the current context information corresponds to a preempted process, and wherein the previous context information corresponds to one or more previously preempted processes.
15. The device of claim 11, wherein the graphics processing unit is configured not to store any signature of the one or more signatures that is determined to match any previously generated signature stored in at least one of the one or more memories.
16. The device of claim 15, wherein the graphics processing unit is configured not to store the current context information respectively corresponding to the one or more signatures determined to match any previously generated signature stored in at least one of the one or more memories.
17. The device of claim 11, wherein the graphics processing unit is configured not to restore, from the memory external to the on-chip memory, previous context information respectively corresponding to any signature of the one or more signatures that is determined to match any previously generated signature stored in at least one of the one or more memories.
18. The device of claim 11, wherein the graphics processing unit is configured to generate the one or more signatures for current context information by being configured to apply one or more signature algorithms to one or more of: the current context information, one or more groups of the current context information, and one or more types of the current context information.
19. The device of claim 11, wherein the graphics processing unit is configured to determine whether the one or more signatures match any previously generated signature by being configured to determine that each of the one or more signatures matches any of the previously generated signatures, or that each of the one or more signatures does not match any of the previously generated signatures.
20. The device of claim 11, wherein the graphics processing unit is configured to determine whether the one or more signatures match any previously generated signature by being configured to determine that at least one of the one or more signatures matches any of the previously generated signatures, and that at least one of the one or more signatures does not match any of the previously generated signatures.
21. A device comprising:
means for generating one or more signatures for current context information stored in on-chip memory of a graphics processing unit;
means for determining whether the one or more signatures match any previously generated signature for previous context information stored in one or more memories accessible by the graphics processing unit;
means for storing, to at least one of the one or more memories, any signature of the one or more signatures that is determined not to match any previously generated signature stored in at least one of the one or more memories; and
means for storing, to at least one of the one or more memories, the current context information respectively corresponding to the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories.
22. The device of claim 21, wherein the one or more memories accessible by the graphics processing unit comprise at least one of: the on-chip memory of the graphics processing unit and a memory external to the graphics processing unit.
23. The device of claim 21, wherein the one or more memories accessible by the graphics processing unit include only the memory external to the graphics processing unit, wherein the memory external to the graphics processing unit accessible by the graphics processing unit is a system memory, or wherein the one or more memories accessible by the graphics processing unit do not include the on-chip memory of the graphics processing unit.
24. The device of claim 21, further comprising means for not storing any signature of the one or more signatures that is determined to match any previously generated signature stored in at least one of the one or more memories.
25. The device of claim 24, further comprising means for not storing the current context information respectively corresponding to the one or more signatures determined to match any previously generated signature stored in at least one of the one or more memories.
26. The device of claim 21, further comprising means for not restoring, from the memory external to the on-chip memory, previous context information respectively corresponding to any signature of the one or more signatures that is determined to match any previously generated signature stored in at least one of the one or more memories.
27. The device of claim 21, wherein the means for generating one or more signatures for current context information comprises means for applying one or more signature algorithms to one or more of: the current context information, one or more groups of the current context information, and one or more types of the current context information.
28. The device of claim 21, wherein the means for determining whether the one or more signatures match any previously generated signature comprises means for determining that each of the one or more signatures matches any of the previously generated signatures, or that each of the one or more signatures does not match any of the previously generated signatures.
29. The device of claim 21, wherein the means for determining whether the one or more signatures match any previously generated signature comprises means for determining that at least one of the one or more signatures matches any of the previously generated signatures, and that at least one of the one or more signatures does not match any of the previously generated signatures.
30. A non-transitory computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors of a computing device to:
generate one or more signatures for current context information stored in on-chip memory of a graphics processing unit;
determine whether the one or more signatures match any previously generated signature for previous context information stored in one or more memories accessible by the graphics processing unit;
store, to at least one of the one or more memories, any signature of the one or more signatures that is determined not to match any previously generated signature stored in at least one of the one or more memories; and
store, to at least one of the one or more memories, the current context information respectively corresponding to the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories.
Efficient storage and recovery of context information for context switching

Technical Field

This disclosure relates to techniques for context switching and, more particularly, to techniques for efficient context switching.

Background

Time division of processing unit resources (e.g., on-chip memory) allows various processes to operate and make progress on the processing unit. This can be achieved by pausing and swapping out one process (e.g., a first process) and allowing another process (e.g., a second process) to execute. This is called context switching because the processing unit switches from performing the first process to performing the second process.

Summary of the Invention

In general, the present disclosure describes techniques for context switching and, more specifically, techniques for efficient context switching. In an example of the present disclosure, a processing unit such as a CPU or GPU can be configured to reduce the amount of context information saved and/or loaded (i.e., restored) during a context switch. For example, the processing unit can be configured to perform a context switch by generating one or more signatures corresponding to the context information of the process being swapped out and/or swapped in. The processing unit can be configured to use the one or more signatures corresponding to the context information to determine whether the corresponding context information (or a subset thereof) should be saved and/or restored during a context switch.

In one example, the present disclosure describes a method of context switching by a processing unit, the method including generating one or more signatures of current context information stored in an on-chip memory of the processing unit. The method can include determining whether the one or more signatures match any previously generated signature of previous context information stored in one or more memories accessible by the processing unit.
The method can include storing, to at least one of the one or more memories, any of the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories. The method can include storing, to at least one of the one or more memories, the current context information corresponding to the one or more signatures respectively determined not to match any previously generated signature stored in at least one of the one or more memories.

In another example, the present disclosure describes an apparatus including a processing unit configured to perform a context switch. The processing unit can have an on-chip memory. The apparatus can further include a memory external to the processing unit. The processing unit can be configured to generate one or more signatures of current context information stored in the on-chip memory of the processing unit. The processing unit can be configured to determine whether the one or more signatures match any previously generated signature of previous context information stored in one or more memories accessible by the processing unit. The processing unit can be configured to store, to at least one of the one or more memories, any of the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories. The processing unit can be configured to store, to at least one of the one or more memories, the current context information corresponding to the one or more signatures respectively determined not to match any previously generated signature stored in at least one of the one or more memories.

In another example, the present disclosure describes an apparatus that includes means for generating one or more signatures of current context information stored in an on-chip memory of a processing unit. The apparatus can include means for determining whether the one or more signatures match any previously generated signature of previous context information stored in one or more memories accessible by the processing unit. The apparatus can include means for storing, to at least one of the one or more memories, any of the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories. The apparatus can include means for storing, to at least one of the one or more memories, the current context information corresponding to the one or more signatures respectively determined not to match any previously generated signature stored in at least one of the one or more memories.

In another example, the present disclosure describes a non-transitory computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors of a computing device to generate one or more signatures of current context information stored in an on-chip memory of a processing unit. The instructions, when executed, can cause the one or more processors of the computing device to determine whether the one or more signatures match any previously generated signature of previous context information stored in one or more memories accessible by the processing unit. The instructions, when executed, can cause the one or more processors of the computing device to store, to at least one of the one or more memories, any of the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories. The instructions, when executed, can cause the one or more processors of the computing device to store, to at least one of the one or more memories, the current context information corresponding to the one or more signatures respectively determined not to match any previously generated signature stored in at least one of the one or more memories.

The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description, the drawings, and the claims.

Brief Description of the Drawings

FIG. 1 is a block diagram illustrating an example computing device configured to use the techniques of the present disclosure.
FIG. 2 is a block diagram showing the components of FIG. 1 in more detail.
FIG. 3 is a flowchart showing an example method consistent with the techniques of this disclosure.
FIG. 4 is a flowchart showing an example method consistent with the techniques of this disclosure.
FIG. 5 is a flowchart showing an example method consistent with the techniques of this disclosure.
FIG. 6 is a flowchart showing an example method consistent with the techniques of this disclosure.
FIG. 7 is a flowchart showing an example method consistent with the techniques of this disclosure.
FIG. 8 is a flowchart showing an example method consistent with the techniques of this disclosure.
FIG. 9 is a block diagram showing example components of a computing device configured to use the techniques of the present disclosure.

Detailed Description

In general, the techniques of this disclosure are directed toward using signatures to eliminate or reduce the number of redundant saves and/or restores of context information during context switches in a computing system. For example, when a process is context switched (e.g., preempted by another process or swapped with another process), the processing unit (e.g., a CPU or GPU) may save any context information stored in the on-chip memory of the processing unit to an external memory (e.g., system memory), together with one or more signatures of the context information that has been saved (or is to be saved). The processing unit may be configured to generate the one or more signatures by applying a signature algorithm to the context information.

The processing unit may be configured to generate a single signature for each applied signature algorithm. For example, if two signatures are generated for context information, this means that two signature algorithms were applied to two sets of context information. In this example, the two signature algorithms may be the same or different, and the two sets of context information corresponding to the same process may or may not overlap. The signature algorithm may generate an MD5 hash, a cyclic redundancy check (CRC), a Bloom filter signature, or another identifier output by a hash, signature, or filter function.
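The signature functions named above are available in standard libraries, so the idea can be illustrated directly. The sketch below (illustrative only; the disclosure does not mandate any particular function) computes an MD5 hash and a CRC over the same stand-in block of context data using Python's standard library:

```python
import hashlib
import zlib

# Stand-in for a block of on-chip context information.
context = b"\x00\x01\x02\x03" * 16

# A cryptographic hash (128-bit) and a lightweight checksum (32-bit)
# over the same data; either can serve as a signature.
md5_sig = hashlib.md5(context).hexdigest()
crc_sig = zlib.crc32(context)

# Identical context information always yields identical signatures,
# which is the property the matching step relies on.
assert md5_sig == hashlib.md5(b"\x00\x01\x02\x03" * 16).hexdigest()
assert crc_sig == zlib.crc32(b"\x00\x01\x02\x03" * 16)
```

A CRC is cheaper to compute but more collision-prone than MD5; a collision would cause a needed save to be skipped, so the choice of signature function trades computation cost against that risk.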
For example, in an example involving an MD5 hash, the processing unit may be configured to apply an MD5 hashing algorithm to the context information to generate a corresponding signature (i.e., an MD5 hash value in this example).

As will be described in more detail below, the processing unit generates a signature to determine whether the context information of the cut-out process has changed and/or has been previously saved to external memory. For example, if the context information (or a subset thereof) of the cut-out process has been previously saved to the external memory and has not changed, a match between the signature of the context information of the cut-out process and the signature of the context information previously saved to the external memory indicates that the processing unit need not save the context information (or the subset thereof) of the cut-out process, thereby avoiding a redundant save operation. As another example, if the context information (or a subset thereof) of the cut-in process has been previously saved to the external memory and the context information in the on-chip memory of the processing unit is the same as the previously stored context information, a match between the signature of the context information of the cut-in process and the signature of the context information previously saved to the external memory indicates that the processing unit need not restore the context information (or the subset thereof), thereby avoiding an unnecessary restore operation. By avoiding redundant saving of previously stored information, such as by using signatures, the present disclosure may enable faster context switching.
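The save-side and restore-side checks just described can be sketched as follows. This is a minimal Python illustration, not the disclosed hardware mechanism; the dictionary standing in for external memory and all function names are hypothetical:

```python
import hashlib

# Hypothetical external memory: maps a signature to saved context information.
external_memory = {}

def on_cut_out(context: bytes) -> bool:
    """Save the cut-out process's context only if no matching signature
    exists, avoiding a redundant save. Returns True if a save occurred."""
    sig = hashlib.md5(context).hexdigest()
    if sig in external_memory:      # previously saved and unchanged
        return False
    external_memory[sig] = context
    return True

def on_cut_in(on_chip: bytes, sig: str) -> bytes:
    """Restore the cut-in process's context only if the on-chip contents
    do not already match the saved signature, avoiding a redundant restore."""
    if hashlib.md5(on_chip).hexdigest() == sig:
        return on_chip              # already current; no restore needed
    return external_memory[sig]

first_save = on_cut_out(b"process-A context")    # performs the save
second_save = on_cut_out(b"process-A context")   # skipped: signature matches
```

The same comparison serves both directions: a matching signature means the copy already in place (whether in external memory or on chip) is current, so the transfer can be skipped.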
Likewise, the present disclosure may enable faster context switching by avoiding redundant restoration of previously stored information, such as by using signatures.

In some examples, one or more of the techniques described herein may take advantage of any commonality among applications (e.g., games) that share the same engine (e.g., a game engine). For example, two games developed using the same game engine may share common resources, such as shaders, because both games use the same game-engine library to draw the same or similar items. In this way, two different games may have common or similar items (e.g., a tree, a wall, a texture, etc.). Although the size, orientation, and other attributes of common or similar items may differ between the two games, context switching between the two games may avoid redundant save and/or restore operations when the context information shared between the games relates to common or similar items. For example, the way a tree is drawn (color, texture, etc.) may be described in the context information, while the size and coordinates of the tree itself correspond to the data processed by the GPU. If the GPU detects that the context information is the same for the new process (e.g., the process associated with the second game) and the preempted process (e.g., the process associated with the first game), the GPU may be configured not to restore the context information but to restore only the data.

FIG. 1 is a block diagram illustrating an example computing device that may be configured to implement one or more aspects of the present disclosure. As shown in FIG. 1, computing device 2 may be, for example, a personal computer, desktop computer, laptop computer, computer workstation, tablet computing device, video game platform or console, wireless communication device (e.g., a mobile telephone, cellular telephone, satellite telephone, and/or mobile telephone handset), landline telephone, Internet telephone, handheld device (such as a portable video game device or personal digital assistant (PDA)), personal music player, video player, display device, television, television set-top box, server, intermediary network device, host computer, any mobile device, or any other type of device that processes and/or displays graphical data. In the example of FIG. 1, computing device 2 may include a central processing unit (CPU) 6, a system memory 10, and a graphics processing unit (GPU) 12. Computing device 2 may also include a display processor 14, a transceiver 3, a user interface 4, a video codec 7, and a display 8. In some examples, video codec 7 may be a software application, such as one of the one or more software applications 18 configured to be processed by CPU 6 or other components of computing device 2. In other examples, video codec 7 may be a hardware component other than CPU 6, a software application that runs on a component other than CPU 6, or a combination of hardware and software.

GPU 12 may be designed with a single instruction, multiple data (SIMD) structure. In a SIMD architecture, GPU 12 may include multiple SIMD processing elements, where each SIMD processing element executes the same command but on different data. A particular command executing on a particular SIMD processing element is called a thread. Each SIMD processing element may be considered to execute a different thread because the data for a given thread may be different; however, the thread executing on one processing element is the same command as that executing on the other processing elements. In this manner, the SIMD structure allows GPU 12 to perform many tasks in parallel (e.g., at the same time).

CPU 6 and/or GPU 12 are configured to perform context switching.
In some examples, context switching may be triggered by a scheduling processor, scheduling unit, or scheduling scheme in a multitasking environment. For example, CPU 6 and/or GPU 12 may include a scheduling processor, scheduling unit, or scheduling scheme configured to trigger a context switch. In other examples, context switching may be triggered by an interrupt handler based on one or more interrupts. In still other examples, context switching may be triggered when a transition between modes is needed, such as when switching from kernel mode to user mode.

As used herein, the term "processing unit" means CPU 6 and/or GPU 12. As used herein, the term "process" encompasses processes, threads, and/or tasks. Context switching is the case in which a processing unit switches from performing one process to performing a different process; it is called context switching because the processing unit switches from performing the first process to performing the second process. The cut-out process may be said to be preempted by the second process (or cut-in process). To ensure that any progress made during the execution of the first process is not lost when the processing unit switches to the second process, the context information associated with the first process currently stored in the on-chip memory of the processing unit may be saved to external memory (e.g., system memory 10) to allow restoration of that data when the processing unit switches back to the first process to resume its execution.

As will be described in more detail below, the techniques described herein may reduce the amount of context information saved and/or loaded (i.e., restored). By reducing the number of saves and restores of context information, processing resources can be used more efficiently by reducing latency (e.g., reducing the processing resources, such as clock cycles, required to save and/or restore context information).
The techniques described herein may also reduce power consumption. For example, the techniques described herein avoid blindly saving all context information when a process is cut out. In addition, the techniques described herein avoid blindly restoring all context information when a process is cut in. As will also be described in more detail below, the techniques described herein reduce the amount of context information saved and/or restored by determining whether the context information has changed.

In some examples, as used herein, the term "context information" means the minimal set of data, corresponding to a process, that is needed to resume the process after a context switch. In such an instance, the minimal set of data needed to resume the process after the context switch may refer to the smallest set of data that must reside on the processing unit to resume processing after the context switch, or may refer to the smallest set of data that is saved to and/or restored from external memory (e.g., off-chip memory such as system memory 10) to resume processing after the context switch. The smallest set of data saved for a cut-out process may or may not be the same smallest set of data that is restored for that process when the process is later cut in.

As one example, the processing unit may save context information corresponding to a process when the process is cut out, but the processing unit may not need to restore any of that context information, or may need to restore only a portion of the saved context information, immediately after the process is cut in during a subsequent context switch. As another example, the processing unit may not need to save context information corresponding to a process (or may need to save only some of it) when the process is cut out, but the processing unit may need to restore the context information, or portions of it, immediately after the process is cut in during a subsequent context switch.

Context information can be grouped into different types of context information. Groups can be based on the type of context information and/or the manner in which the context information is generated. For example, control register information, constant information, and other software programmed state information may each be an individual group of context information, or may belong within the same group of context information (e.g., software programmed state information). As another example, status flag information, dirty bit information, and other hardware modified state information may each be an individual group of context information, or may belong within the same group of context information (e.g., hardware modified state information). As another example, general-purpose register information, on-chip memory information, and other hardware generated state information may each be an individual group of context information, or may belong within the same group of context information (e.g., hardware generated state information).

In other examples, as used herein, the term "context information" means state information, which may include the minimal set of data, corresponding to a process, that is needed to resume the process after a context switch. In such an instance, the state information needed to resume the process after the context switch may refer to state information that must reside on the processing unit to resume processing after the context switch, or may refer to state information that is saved to and/or restored from external memory (e.g., off-chip memory such as system memory 10) to resume processing after the context switch.
The state information saved for a cut-out process may or may not be the same state information that is restored for that process when the process is later cut in.

In other examples, as used herein, the term "context information" means a subset of the minimal set of data, corresponding to a process, that is needed to resume the process after a context switch. For example, a subset of the minimal set of data may include one or more groups of context information. Groups can be based on the type of context information and/or the manner in which the context information is generated. For example, control register information, constant information, and other software programmed context information may each be an individual group of context information, or may belong within the same group of context information (e.g., software programmed context information). As another example, status flag information, dirty bit information, and other hardware modified context information may each be an individual group of context information, or may belong within the same group of context information (e.g., hardware modified context information). As another example, general-purpose register information, on-chip memory information, and other hardware generated context information may each be an individual group of context information, or may belong within the same group of context information (e.g., hardware generated context information).

In yet other examples, as used herein, the term "context information" means state information, which may be a subset of the minimal set of data, corresponding to a process, that is needed to resume the process after a context switch. For example, a subset of the minimal set of data may include one or more groups of state information. Groups may be based on the type of state information and/or the manner in which the state information is generated.
For example, control register information, constant information, and other software programmed state information may each be an individual group of state information, or may belong within the same group of state information (e.g., software programmed state information). As another example, status flag information, dirty bit information, and other hardware modified state information may each be an individual group of state information, or may belong within the same group of state information (e.g., hardware modified state information). As another example, general-purpose register information, on-chip memory information, and other hardware generated state information may each be an individual group of state information, or may belong within the same group of state information (e.g., hardware generated state information).

In this disclosure, the use of "state information," "context information," or any other term does not control which of the definitions of "context information" applies to a particular instance or embodiment. Rather, the definitions of "context information" are intended to facilitate a detailed description of the examples set forth throughout this disclosure. In this regard, unless expressly stated otherwise, one or more of the definitions of "context information" set forth herein apply to each instance of the techniques described herein. In addition, the terms "state" and "context" may or may not be interchangeable, depending on the example.

The context information may include one or more of software programmed state information, hardware modified state information, hardware generated state information, and/or data information. Software programmed state information may include control register information, constant information, and the like.
For example, software programmed state information for GPU 12 may include a stream of commands received by GPU 12 from, for example, GPU driver 22, which executes on CPU 6, for a particular process. In this example, such state information may be found in, for example, a control register. The hardware modified state information may include any changes made to the software programmed state information during the execution of the corresponding process. For example, the hardware modified state information may include status flag information, dirty bit information, and the like. The hardware generated state information may include state information generated by the hardware as a result of executing the corresponding process. For example, hardware generated state information may include general-purpose register information, on-chip memory information, and the like.

As used herein, a "cut-out" process during a context switch may be a process that is performed on a processing unit (e.g., GPU 12) until the context switch and that is swapped out for a "cut-in" process. The "cut-in" process may be a process that is performed on a processing unit (e.g., GPU 12) as a result of the context switch. After the context switch, the cut-out process is a process that was previously performed on the processing unit but is no longer performed because of the context switch.

As an example, context switching from a first process to a second process means that the first process is the cut-out process and the second process is the cut-in process. As another example, context switching from a process and context switching to a process refer to a cut-out process and a cut-in process, respectively. A cut-in process can later be cut out by another context switch. Context switching can cause one or more processes to be cut out or cut in one or more times. For example, a long-running process can be cut out and cut in multiple times over multiple context switches to accommodate the execution of different processes.
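Because context information can be grouped as described above, a signature can be generated per group, so that a context switch need save only the groups whose signatures have changed. The sketch below is illustrative only; the group names and byte strings are hypothetical stand-ins for software programmed, hardware modified, and hardware generated state:

```python
import hashlib

def signatures(groups: dict) -> dict:
    """Compute one MD5 signature per group of context information."""
    return {name: hashlib.md5(blob).hexdigest() for name, blob in groups.items()}

def changed_groups(current: dict, previous_sigs: dict) -> list:
    """Return the groups whose signatures no longer match the saved ones;
    only these need to be written back to external memory."""
    return [name for name, blob in current.items()
            if previous_sigs.get(name) != hashlib.md5(blob).hexdigest()]

groups = {
    "software_programmed": b"control registers + constants",
    "hardware_modified":   b"status flags + dirty bits",
    "hardware_generated":  b"general registers + on-chip memory",
}
saved = signatures(groups)

# Execution modifies only the hardware modified state...
groups["hardware_modified"] = b"status flags + updated dirty bits"

# ...so only that group must be saved on the next context switch.
to_save = changed_groups(groups, saved)
```

Grouping in this way lets the per-group comparisons skip the unchanged software programmed and hardware generated groups instead of saving all context information wholesale.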
In this regard, a cut-in process may be a newly executed process resulting from a context switch, or it may be a process performed as a result of a context switch that is not considered newly executed because the cut-in process may have been a cut-out process during a previous context switch (i.e., rather than starting a new execution, execution may be considered resumed).

As another example, a cut-out process may refer to a process that is performed on a processing unit and preempted by another process (e.g., a cut-in process). As another example, a cut-out process may refer to a process that is interrupted or paused (e.g., stopped, interrupted, postponed, etc.) for a cut-in process. As another example, a cut-out process may refer to a process swapped out for a cut-in process. As another example, a cut-out process may refer to a process that is scheduled to be swapped out for a cut-in process. As another example, a cut-out process may refer to a preempted process.

As another example, a cut-in process may refer to a process that preempts another process (e.g., a cut-out process) in order to be performed on the processing unit, and that may or may not yet be executing on the processing unit depending on whether the context switch has completed (e.g., been executed). As another example, a cut-in process may refer to a process whose execution on a processing unit causes an interruption or pause (e.g., stop, interruption, postponement, etc.) of the execution of another process (e.g., a cut-out process). As another example, a cut-in process may refer to a process that has been swapped in for a cut-out process. As another example, a cut-in process may refer to a process that is scheduled to be swapped in for a cut-out process.

The term "cut-out" process does not imply that a context switch is currently being performed or has completed (e.g., been executed).
For example, a cut-out process may refer to a process performed on a processing unit that will be, or is otherwise scheduled to be, swapped out for another process (e.g., a cut-in process), or may refer to a process that is no longer performed on the processing unit after a context switch. As another example, a cut-out process may refer to a process before, during, or after a context switch. Similarly, the term "cut-in" process does not imply that a context switch is currently being performed or has completed (e.g., been executed). For example, a cut-in process may refer to a process that is executing, or is scheduled for execution, on a processing unit in place of another process (e.g., a cut-out process preempted by the cut-in process). Conversely, the swapped-out process may refer to a process that is not performed on the processing unit due to a context switch (e.g., after the context switch). As another example, a cut-in process may refer to a process before, during, or after a context switch.

In some examples, system memory 10 is a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that system memory 10 is non-movable or that its contents are static. As one example, system memory 10 may be removed from computing device 2 and moved to another device. As another example, a memory generally similar to system memory 10 may be inserted into computing device 2. In some examples, a non-transitory storage medium may store data that can change over time (e.g., in RAM).

Although one or more software applications 18 are conceptually shown as internal to CPU 6, it should be understood that the one or more software applications 18 may be stored in system memory 10, in memory that is external to but accessible by computing device 2, or in a combination thereof. The external memory may, for example, be only intermittently accessible to computing device 2.

Display processor 14 may utilize a tile-based architecture. In some examples, a tile is an area representation of pixels having a height and a width, where the height is one or more pixels and the width is one or more pixels. In such examples, tiles may be rectangular or square in nature. In other examples, tiles may be shapes other than square or rectangular. Display processor 14 may fetch multiple image layers (e.g., foreground and background) from at least one memory. For example, display processor 14 may fetch an image layer from a frame buffer to which a GPU outputs graphics data in the form of pixel representations, and/or from other memory.

As another example, display processor 14 may fetch image layers from on-chip memory of video codec 7, on-chip memory of GPU 12, output buffer 16, codec buffer 17, and/or system memory 10. The multiple image layers may include a foreground layer and/or a background layer. As used herein, the term "image" is not intended to mean only a static image. Rather, an image or image layer may be associated with a static image (e.g., the image or image layer may be a still image when blended) or with video (e.g., the image or image layer may be a single image in a sequence of images that, when viewed in sequence, produces a moving picture or video).

Display processor 14 may process pixels from multiple layers. Example pixel processing that may be performed by display processor 14 includes upsampling, downsampling, scaling, rotation, and other pixel processing. For example, display processor 14 may process pixels associated with foreground image layers and/or background image layers. Display processor 14 may blend pixels from multiple layers and write the blended pixels back into memory in tile format.
The blended pixels are then read from memory in raster format and sent to display 8 for presentation.

Video codec 7 may receive encoded video data. Computing device 2 may receive encoded video data from, for example, a storage medium, a network server, or a source device (e.g., a device, such as a server, that encodes or otherwise transmits encoded video data to computing device 2). In other examples, computing device 2 may itself generate the encoded video data. For example, computing device 2 may include a camera for capturing still images or video, and the captured data (e.g., video data) may be encoded by video codec 7. The encoded video data may include a variety of syntax elements generated by a video encoder for use by a video decoder, such as video codec 7, in decoding the video data.

Although video codec 7 is described herein as being both a video encoder and a video decoder, it should be understood that video codec 7 may, in other examples, be a video decoder without encoding functionality. Video data decoded by video codec 7 may be sent directly to display processor 14, sent directly to display 8, or sent to memory accessible to display processor 14 or GPU 12, such as system memory 10, output buffer 16, or codec buffer 17. In the illustrated example, video codec 7 is connected to display processor 14, meaning that decoded video data is sent directly to display processor 14 and/or to memory accessible to display processor 14. In this example, display processor 14 may issue one or more memory requests to obtain decoded video data from memory in a manner similar to issuing one or more memory requests to obtain graphics (still image or video) data from a memory associated with GPU 12 (e.g., output buffer 16).

Video codec 7 may operate according to a video compression standard, such as the ITU-T H.264 Advanced Video Coding (AVC) standard or the ITU-T H.265 High Efficiency Video Coding (HEVC) standard.
However, the techniques of this disclosure are not limited to any particular coding standard.

Transceiver 3, video codec 7, and display processor 14 may be part of the same integrated circuit (IC) as CPU 6 and/or GPU 12, may be external to one or more ICs that include CPU 6 and/or GPU 12, or may be formed in an IC that is external to the IC that includes CPU 6 and/or GPU 12. For example, video codec 7 may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combination thereof.

Computing device 2 may include additional modules or processing units not shown in FIG. 1 for the sake of clarity. For example, in examples where computing device 2 is a mobile wireless telephone, computing device 2 may include a speaker and a microphone, neither of which is shown in FIG. 1, or may include a speaker where computing device 2 is a media player. Computing device 2 may also include a camera. Moreover, the various modules and units shown in computing device 2 may not be necessary in every instance of computing device 2. For example, in instances where computing device 2 is a desktop computer or another device equipped to interface with an external user interface or display, user interface 4 and display 8 may be external to computing device 2.

Examples of user interface 4 include, but are not limited to, trackballs, mice, keyboards, and other types of input devices. User interface 4 may also be a touch screen and may be incorporated as part of display 8. Transceiver 3 may include circuitry that allows wireless or wired communication between computing device 2 and another device or a network.
Transceiver 3 may include modulators, demodulators, amplifiers, and other such circuitry for wired or wireless communications. In some examples, transceiver 3 may be integrated with CPU 6. CPU 6 may be a microprocessor, such as a central processing unit (CPU), configured to process instructions of a computer program for execution. CPU 6 may include a general-purpose or special-purpose processor that controls the operation of computing device 2. A user may provide input to computing device 2 to cause CPU 6 to execute one or more software applications, such as one or more software applications 18. The one or more software applications 18 executing on CPU 6 (or on one or more other components of computing device 2) may include, for example, an operating system, a word processor application, an email application, a spreadsheet application, a media player application, a video game application, a graphical user interface application, or another type of software application that uses 2D or 3D graphics data. In addition, CPU 6 may execute a GPU driver 22 for controlling the operation of GPU 12. The user may provide input to computing device 2 through one or more input devices (not shown), such as a keyboard, a mouse, a microphone, a touch pad, or another input device coupled to computing device 2 via user interface 4. A software application 18 executing on CPU 6 may, for example, include one or more graphics rendering instructions that instruct CPU 6 to cause the rendering of graphics data to display 8. The instructions may include instructions for processing 3D graphics and instructions for processing 2D graphics. In some examples, the software instructions may conform to a graphics application programming interface (API) 19.
The graphics API 19 may be, for example, an Open Graphics Library (OpenGL) API, an Open Graphics Library Embedded Systems (OpenGL ES) API, a Direct3D API, an X3D API, a RenderMan API, a WebGL API, an Open Computing Language (OpenCL™) API, or any other public or proprietary standard GPU computing API. To process the graphics rendering instructions of one or more software applications 18 executing on CPU 6, CPU 6 may, during execution of the one or more software applications 18, issue one or more graphics rendering commands to GPU 12 (e.g., via GPU driver 22) to cause GPU 12 to perform some or all of the rendering of the graphics data. In some examples, the graphics data to be rendered may include a list of graphics primitives, such as points, lines, triangles, quadrilaterals, triangle strips, and the like. One or more software applications 18 may include one or more drawing instructions that instruct GPU 12 to render a graphical user interface (GUI), a graphics scene, graphics data, or other graphics-related data. For example, the drawing instructions may include instructions that define a set of one or more graphics primitives to be rendered by GPU 12. In some examples, the drawing instructions may collectively define all or part of a plurality of windowing surfaces used in a GUI. In additional examples, the drawing instructions may collectively define all or part of a graphics scene that contains one or more graphics objects within a model space or world space defined by the application. GPU 12 may be configured to perform graphics operations to render one or more graphics primitives to display 8. Thus, when one of the software applications 18 executing on CPU 6 requires graphics processing, CPU 6 may provide graphics rendering commands along with graphics data to GPU 12 for rendering to display 8.
The graphics data may include, for example, drawing commands, state information, primitive information, texture information, and the like. In some cases, GPU 12 may be built with a highly parallel structure that provides more efficient processing of graphics-related operations than CPU 6. For example, GPU 12 may include multiple processing elements, such as shader units, that are configured to operate on multiple vertices or pixels in a parallel manner. In some cases, the highly parallel nature of GPU 12 allows GPU 12 to draw graphics images (e.g., GUIs and two-dimensional (2D) and/or three-dimensional (3D) graphics scenes) onto display 8 more quickly than drawing the scenes directly to display 8 using CPU 6. One or more software applications 18 may invoke GPU driver 22 to issue one or more commands to GPU 12 for rendering one or more graphics primitives into displayable graphics images (e.g., displayable graphics data). For example, one or more software applications 18 may, when executed, invoke GPU driver 22 to provide primitive definitions to GPU 12. In some cases, the primitive definitions may be provided to GPU 12 in the form of a list of drawing primitives (e.g., triangles, rectangles, triangle fans, triangle strips, etc.). The primitive definitions may include vertex specifications that specify one or more vertices associated with the primitives to be rendered. The vertex specifications may include position coordinates for each vertex and, in some cases, other attributes associated with the vertex, such as color coordinates, normal vectors, and texture coordinates.
The primitive definitions may also include primitive type information (e.g., triangle, rectangle, triangle fan, triangle strip, etc.), scaling information, rotation information, and the like. Based on the instructions issued by the one or more software applications 18 to GPU driver 22, GPU driver 22 may formulate one or more commands that specify one or more operations for GPU 12 to perform in order to render the primitive. When GPU 12 receives a command from CPU 6, a graphics processing pipeline may execute on a shader processor of GPU 12 to decode the command and to configure the graphics processing pipeline to perform the operation specified in the command. For example, an input assembler in the graphics processing pipeline may read primitive data and assemble the data into primitives for use by the other stages of the graphics processing pipeline. After performing the specified operations, the graphics processing pipeline outputs the rendered data to an output buffer 16 accessible to display processor 14. In some examples, the graphics processing pipeline may include fixed-function logic and/or may execute on a programmable shader core. Depending on the example, output buffer 16 stores destination pixels for GPU 12 and/or video codec 7. Each destination pixel may be associated with a unique screen pixel location. Similarly, depending on the example, codec buffer 17 may store destination pixels for video codec 7. Codec buffer 17 may be viewed as a frame buffer associated with video codec 7. In some examples, output buffer 16 and/or codec buffer 17 may store color components and a destination alpha value for each destination pixel. For example, output buffer 16 and/or codec buffer 17 may store pixel data according to any format.
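The association of each destination pixel with a unique screen pixel location can be sketched as a linear frame-buffer addressing scheme. This is a minimal illustration only, not the patent's implementation; the buffer dimensions and helper names are assumed:

```python
# Minimal sketch of an output buffer in which each destination pixel occupies
# a slot identified by a unique screen pixel location (x, y). Pixels are
# stored in raster order: index = y * width + x.

WIDTH, HEIGHT = 4, 3  # assumed tiny buffer for illustration

def pixel_index(x, y, width=WIDTH):
    """Return the linear index of the destination pixel at (x, y)."""
    return y * width + x

# One slot per screen pixel location, initialized to opaque black RGBA.
output_buffer = [(0, 0, 0, 255)] * (WIDTH * HEIGHT)

def write_pixel(buf, x, y, rgba):
    """Store destination pixel data at its unique screen pixel location."""
    buf[pixel_index(x, y)] = rgba

buf = list(output_buffer)
write_pixel(buf, 2, 1, (255, 0, 0, 255))  # store a red destination pixel
```

Because each (x, y) maps to exactly one index, writing a destination pixel never disturbs any other screen location.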
For example, output buffer 16 and/or codec buffer 17 may store Red, Green, Blue, Alpha (RGBA) components for each pixel, where the "RGB" components correspond to color values and the "A" component corresponds to a destination alpha value. As another example, output buffer 16 and/or codec buffer 17 may store pixel data according to a YCbCr color format, a YUV color format, an RGB color format, or any other color format. Although output buffer 16 and system memory 10 are illustrated as separate memory units, in other examples output buffer 16 may be part of system memory 10. For example, output buffer 16 may be allocated memory space within system memory 10. Output buffer 16 may constitute a frame buffer. In addition, as discussed above, output buffer 16 may also be capable of storing any suitable data other than pixels. Similarly, although codec buffer 17 and system memory 10 are illustrated as separate memory units, in other examples codec buffer 17 may be part of system memory 10. For example, codec buffer 17 may be allocated memory space within system memory 10. Codec buffer 17 may constitute a video codec buffer or a frame buffer. In addition, as discussed above, codec buffer 17 may also be capable of storing any suitable data other than pixels. In some examples, although output buffer 16 and codec buffer 17 are illustrated as separate memory units, output buffer 16 and codec buffer 17 may be the same buffer or different portions of the same buffer. In some cases, GPU 12 may be integrated into the motherboard of computing device 2. In other cases, GPU 12 may be present on a graphics card that is installed in a port in the motherboard of computing device 2, or may otherwise be incorporated within a peripheral device configured to interoperate with computing device 2.
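As one illustration of the RGBA storage just described, a 32-bit destination pixel can pack the 8-bit R, G, and B color components together with the 8-bit destination alpha ("A") component. This is a sketch only; the bit layout shown is one common convention, not necessarily the layout used by output buffer 16 or codec buffer 17:

```python
def pack_rgba(r, g, b, a):
    """Pack four 8-bit components into one 32-bit pixel word (R in the high byte)."""
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(word):
    """Recover the (r, g, b, a) components from a packed 32-bit pixel word."""
    return ((word >> 24) & 0xFF, (word >> 16) & 0xFF,
            (word >> 8) & 0xFF, word & 0xFF)

# Color components plus the destination alpha value for one destination pixel.
pixel = pack_rgba(0x12, 0x34, 0x56, 0xFF)
```

A YCbCr or YUV layout would pack different components in the same manner; only the interpretation of the bytes changes.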
In some examples, GPU 12 may be on-chip with CPU 6, e.g., in a system on chip (SoC), and GPU 12 may include one or more processors, such as one or more microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other equivalent integrated or discrete logic circuitry. GPU 12 may also include one or more processor cores, such that GPU 12 may be referred to as a multi-core processor. In some examples, GPU 12 may be specialized hardware that includes integrated and/or discrete logic circuitry providing GPU 12 with massively parallel processing capabilities suitable for graphics processing. In some cases, GPU 12 may also include general-purpose processing capabilities and may be referred to as a general-purpose GPU (GPGPU) when implementing general-purpose processing tasks, i.e., so-called "compute" tasks. In some examples, graphics memory 20 may be part of GPU 12. For example, graphics memory 20 may be on-chip memory, i.e., memory physically integrated into the integrated circuit chip of GPU 12. If graphics memory 20 is on-chip, GPU 12 may be able to read values from graphics memory 20 or write values to graphics memory 20 more quickly than it could read values from system memory 10 or write values to system memory 10 over a system bus. Thus, GPU 12 can read data from graphics memory 20 and write data to graphics memory 20 without using a bus. In other words, GPU 12 may process data locally using local storage instead of off-chip memory. Such graphics memory 20 may be referred to as on-chip memory. This allows GPU 12 to operate in a more efficient manner by eliminating the need for GPU 12 to read and write data over the bus, where bus traffic can be heavy and bandwidth contested. However, in some cases, GPU 12 may not include a separate memory, but instead may utilize system memory 10 via the bus.
Graphics memory 20 may include one or more volatile or non-volatile memories or storage devices, such as random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, magnetic data media, or optical storage media. In some examples, GPU 12 may store a fully formed image in system memory 10. Display processor 14 may retrieve the image from system memory 10 and/or output buffer 16 and output values that cause the pixels of display 8 to illuminate to display the image. In some examples, display processor 14 may be configured to perform 2D operations on the data to be displayed, including scaling, rotation, blending, and compositing. Display 8 may be the display of computing device 2 that displays the image content generated by GPU 12. Display 8 may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a cathode ray tube (CRT) display, a plasma display, or another type of display device. In some examples, display 8 may be integrated within computing device 2. For example, display 8 may be the screen of a mobile phone. In other examples, display 8 may be a stand-alone device coupled to computing device 2 via a wired or wireless communication link. For example, display 8 may be a computer monitor or flat panel display connected to a computing device (e.g., a personal computer, a mobile computer, a tablet, a mobile phone, etc.) via a cable or a wireless link. CPU 6 processes instructions for execution within computing device 2. CPU 6 may use a driver (e.g., GPU driver 22, which may be implemented in software executed by CPU 6) to generate a command stream for execution by GPU 12.
That is, CPU 6 may generate a command stream that defines a set of operations to be performed by GPU 12. CPU 6 may generate a command stream to be executed by GPU 12 that causes viewable content to be displayed on display 8. For example, CPU 6 may generate a command stream that provides instructions for GPU 12 to render graphics data, which may be stored in output buffer 16 for display on display 8. In this example, CPU 6 may generate a command stream that is executed by a graphics rendering pipeline. Additionally or alternatively, CPU 6 may generate a command stream to be executed by GPU 12 that causes GPU 12 to perform other operations. For example, in some cases, CPU 6 may be a host processor that generates a command stream for using GPU 12 as a general-purpose graphics processing unit (GPGPU). In this manner, GPU 12 may act as a secondary processor for CPU 6. For example, GPU 12 may carry out a variety of general-purpose computing functions traditionally carried out by CPU 6. Examples include a variety of image processing functions, including video decoding and post-processing (e.g., deblocking, noise reduction, color correction, etc.), as well as other application-specific image processing functions (e.g., face detection/recognition, pattern recognition, wavelet transforms, etc.). In some examples, GPU 12 may collaborate with CPU 6 to execute such GPGPU applications. For example, CPU 6 may offload certain functions to GPU 12 by providing GPU 12 with a command stream for execution by GPU 12. In this example, CPU 6 may be the host processor and GPU 12 may be the secondary processor. CPU 6 may communicate with GPU 12 through GPU driver 22 to direct GPU 12 to execute the GPGPU application. GPU driver 22 may communicate to GPU 12 one or more command streams that may be executed by the shader units of GPU 12. GPU 12 may include a command processor 24 that may receive the one or more command streams from GPU driver 22.
Command processor 24 may be any combination of hardware and software configured to receive and process one or more command streams. Thus, command processor 24 is a stream processor. In some examples, any other suitable stream processor may be used in place of command processor 24 to receive and process the one or more command streams and to perform the techniques disclosed herein. In one example, command processor 24 may be a hardware processor. In the example shown in FIG. 1, command processor 24 may be included in GPU 12. In other examples, command processor 24 may be a unit separate from CPU 6 and GPU 12. Command processor 24 may also be referred to as a stream processor, a command/stream processor, or the like, to indicate that it may be any processor configured to receive streams of commands and/or operations. Command processor 24 may process one or more command streams, which includes scheduling operations contained in the one or more command streams for execution by GPU 12. In particular, command processor 24 may process the one or more command streams and schedule the operations in the one or more command streams for execution by the shader units. In operation, GPU driver 22 sends to command processor 24 a command stream comprising a series of operations to be performed by GPU 12. Command processor 24 may receive the stream of operations comprising the command stream, may process the operations of the command stream sequentially based on the order of the operations in the command stream, and may schedule the operations in the command stream for execution by the shader processors of the shader units of GPU 12. FIG. 2 is a block diagram illustrating an example implementation of CPU 6, GPU 12, and system memory 10 of FIG. 1 in further detail. CPU 6 may include at least one software application 18, graphics API 19, and GPU driver 22, each of which may be one or more software applications or services executing on CPU 6.
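The in-order behavior described for command processor 24 can be sketched as a small stream processor that consumes a command stream and dispatches each operation in stream order. The command names and the callback standing in for a shader unit are hypothetical; this is an illustration, not the hardware design:

```python
from collections import deque

class CommandProcessor:
    """Toy stream processor: receives a command stream and schedules its
    operations for execution strictly in the order they appear."""

    def __init__(self, shader_unit):
        self.queue = deque()
        self.shader_unit = shader_unit  # callable that "executes" one operation

    def receive(self, command_stream):
        self.queue.extend(command_stream)  # preserve stream order

    def run(self):
        executed = []
        while self.queue:
            op = self.queue.popleft()      # sequential, in-order processing
            executed.append(self.shader_unit(op))
        return executed

log = []
cp = CommandProcessor(shader_unit=lambda op: (log.append(op), op)[1])
cp.receive(["SET_STATE", "DRAW_TRIANGLES", "FLUSH"])  # hypothetical commands
result = cp.run()
```

Because the queue is drained front-to-back, the dispatch order seen by the shader unit is exactly the order of operations in the submitted stream.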
GPU 12 may include a graphics processing pipeline 30 that includes a plurality of graphics processing stages that operate together to execute graphics processing commands. Graphics processing pipeline 30 is one example of a graphics processing pipeline, and the present disclosure is applicable to any other graphics processing pipeline or graphics processor. GPU 12 may be configured to execute graphics processing pipeline 30 in a variety of rendering modes, including a binning rendering mode and a direct rendering mode. Each process may have corresponding context information during rendering. The context information may include information corresponding to the process associated with graphics processing pipeline 30. For example, the process may be a graphics processing pipeline 30 process. In the context of context switching in GPU 12, the context information may include or otherwise constitute rendering state information. GPU 12 may switch from one process context to another at any point in graphics processing pipeline 30. In some examples, GPU 12 may switch from one application context to another application context, which may involve drawing (e.g., graphics) or dispatching (e.g., compute). For example, GPU 12 may switch from one drawing context to another drawing or dispatch context. In another example, GPU 12 may switch from one dispatch context to another dispatch or drawing context. In other examples, GPU 12 may context switch when one or more processes executing on GPU 12 are preempted by one or more processes sent by CPU 6 to GPU 12 for execution by GPU 12, e.g., preempted by a CPU process or by one or more commands of a command stream sent from GPU driver 22 executing on CPU 6.
Thus, in the examples described throughout this disclosure, the cut-in process and/or the cut-out process may be a process that is transferred from a CPU (e.g., CPU 6) to a GPU (e.g., GPU 12). In other examples, GPU 12 may context switch when one or more processes executing on GPU 12 are preempted by one or more processes sent to GPU 12 by a workload processing unit (e.g., a CPU, any other processing unit, or any workload processing unit on GPU 12) for execution by GPU 12. Thus, in the examples described throughout this disclosure, the cut-in process and/or the cut-out process may be a process that is transferred from a workload processing unit to a GPU (e.g., GPU 12). As another example, the cut-in process and/or the cut-out process may be a process that is transferred from one portion of a workload processing unit of a GPU to another portion of a processing unit of the GPU. In other examples, the cut-in process and/or the cut-out process described throughout this disclosure may be a process that is transferred from any processing unit to a GPU (e.g., GPU 12). In other examples, CPU 6 may context switch when one or more processes executing on CPU 6 are preempted by one or more processes sent by GPU 12 to CPU 6 for execution by CPU 6, e.g., preempted by a process offloaded by GPU 12 to CPU 6 for processing. Thus, in the examples described throughout this disclosure, the cut-in process and/or the cut-out process may be a process that is transferred from a GPU (e.g., GPU 12) to a CPU (e.g., CPU 6). In other examples, the cut-in process and/or the cut-out process described throughout this disclosure may be a process that is transferred from any processing unit to any other processing unit. In other examples, the cut-in process and/or the cut-out process described throughout this disclosure may be any cut-in process and/or any cut-out process. As shown in FIG.
2, graphics processing pipeline 30 may include a command engine 32, a geometry processing stage 34, a rasterization stage 36, and a pixel processing pipeline 38. Pixel processing pipeline 38 may include a texture engine 39. Each of the components in graphics processing pipeline 30 may be implemented as a fixed-function component, as a programmable component (e.g., as part of a shader program executing on a programmable shader unit), or as a combination of fixed-function and programmable components. Memory available to or otherwise accessible by CPU 6 and GPU 12 may include, for example, system memory 10, output buffer 16, codec buffer 17, any on-chip memory of CPU 6, and any on-chip memory of GPU 12. Output buffer 16, which may be referred to as a frame buffer in some examples, may store rendered image data. One or more software applications 18 may be any applications that utilize, or that do not utilize, the functionality of GPU 12. For example, the one or more applications 18 may be any applications whose execution by CPU 6 causes (or does not cause) one or more commands to be offloaded to GPU 12 for processing. Examples of the one or more applications 18 may include an application that causes CPU 6 to offload 3D rendering commands to GPU 12 (e.g., a video game application), an application that causes CPU 6 to offload 2D rendering commands to GPU 12 (e.g., a user interface application), or an application that causes CPU 6 to offload general computing tasks to GPU 12 (e.g., a GPGPU application). As another example, the one or more applications 18 may include firmware residing on any component of computing device 2, such as CPU 6, GPU 12, display processor 14, or any other component.
The firmware may or may not utilize or invoke the functionality of GPU 12. One or more software applications 18 may include one or more drawing instructions that instruct GPU 12 to render a graphical user interface (GUI) and/or a graphics scene. For example, the drawing instructions may include instructions that define a set of one or more graphics primitives to be rendered by GPU 12. In some examples, the drawing instructions may collectively define all or part of a plurality of windowing surfaces used in a GUI. In additional examples, the drawing instructions may collectively define all or part of a graphics scene that contains one or more graphics objects within a model space or world space defined by the application. One or more software applications 18 may invoke GPU driver 22, via graphics API 19, to issue one or more commands to GPU 12 for rendering one or more graphics primitives into displayable graphics images. For example, one or more software applications 18 may invoke GPU driver 22, via graphics API 19, to provide primitive definitions to GPU 12. In some cases, the primitive definitions may be provided to GPU 12 in the form of a list of drawing primitives (e.g., triangles, rectangles, triangle fans, triangle strips, etc.). The primitive definitions may include vertex specifications that specify one or more vertices associated with the primitives to be rendered. The vertex specifications may include position coordinates for each vertex and, in some cases, other attributes associated with the vertex, such as color coordinates, normal vectors, and texture coordinates. The primitive definitions may also include primitive type information (e.g., triangle, rectangle, triangle fan, triangle strip, etc.), scaling information, rotation information, and the like.
Based on the instructions issued by the one or more software applications 18 to GPU driver 22, GPU driver 22 may formulate one or more commands that specify one or more operations for GPU 12 to perform in order to render the primitive. When GPU 12 receives a command from CPU 6, graphics processing pipeline 30 decodes the command and configures one or more processing elements within graphics processing pipeline 30 to perform the operation specified in the command. After performing the specified operations, graphics processing pipeline 30 outputs the rendered data to a memory accessible to display processor 14 (e.g., output buffer 16). Graphics processing pipeline 30 may be configured to execute in one of a plurality of different rendering modes, including a binning rendering mode and a direct rendering mode. GPU driver 22 may be further configured to compile one or more shader programs and to download the compiled shader programs onto one or more programmable shader units contained within GPU 12. The shader programs may be written in a high-level shading language, such as the OpenGL Shading Language (GLSL), the High Level Shading Language (HLSL), the C for Graphics (Cg) shading language, and so on. The compiled shader programs may include one or more instructions that control the operation of the programmable shader units within GPU 12. For example, the shader programs may include vertex shader programs and/or pixel shader programs. A vertex shader program may control the execution of a programmable vertex shader unit or a unified shader unit, and may include instructions that specify one or more per-vertex operations.
A pixel shader program may control the execution of a programmable pixel shader unit or a unified shader unit, and may include instructions that specify one or more per-pixel operations. Graphics processing pipeline 30 may be configured to receive one or more graphics processing commands from CPU 6, via GPU driver 22, and to execute the graphics processing commands to generate displayable graphics images. As discussed above, graphics processing pipeline 30 includes a plurality of stages that operate together to execute graphics processing commands. It should be noted, however, that such stages need not necessarily be implemented in separate hardware blocks. For example, portions of geometry processing stage 34 and pixel processing pipeline 38 may be implemented as part of a unified shader unit. Graphics processing pipeline 30 may be configured to execute in one of a plurality of different rendering modes, including a binning rendering mode and a direct rendering mode. Command engine 32 may receive graphics processing commands and configure the remaining processing stages within graphics processing pipeline 30 to perform various operations for carrying out the graphics processing commands. The graphics processing commands may include, for example, drawing commands and graphics state commands. The drawing commands may include vertex specification commands that specify the position coordinates of one or more vertices and, in some cases, other attribute values associated with each of the vertices, such as color coordinates, normal vectors, texture coordinates, and fog coordinates. The graphics state commands may include primitive type commands, transformation commands, lighting commands, and the like. The primitive type commands may specify the type of primitive to be rendered and/or how the vertices are combined to form a primitive. The transformation commands may specify the types of transformations to perform on the vertices.
The lighting commands may specify the type, direction, and/or placement of different lights within the graphics scene. Command engine 32 may cause geometry processing stage 34 to perform geometry processing with respect to the vertices and/or primitives associated with one or more received commands. Geometry processing stage 34 may perform per-vertex operations and/or primitive setup operations on one or more vertices in order to generate primitive data for rasterization stage 36. Each vertex may be associated with a set of attributes, such as position coordinates, color values, a normal vector, and texture coordinates. Geometry processing stage 34 modifies one or more of these attributes according to various per-vertex operations. For example, geometry processing stage 34 may perform one or more transformations on the vertex position coordinates to produce modified vertex position coordinates. Geometry processing stage 34 may, for example, apply one or more of a modeling transformation, a viewing transformation, a projection transformation, a ModelView transformation, a ModelViewProjection transformation, a viewport transformation, and a depth range scaling transformation to the vertex position coordinates to produce the modified vertex position coordinates. In some cases, the vertex position coordinates may be model space coordinates, and the modified vertex position coordinates may be screen space coordinates. The screen space coordinates may be obtained after application of the modeling, viewing, projection, and viewport transformations. In some cases, geometry processing stage 34 may also perform per-vertex lighting operations on the vertices to generate modified color coordinates for the vertices.
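The chain of transformations described above, from model space coordinates through a ModelViewProjection transformation and a viewport transformation to screen space coordinates, can be sketched with plain 4x4 matrix math. The identity ModelViewProjection matrix and the viewport dimensions below are assumed purely for illustration:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-component vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Assumed illustrative ModelViewProjection matrix: identity, i.e. the vertex
# is treated as already in clip space. A real pipeline composes the modeling,
# viewing, and projection transformations here.
MVP = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def to_screen(vertex, width=640, height=480):
    """Apply the MVP transform, the homogeneous divide, and a viewport
    transform mapping normalized device coordinates [-1, 1] to screen space."""
    x, y, z, w = mat_vec(MVP, vertex)
    ndc_x, ndc_y = x / w, y / w          # perspective (homogeneous) divide
    sx = (ndc_x + 1) * 0.5 * width       # viewport transform
    sy = (1 - ndc_y) * 0.5 * height      # flip y: screen origin at top-left
    return sx, sy

screen = to_screen([0.0, 0.0, 0.0, 1.0])  # origin maps to the screen center
```

Swapping the identity matrix for a composed model, view, and projection matrix is the only change needed to transform genuine model-space vertices.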
Geometry processing stage 34 may also perform other operations including, for example, normal transformations, normal normalization operations, view volume clipping, homogeneous division, and/or backface culling operations. Geometry processing stage 34 may generate primitive data that includes a set of one or more modified vertices that define a primitive to be rasterized, as well as data that specifies how the vertices combine to form a primitive. Each of the modified vertices may include, for example, modified vertex position coordinates and processed vertex attribute values associated with the vertex. The primitive data may collectively correspond to a primitive to be rasterized by further stages of graphics processing pipeline 30. Conceptually, each vertex may correspond to a corner of a primitive where two edges of the primitive meet. Geometry processing stage 34 may provide the primitive data to rasterization stage 36 for further processing. In some examples, all or part of geometry processing stage 34 may be implemented by one or more shader programs executing on one or more shader units. For example, in such examples, geometry processing stage 34 may be implemented by a vertex shader, a geometry shader, or any combination thereof. In other examples, geometry processing stage 34 may be implemented as a fixed-function hardware processing pipeline, or as a combination of fixed-function hardware and one or more shader programs executing on one or more shader units. Rasterization stage 36 is configured to receive, from geometry processing stage 34, primitive data that represents a primitive to be rasterized, and to rasterize the primitive to generate a plurality of source pixels that correspond to the rasterized primitive. In some examples, rasterization stage 36 may determine which screen pixel locations are covered by the primitive to be rasterized, and generate a source pixel for each screen pixel location determined to be covered by the primitive.
Rasterization stage 36 may determine which screen pixel locations are covered by a primitive by using techniques such as edge-walking techniques, evaluating edge equations, and the like. Rasterization stage 36 may provide the resulting source pixels to pixel processing pipeline 38 for further processing. The source pixels generated by rasterization stage 36 may each correspond to a screen pixel location, e.g., a destination pixel, and be associated with one or more color attributes. All of the source pixels generated for a particular rasterized primitive may be said to be associated with that rasterized primitive. The pixels determined by rasterization stage 36 to be covered by a primitive may conceptually include pixels that represent the vertices of the primitive, pixels that represent the edges of the primitive, and pixels that represent the interior of the primitive. Pixel processing pipeline 38 is configured to receive a source pixel associated with a rasterized primitive and to perform one or more per-pixel operations on the source pixel. Per-pixel operations that may be performed by pixel processing pipeline 38 include, e.g., alpha testing, texture mapping, color computation, pixel shading, per-pixel lighting, fog processing, blending, pixel ownership testing, source alpha testing, stencil testing, depth testing, scissors testing, and/or stippling operations. In addition, pixel processing pipeline 38 may execute one or more pixel shader programs to perform one or more per-pixel operations. The resulting data produced by pixel processing pipeline 38 may be referred to herein as destination pixel data and stored in output buffer 16. The destination pixel data may be associated with a destination pixel in output buffer 16 that has the same display location as the source pixel that was processed.
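The edge-equation approach to coverage mentioned above can be sketched as evaluating, for each candidate screen pixel location, the sign of three edge functions of a triangle. This is a minimal scalar version for illustration; real rasterizers use incremental, fill-rule-aware variants of the same idea:

```python
def edge(a, b, p):
    """Signed-area edge function: positive when p lies to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered(v0, v1, v2, p):
    """A pixel location p is covered by triangle (v0, v1, v2) if it lies on the
    same (non-negative) side of all three edges (counter-clockwise winding)."""
    return (edge(v0, v1, p) >= 0 and
            edge(v1, v2, p) >= 0 and
            edge(v2, v0, p) >= 0)

tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))  # counter-clockwise triangle
inside = covered(*tri, (1.0, 1.0))   # interior pixel location
outside = covered(*tri, (5.0, 5.0))  # location outside the triangle
```

Generating a source pixel for every location where `covered` is true yields pixels for the vertices, edges, and interior of the primitive, matching the coverage description above.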
The destination pixel data may include data such as color values, destination alpha values, depth values, and the like.

Pixel processing pipeline 38 may include texture engine 39. Texture engine 39 may include both programmable and fixed-function hardware designed to apply a texture to a pixel. Texture engine 39 may include dedicated hardware for performing texture filtering, whereby one or more texel values are multiplied by one or more pixel values and accumulated to produce the final texture-mapped pixel.

FIG. 3 is a flowchart showing an example method of the present disclosure. The method of FIG. 3 may be performed by CPU 6 or GPU 12. FIG. 3 depicts a method of context switching performed by a processing unit (eg, CPU 6 or GPU 12). The processing unit may be configured to receive a context switch triggering event (50). Upon receiving the triggering event, the processing unit may perform one or more processes. In some examples, the context switch triggering event may be triggered by or received from: a scheduler in a multitasking environment (eg, a scheduling processor or a scheduling unit), an interrupt handler for handling one or more interrupts, or a mode controller for controlling transitions between modes, such as when switching from kernel mode to user mode. In such examples, the triggering event may itself be the processing of a scheduled event, an interrupt, a request to switch from one mode to another, or any instruction related to a triggering event that causes the processing unit to prepare for a context switch and eventually perform a context switch. In some examples, context switching may be triggered asynchronously from workload submission. For example, while the processing unit (eg, GPU 12) is acting on something (eg, a task or process), the processing unit may provide an interrupt or similar mechanism to a scheduler (eg, a scheduling processor or scheduling unit).
The scheduler may then preempt or otherwise interrupt the task or process on which the processing unit is acting. In some examples, the processing unit may be configured to pause or otherwise stop execution of the first process before applying one or more signature algorithms to the context information corresponding to the cut-out process (eg, the first process in this example). In other examples, the processing unit may be configured to apply one or more signature algorithms to the context information corresponding to the process before pausing or otherwise halting execution of the cut-out process (eg, the first process in this example).

In response to receiving the context switch triggering event, the processing unit may be configured to prepare for a context switch (52), ultimately causing the processing unit to switch context from the first process (eg, the cut-out process) to a second process (eg, the cut-in process). To do so, the processing unit may be configured to generate one or more signatures corresponding to context information stored in the on-chip memory of the processing unit (54). In some examples, the context information may correspond to the first process (eg, the cut-out process). In some examples, a hardware unit of the processing unit may be configured to generate the one or more signatures. In such examples, the hardware unit of the processing unit may be configured to perform one or more of the functions identified in FIGS. 3, 4, and/or 5. For example, the hardware unit of the processing unit may be configured to perform one or more of the functions associated with blocks 52, 54, 56, 58, 60, 62, 64, 70, and 72.

In some examples, the processing unit (eg, GPU 12) may be configured to generate the one or more signatures by applying a signature algorithm to the context information (54).
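By way of a hedged illustration only (the group names, data, and helper function below are hypothetical and not part of the disclosure), the per-group signature generation of block 54 might be modeled in software as:

```python
import hashlib

def generate_signatures(context_groups):
    """Apply a signature algorithm (MD5 here) to each group of
    context information, yielding one signature per group."""
    return {name: hashlib.md5(blob).hexdigest()
            for name, blob in context_groups.items()}

# Hypothetical grouped context information for a cut-out process.
context = {
    "software_programmed_state": b"\x01\x02\x03",  # control registers, constants
    "hardware_modified_state":   b"\x10\x20",      # status flags, dirty bits
    "hardware_generated_state":  b"\xaa\xbb\xcc",  # general registers, on-chip memory
}

signatures = generate_signatures(context)  # one MD5 hash value per group
```

Applying one algorithm to the context information as a whole would instead collapse this to a single signature, as the disclosure also contemplates.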
The signature algorithm may generate an MD5 hash value, a cyclic redundancy check (CRC) value, a Bloom filter signature value, or another identifier output by a hash, signature, or filter function. For example, in an example involving an MD5 hash, the processing unit may be configured to apply an MD5 hashing algorithm one or more times to the context information corresponding to the process (eg, the cut-out process) to generate one or more corresponding signatures (eg, one or more MD5 hash values in this example). For example, the processing unit may be configured to apply an MD5 hashing algorithm to the context information as a whole, thereby causing a single MD5 hash value to be generated. As another example, the processing unit may be configured to apply an MD5 hashing algorithm to one or more groups and/or one or more types of context information, thereby producing a corresponding MD5 hash value for each group and/or type of context information to which the hashing algorithm is applied. In some examples, the signature may be implemented with a multiple input signature register (MISR). For example, a multiple input signature register (MISR) may generate a signature based on one or more bits. For example, the MISR may generate a signature based on one or more bits of a hardware module, such as the processing unit or a hardware element of the processing unit.

In some examples, the processing unit may be configured to apply a signature algorithm to binary data stored in the on-chip memory of the processing unit. For example, the processing unit may be configured to apply a signature algorithm to data stored in the processing unit's registers, the processing unit's memory (eg, RAM), and/or any other data structures or memory locations of the processing unit.

A single signature may be generated for each application of a signature algorithm.
For example, if two signatures are generated for context information, then two signature algorithms were applied to two different sets of context information. In this example, the two signature algorithms may be the same or different, and the two different sets of context information corresponding to the same process may or may not overlap.

As described herein, context information may be grouped into different types of context information. Groups may be based on the type of context information and/or how the context information is generated. For example, control register information, constant information, and other software-programmed state information may each be an individual group of context information, or may belong to the same group of context information (eg, software-programmed state information). As another example, status flag information, dirty bit information, and other hardware-modified state information may each be an individual group of context information, or may belong to the same group of context information (eg, hardware-modified state information). As another example, general register information, on-chip memory information, and other hardware-generated state information may each be an individual group of context information, or may belong to the same group of context information (eg, hardware-generated state information).

In some examples, the processing unit may be configured to apply the signature algorithm to the context information as a whole. In other examples, the processing unit may be configured to apply a signature algorithm to each type or group of context information rather than to the context information as a whole. In such examples, it may be appreciated that certain types or groups of context information may change more frequently than other types or groups of context information.
The processing unit may be configured to track changes in each type or group of context information by applying a signature algorithm to generate a signature for each type or group of context information. While generating and comparing a signature per type or group raises the computational cost, this increase in computational cost is offset in such instances because the processing unit can more efficiently reduce the number of saves and/or restores across multiple context switches. As used herein, reducing the number of saves and/or restores may also refer to reducing the amount of data being saved and/or the amount of data being restored.

The processing unit may be configured to determine whether any of the generated one or more signatures matches any previously generated signature (56). It should be understood that a previously generated signature is generated using the same method described above with respect to block 54, except at a time prior to the time at which the one or more signatures were generated. In some examples, the previously generated signatures may be referred to as off-chip signatures, to indicate that these signatures are stored in a memory external to the processing unit, such as in external memory 10. In some examples, the on-chip memory of the processing unit may comprise or otherwise be volatile memory, and the memory external to the processing unit may comprise or otherwise be non-volatile memory. In such an example, the currently generated signatures may be referred to as on-chip signatures, to facilitate, for comparison purposes, differentiation between the signatures previously stored in the external memory and the signatures stored in the on-chip memory of the processing unit. For example, using this nomenclature, the processing unit may be configured to determine whether any one of the one or more on-chip signatures matches any off-chip signature.

In other examples, one or more previously generated signatures may be stored in the on-chip memory of the processing unit.
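The comparison at block 56 described above can be sketched in software as follows; this is a hypothetical model (the group names and data are illustrative), not the disclosed hardware implementation:

```python
import hashlib

def find_matches(on_chip_sigs, off_chip_sigs):
    """Block 56 sketch: for each group, report whether the on-chip
    signature equals the previously generated off-chip signature."""
    return {group: sig == off_chip_sigs.get(group)
            for group, sig in on_chip_sigs.items()}

on_chip = {"constants": hashlib.md5(b"changed").hexdigest(),
           "registers": hashlib.md5(b"unchanged").hexdigest()}
off_chip = {"constants": hashlib.md5(b"original").hexdigest(),
            "registers": hashlib.md5(b"unchanged").hexdigest()}

matches = find_matches(on_chip, off_chip)
# matches["registers"] is True (unchanged); matches["constants"] is False
```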
In this regard, while some examples throughout this disclosure refer to comparing on-chip signatures to off-chip signatures, it should be understood that such portions of the present disclosure may, if such specific examples are implemented, instead refer to comparing currently generated on-chip signatures to previously generated on-chip signatures. In such an example, a currently generated on-chip signature is analogous to an on-chip signature, and a previously generated on-chip signature is analogous to an off-chip signature, except that the previously generated signature is actually stored in the on-chip memory of the processing unit rather than in external memory.

For each on-chip signature that matches an off-chip signature, the processing unit is configured not to store the following data in the external memory: each on-chip signature that matches an off-chip signature, and the context information respectively corresponding to each on-chip signature that matches an off-chip signature (58). If an on-chip signature matches an off-chip signature (ie, the two signatures are the same), the processing unit does not save the on-chip signature, or the context information corresponding to the on-chip signature (ie, the context information from which the on-chip signature was derived), to the external memory. This is because the fact that the signatures match indicates that the context information corresponding to the on-chip signature has not changed since the processing unit last stored the corresponding context information in the external memory. By avoiding redundantly storing previously stored information, a processing unit (eg, GPU 12) configured in accordance with an example of the present disclosure can achieve faster context switching by reducing latency, and can also achieve reduced power and energy consumption.
Latency is reduced because generating a signature and performing a signature comparison takes less time than saving the context information corresponding to the matched signature.

It should be understood that block 58 depicts content that is not stored. In some examples, the processing unit may include instructions to this effect. For example, block 58 may be synonymous with skipping or avoiding a save operation. In other examples, where an on-chip signature matches an off-chip signature, block 56 may proceed directly to block 62. In such instances, by proceeding directly to block 62, block 60 is omitted or avoided, causing the on-chip signature and the context information corresponding to the on-chip signature not to be saved, because block 60 is not invoked or processed.

For each on-chip signature that does not match any off-chip signature, the processing unit is configured to store the following data in the external memory: each such on-chip signature, and the context information respectively corresponding to each on-chip signature that does not match an off-chip signature (60). In some examples, each respective on-chip signature is saved in a data structure that associates each respective on-chip signature with the corresponding context information from which each respective on-chip signature was derived (or generated). For example, the data structure may include indices to indicate which memory locations in the external memory correspond to which on-chip signatures. If an on-chip signature does not match any off-chip signature (ie, the on-chip signature is not the same as any off-chip signature), the processing unit saves the on-chip signature and the context information corresponding to the on-chip signature (ie, the context information from which the on-chip signature was derived) to the external memory, because the fact that the on-chip signature does not match any off-chip signature indicates that the context information corresponding to the on-chip signature has changed since the processing unit last stored the corresponding context information in the external memory. By avoiding redundantly storing previously stored information, and instead storing context information when the context information has changed relative to what was previously saved, the present disclosure achieves faster context switching by reducing latency and also achieves reduced power and energy consumption.

The processing unit may be configured to proceed from block 58 and block 60 to restore the context information of the cut-in process from the external memory (eg, external memory 10) (62). After restoring the context information of the cut-in process, the processing unit may be configured to execute the cut-in process (64). In other examples, the processing unit may return from blocks 58 and 60 to block 56 until each on-chip signature has been resolved (eg, until the processing unit has determined whether each on-chip signature matches or does not match an off-chip signature, and/or until blocks 58 and 60 have resolved each case of a match or a non-match). Once the processing unit has resolved each on-chip signature generated at block 56 (or at least the minimum necessary to begin executing the cut-in process), the processing unit may be configured to proceed to execute the cut-in process (64).

FIG. 4 is a flowchart showing an example method of the present disclosure. The method of FIG. 4 may be performed by CPU 6 or GPU 12. FIG. 4 depicts a method of context switching performed by a processing unit (eg, CPU 6 or GPU 12). The processing unit may be configured to receive the context switch triggering event (50) in the same manner as described above with respect to FIG. 3.
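The save decision of blocks 56, 58, and 60 in FIG. 3 can be sketched as follows; this is a hypothetical software model (names and data are illustrative), not the hardware implementation:

```python
import hashlib

def save_context(on_chip_sigs, on_chip_context, external_memory):
    """Blocks 56/58/60 sketch: write to external memory only the
    signatures, and corresponding context information, that do not
    match a previously stored off-chip signature."""
    saved = []
    for group, sig in on_chip_sigs.items():
        if external_memory.get(group, (None, None))[0] == sig:
            continue                                        # block 58: skip redundant save
        external_memory[group] = (sig, on_chip_context[group])  # block 60: store
        saved.append(group)
    return saved

external = {"registers": (hashlib.md5(b"same").hexdigest(), b"same")}
context = {"registers": b"same", "constants": b"new"}
sigs = {g: hashlib.md5(b).hexdigest() for g, b in context.items()}
saved = save_context(sigs, context, external)  # only "constants" is written
```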
In response to receiving the context switch triggering event, the processing unit may be configured to prepare for a context switch (52), ultimately causing the processing unit to switch context from the first process (eg, the cut-out process) to a second process (eg, the cut-in process). To do so, the processing unit may be configured to generate one or more signatures corresponding to context information stored in the on-chip memory of the processing unit (54). In the example of FIG. 4, the processing unit may be configured to generate the one or more signatures in the same manner as described above with respect to FIG. 3. The processing unit may be configured to store the context information corresponding to the cut-out process in an external memory (70). It should be understood that the order of the operations shown in FIGS. 3 to 5 is exemplary and may be different in other examples. For example, prior to generating the one or more signatures corresponding to context information stored in the on-chip memory of the processing unit (54), the processing unit may be configured to store the context information corresponding to the cut-out process in the external memory (70). The processing unit may be configured to determine whether any of the generated one or more signatures matches any previously generated signature (56) in the same manner as described above with respect to FIG. 3.

In the example of FIG. 4, for each on-chip signature that matches an off-chip signature, the processing unit is configured not to restore the following data from the memory external to the on-chip memory of the processing unit: the context information respectively corresponding to each on-chip signature that matches an off-chip signature (72).
If an on-chip signature matches an off-chip signature (ie, the two signatures are the same), the processing unit does not restore, from the external memory, the context information corresponding to the on-chip signature (ie, the context information from which the on-chip signature was derived). This is because the fact that the signatures match indicates that the context information corresponding to the on-chip signature has not changed since the processing unit last stored the corresponding context information in the external memory. For example, matching signatures indicate that any data that would have been restored would be redundant and therefore unnecessary, because the context information that would be overwritten is the same as the context information being restored, as evidenced by the signatures matching each other. By avoiding redundant and unnecessary restoration of data, a processing unit (eg, GPU 12) according to an example of the present disclosure can achieve faster context switching by reducing latency, and can also achieve reduced power and energy consumption. Latency is reduced because generating a signature and performing a signature comparison takes less time than restoring the context information corresponding to the matched signature.

It should be understood that block 72 depicts content that is not restored from the external memory to the on-chip memory of the processing unit. In some examples, the processing unit may include instructions to this effect. For example, block 72 may be synonymous with skipping or avoiding a restore operation. In other examples, where an on-chip signature matches an off-chip signature, block 56 may proceed directly to block 64.
In such an example, by proceeding directly to block 64, block 72 is omitted or avoided, causing the context information corresponding to the on-chip signature not to be restored, because block 72 is not invoked or processed.

For each on-chip signature that does not match any off-chip signature, the processing unit is configured to restore the context information of the cut-in process from the external memory (eg, external memory 10) (62). By avoiding redundant, unnecessary restoration of data, the present disclosure can achieve faster context switching by reducing latency and can also achieve reduced power and energy consumption. The processing unit may be configured to proceed from block 72 and block 62 to execute the cut-in process (64). In other examples, the processing unit may return from blocks 72 and 62 to block 56 until each on-chip signature has been resolved (eg, until the processing unit has determined whether each on-chip signature matches or does not match an off-chip signature, and/or until blocks 72 and 62 have resolved each case of a match or a non-match). Once the processing unit has resolved each on-chip signature generated at block 56 (or at least the minimum necessary to begin executing the cut-in process), the processing unit may be configured to proceed to execute the cut-in process (64).

FIG. 5 is a flowchart showing an example method of the present disclosure. The method of FIG. 3 and the method of FIG. 4 may be combined in various ways, and FIG. 5 shows one example of such a combination. The method of FIG. 5 may be performed by CPU 6 or GPU 12. FIG. 5 depicts a method of context switching performed by a processing unit (eg, CPU 6 or GPU 12). The processing unit may be configured to receive the context switch triggering event (50) in the same manner as described above with respect to FIGS. 3 and 4.
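Analogously, the restore decision of blocks 56, 72, and 62 in FIG. 4 might be sketched as follows (a hypothetical software model; the names and data are illustrative):

```python
import hashlib

def restore_context(on_chip_sigs, on_chip_context, external_memory):
    """Blocks 56/72/62 sketch: restore from external memory only the
    context information whose saved signature differs from the
    signature of what is already resident on chip."""
    restored = []
    for group, (sig, blob) in external_memory.items():
        if on_chip_sigs.get(group) == sig:
            continue                      # block 72: skip redundant restore
        on_chip_context[group] = blob     # block 62: restore from external memory
        restored.append(group)
    return restored

external = {"registers": (hashlib.md5(b"same").hexdigest(), b"same"),
            "constants": (hashlib.md5(b"saved").hexdigest(), b"saved")}
on_chip = {"registers": b"same", "constants": b"stale"}
sigs = {g: hashlib.md5(b).hexdigest() for g, b in on_chip.items()}
restored = restore_context(sigs, on_chip, external)  # only "constants" is restored
```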
In response to receiving the context switch triggering event, the processing unit may be configured to prepare for a context switch (52), ultimately causing the processing unit to switch context from the first process (eg, the cut-out process) to a second process (eg, the cut-in process). To do so, the processing unit may be configured to generate one or more signatures corresponding to context information stored in the on-chip memory of the processing unit (54). The processing unit may be configured to generate the one or more signatures in the same manner as described above with respect to FIG. 3.

The processing unit may be configured to determine whether any of the generated one or more signatures matches any previously generated signature (56) in the same manner as described above with respect to FIGS. 3 and 4.

For each on-chip signature that matches an off-chip signature, the processing unit is configured, in the same manner as described with respect to FIG. 3, not to store the following data in the external memory: each on-chip signature that matches an off-chip signature, and the context information corresponding to each on-chip signature that matches an off-chip signature (58). In some examples, the processing unit may be configured to return from blocks 58 and 60 to block 56 until each on-chip signature has been resolved for blocks 58 and 60. Once the processing unit has resolved each on-chip signature at blocks 58 and 60, the processing unit may be configured to proceed to blocks 72 and 62. In other examples, the processing unit may be configured to address all instances of non-matching signatures prior to addressing any and all instances of matching signatures, to ensure that any data that needs to be stored to the external memory is not overwritten by restored context information.
In other words, prior to restoring any context information from the memory external to the on-chip memory of the processing unit, the processing unit described herein may be configured to save any context information from the on-chip memory of the processing unit to the external memory. In such an example, the processing unit may be configured to return from block 72 and block 62 to block 56 until each on-chip signature has been resolved for blocks 72 and 62. Once the processing unit has resolved each on-chip signature at blocks 72 and 62, the processing unit may be configured to execute the cut-in process (64).

With further reference to FIG. 5, for each on-chip signature that matches an off-chip signature, the processing unit is configured, in the same manner as described with respect to FIG. 4, not to restore from the memory external to the on-chip memory of the processing unit the context information corresponding to each on-chip signature that matches an off-chip signature (72).

For each on-chip signature that does not match any off-chip signature, in FIG. 5, the processing unit is configured, in the same manner as described with respect to FIG. 3, to store the following data in the external memory: each such on-chip signature, and the context information respectively corresponding to each on-chip signature that does not match an off-chip signature (60). For each on-chip signature that does not match any off-chip signature, the processing unit is configured to restore the context information of the cut-in process from the external memory in the same manner as described with respect to FIG. 4 (62). The processing unit may be configured to proceed from block 72 and block 62 to execute the cut-in process (64), as shown in FIG. 5. In other examples, the processing unit may return from blocks 72 and 62 to block 56 until each on-chip signature has been resolved (eg, until the processing unit has determined whether each on-chip signature matches or does not match an off-chip signature).
Once the processing unit has resolved each on-chip signature generated at block 56, the processing unit may be configured to execute the cut-in process (64).

FIG. 6 is a flowchart showing an example method of the present disclosure. The method of FIG. 6 may be performed by a processing unit such as CPU 6 or GPU 12. FIG. 6 depicts a method of context switching performed by a processing unit (eg, CPU 6 or GPU 12). The processing unit may be configured to generate one or more signatures of current context information stored in the on-chip memory of the processing unit (100). The processing unit may be configured to determine whether the one or more signatures match any previously generated signature of previous context information stored in one or more memories accessible by the processing unit (102). Any signature generated for context information may correspond to the context information for which the signature was generated. For example, if signature A is generated for context information A and signature B was previously generated for context information B, then signatures A and B correspond to context information A and B, respectively, in this example. In some examples, the one or more memories accessible by the processing unit may include at least one of: the on-chip memory of the processing unit, a memory external to the processing unit (eg, system memory 10), the on-chip memory of GPU 12 in the case where the processing unit is CPU 6, or the on-chip memory of CPU 6 in the case where the processing unit is GPU 12. In some examples, the one or more memories accessible by the processing unit may include only memory external to the processing unit. In other examples, the one or more memories accessible by the processing unit may include only memory external to the processing unit where the memory external to the processing unit is a system memory.
In other examples, the one or more memories accessible by the processing unit may not include the on-chip memory of the graphics processing unit.

In some examples, the current context information may correspond to a preempted process (eg, a cut-out process). For example, the current context information may correspond to any context information corresponding to a process executing on the processing unit. As another example, the current context information may correspond to any context information corresponding to a process that is to be suspended and swapped out for a second process (eg, a cut-in process). In some examples, the previous context information may correspond to one or more previously preempted processes (eg, one or more processes that were previously cut out). For example, the previous context information may correspond to any context information corresponding to any process that previously underwent a context switch.

The processing unit may be configured to store, to at least one of the one or more memories, any signature of the one or more signatures that is determined not to match any previously generated signature stored in at least one of the one or more memories (104). The processing unit may be configured to store, to at least one of the one or more memories, the current context information corresponding to any signature of the one or more signatures that is respectively determined not to match any previously generated signature stored in at least one of the one or more memories (106).

In the example of FIG. 6, according to some examples, the one or more memories accessible by the processing unit include the on-chip memory of the processing unit. In other examples, the one or more memories accessible by the processing unit include memory external to the graphics processing unit. In some examples, the memory external to the processing unit is system memory.
In other examples, the one or more memories accessible by the processing unit contain only the on-chip memory of the processing unit. In other examples, the one or more memories accessible by the processing unit contain only memory external to the graphics processing unit.

In the example of FIG. 6, according to some examples, the processing unit may be configured not to store, to any memory, any signature of the one or more signatures determined to match any previously generated signature stored in at least one of the one or more memories. The processing unit may be configured not to store, to any memory, any current context information corresponding to one or more signatures that are respectively determined to match any previously generated signature stored in at least one of the one or more memories. The processing unit may be configured not to restore previous context information from memory external to the on-chip memory, the previous context information respectively corresponding to any signature of the one or more signatures determined to match any previously generated signature stored in at least one of the one or more memories. In some examples, the processing unit may be configured to restore previous context information from memory external to the on-chip memory, the previous context information corresponding to any signature of the one or more signatures determined not to match any previously generated signature stored in at least one of the one or more memories.

In the example of FIG. 6, the processing unit may be configured to generate the one or more signatures of the current context information by being configured to apply one or more signature algorithms to one or more of the following: the current context information, one or more groups of the current context information, and/or one or more types of the current context information.
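The generate-compare-store flow of FIG. 6 (blocks 100 through 106) might be sketched as a single pass; this is a hypothetical model, and the memory layout and names below are illustrative assumptions:

```python
import hashlib

def context_switch_store(current_context, memory):
    """FIG. 6 sketch: generate a signature per group of current context
    information (100), compare against previously generated signatures
    (102), and store only mismatching signatures together with their
    context information (104, 106)."""
    stored = []
    for group, blob in current_context.items():
        sig = hashlib.md5(blob).hexdigest()            # block 100
        if memory.get(group, (None, None))[0] == sig:  # block 102
            continue                                   # matching: nothing stored
        memory[group] = (sig, blob)                    # blocks 104 and 106
        stored.append(group)
    return stored

memory = {}
context_switch_store({"state": b"v1"}, memory)           # first switch: stored
first = context_switch_store({"state": b"v1"}, memory)   # unchanged: nothing stored
second = context_switch_store({"state": b"v2"}, memory)  # changed: stored again
```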
In some examples, the processing unit may be configured to determine whether the one or more signatures match any previously generated signature by being configured to determine that each of the one or more signatures matches one of the previously generated signatures, or that each of the one or more signatures does not match any of the previously generated signatures. In other examples, the graphics processing unit may be configured to determine whether the one or more signatures match any previously generated signature by being configured to determine that at least one of the one or more signatures matches one of the previously generated signatures, and that at least one of the one or more signatures does not match any of the previously generated signatures.

FIG. 7 is a flowchart showing an example method of the present disclosure. The method of FIG. 7 may be performed by CPU 6 or GPU 12. FIG. 7 depicts a method of context switching performed by a processing unit (eg, CPU 6 or GPU 12). The processing unit may be configured to perform, at a first time, a context switch away from a first process executing on the processing unit (150). The processing unit may be configured to generate a first signature based on the context information associated with the first process at the first time (152). The processing unit may be configured to store the context information at the first time, together with the first signature, in a memory external to the processing unit (154). The processing unit may be configured to perform, at a second time, a context switch away from the first process executing on the processing unit (156). The processing unit may be configured to generate a second signature based on the context information associated with the first process at the second time (158). The processing unit may be configured to compare the first signature with the second signature (160).
If the first signature is different from the second signature, the processing unit may be configured to store the context information and the second signature in memory external to the processing unit for the second time (162). If the first signature matches the second signature, the processing unit may be configured to not store the context information in memory external to the processing unit for the second time (164).FIG. 8 is a flowchart showing an example method of the present disclosure. The method of FIG. 8 may be performed by the CPU 6 or the GPU 12 . FIG. 8 depicts a method of context switching by a processing unit (eg, CPU 6 or GPU 12). The processing unit may be configured to perform context switching (170) at a first time from a first process performed on the processing unit. The processing unit may be configured to generate a first signature based on context information for a first time associated with the first process (172). The processing unit may be configured to store the context information and the first signature in a memory external to the process for the first time (174). The processing unit may be configured to switch to a first process (176) for execution on the processing unit at the second time context. The processing unit may be configured to generate a second signature based on context information stored at a second time in the on-chip memory of the processing unit prior to performing the first procedure (178). The processing unit may be configured to compare the first signature with the second signature (180). If the first signature is different from the second signature, the processing unit may be configured to restore the context information stored in the external memory (182). If the first signature matches the second signature, the processing unit may be configured not to restore the context information stored in the external memory (184).FIG. 
9 is a block diagram illustrating one example of a processing unit in accordance with one or more techniques described herein. In the example of FIG. 9, hardware unit 200 is communicatively coupled to external memory 202 (e.g., off-chip memory). In some examples, hardware unit 200 may be an entire processing unit or a portion thereof (e.g., a pipeline stage). For example, hardware unit 200 may be GPU 12, or hardware unit 200 may depict a component of GPU 12. In some examples, external memory 202 may be any memory external to hardware unit 200. For example, external memory 202 may be system memory 10 as described herein. External memory 202 may store any context information received from hardware unit 200, or from any other hardware unit, along with any signatures associated with the context information. External memory 202 and on-chip memory 210 may utilize any data structure to associate any signature with any context information. The context information may or may not be classified into one or more groups of context information and/or one or more types of context information. In the illustrated example, "group/type n" refers to the nth group and/or type. It should be understood that, while labeled "group/type" in the example shown in FIG. 9, groups and types are different; this nomenclature is intended to convey that context information can be classified into one or more groups and/or one or more types.

In the example shown, hardware unit 200 is also communicatively coupled to input module 204 and output module 206. In some examples, input module 204 may be any software or firmware executing on hardware, or any hardware, that may be configured to convert API state (e.g., what is to be rendered and how) into a format that hardware unit 200 may be configured to process or otherwise understand.
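The conditional save of FIG. 7 and the conditional restore of FIG. 8 can be sketched as follows. This is an illustrative Python model only, not part of the disclosure; the SHA-256 signature algorithm and all class and method names are assumptions chosen for the sketch:

```python
import hashlib


def signature(context_info: bytes) -> bytes:
    """Generate a signature over context information (SHA-256 chosen only for illustration)."""
    return hashlib.sha256(context_info).digest()


class ContextSwitcher:
    """Toy model of the signature-compare logic of FIGS. 7 and 8."""

    def __init__(self):
        self.external_memory = {}  # process id -> (signature, context information)
        self.saves = 0             # count of stores to external memory
        self.restores = 0          # count of restores from external memory

    def switch_out(self, pid: int, context_info: bytes) -> None:
        """Context switch away from a process (FIG. 7)."""
        sig = signature(context_info)
        saved = self.external_memory.get(pid)
        if saved is None or saved[0] != sig:
            # Signatures differ: store the context information and the new signature (162).
            self.external_memory[pid] = (sig, context_info)
            self.saves += 1
        # Signatures match: skip the store (164).

    def switch_in(self, pid: int, on_chip_context: bytes) -> bytes:
        """Context switch back to a process (FIG. 8)."""
        sig = signature(on_chip_context)
        saved_sig, saved_context = self.external_memory[pid]
        if saved_sig != sig:
            # Signatures differ: restore the context information from external memory (182).
            self.restores += 1
            return saved_context
        # Signatures match: the on-chip context is already current; skip the restore (184).
        return on_chip_context
```

Under this model, two consecutive switches away from a process whose context information is unchanged perform only one store to external memory, which is the kind of memory-traffic saving the signature comparison is meant to provide.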
In some examples, output module 206 may be any software or firmware executing on hardware, or any hardware, that may be configured to pass context information and/or data from a current stage of a processing pipeline (e.g., a graphics pipeline) to a subsequent stage of the processing pipeline.

Hardware unit 200 may include one or more functional units 208. A functional unit may be anything within hardware unit 200, or within any pipeline stage of hardware unit 200, that is configured to process data in the manner specified by the context information. For example, an arithmetic logic unit (ALU) may be a functional unit that may add two integers based on precision requirements specified in the context information. As another example, a functional unit may receive context information that may be viewed as rules specifying how the functional unit should process data. As another example, a functional unit may receive data as input, process the data according to the context information, and output the processed data to hardware unit 200 or to the next stage in the pipeline. One or more functional units 208 may or may not interact with any generated signatures.

In the illustrated example, hardware unit 200 may include on-chip memory 210, which may store any context information. Hardware unit 200 may include a signature algorithm unit 212 that may be configured to apply one or more signature algorithms to any context information to generate one or more signatures. Hardware unit 200 may include a save/restore unit 214.

In the example shown, the context information is presented as three exemplary groups or types of context information. In other examples, the context information in on-chip memory 210 may be classified into one or more groups and/or one or more types of context information. It should be understood that, while labeled "group/type" in the example shown in FIG. 9, groups and types are different.
In fact, this nomenclature is intended to convey that context information can be classified into one or more groups and/or one or more types. In other examples, the context information may not be grouped, or may not be referred to as different types. For example, rather than operating on one or more groups and/or one or more types of context information, signature algorithm unit 212 may be configured to apply one or more signature algorithms to the context information as a whole. After applying a signature algorithm to the context information, the signature algorithm unit may store the generated signature in on-chip memory 210 of hardware unit 200. In other examples, signature algorithm unit 212 may communicate any generated signature directly to save/restore unit 214, in addition to or instead of storing any generated signature in on-chip memory 210.

In some examples, hardware unit 200 may be configured to generate one or more signatures for a control register when the control register is programmed. For example, signature algorithm unit 212 may be configured to generate one or more signatures for the control register when the control register is programmed.

In some examples, save/restore unit 214 may be any software or firmware executing on hardware, or any hardware. In some examples, save/restore unit 214 may be configured to compare whether any signature generated by signature algorithm unit 212 matches any signature stored in on-chip memory 210 and/or external memory 202 (e.g., any off-chip memory). As described throughout the present disclosure, whether save/restore unit 214 stores (or does not store) the context information to external memory 202, and/or restores (or does not restore) the context information from external memory 202 to on-chip memory 210, may be determined depending on whether such a match exists.

According to the present disclosure, the term "or" may be interpreted as "and/or" where the context does not indicate otherwise. In addition, while phrases such as "one or more" or "at least one" may have been used for some features disclosed herein but not for others, the features for which such language was not used may be interpreted to imply such a meaning where the context does not indicate otherwise.

In one or more examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. For example, the processing unit may be configured to perform any of the functions described herein. As another example, although the term "processing unit" has been used throughout this disclosure, it is to be understood that such a processing unit may be implemented in hardware, software, firmware, or any combination thereof. If any function, processing unit, technique, or other module described herein is implemented in software, the function, processing unit, technique, or other module may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media may include computer data storage media or communication media, including any medium that facilitates transfer of a computer program from one place to another. In this manner, a computer-readable medium may generally correspond to (1) a tangible computer-readable storage medium that is non-transitory, or (2) a communication medium such as a signal or a carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure.
By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. A computer program product may include a computer-readable medium.

The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor" or "processing unit" as used herein may refer to any of the foregoing structures or any other structure suitable for implementing the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for context switching and/or parallel processing. Moreover, the techniques could be fully implemented in one or more circuits or logic elements.

The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units.
Rather, as described above, various units may be combined in a codec hardware unit, or provided by a collection of interoperable hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Various examples have been described. These and other examples are within the scope of the following claims.
The invention relates to multi-level cell data load optimization. Techniques are disclosed, including a method that can include the steps of: entering a first mode of operation of an apparatus including a memory device; receiving first information indicative of a subsequent download of second information at the memory device, the memory device including a first group of cells configured as multi-level cell (MLC) memory; in response to receipt of the first information, converting a portion of the first group of cells from configuration as MLC memory to configuration as single-level cell (SLC) memory; receiving and storing the second information at the memory device; and upon exiting the first mode of operation, reconfiguring at least a portion of the SLC memory to MLC memory while maintaining storage of the second information within the memory device.
1. A method comprising:
entering a first operating mode of a device containing a memory device;
receiving first information indicating a subsequent download of second information at the memory device, the memory device including a first group of memory cells configured as multi-level cell (MLC) memory;
in response to receiving the first information, converting a portion of the memory cells of the first group from being configured as MLC memory to being configured as single-level cell (SLC) memory;
receiving and storing the second information at the memory device; and
upon exiting the first operating mode, reconfiguring at least a portion of the SLC memory as MLC memory while maintaining the second information stored in the memory device.
2. The method of claim 1, comprising heating the memory device after receiving and storing the second information and before reconfiguring the portion of the SLC memory.
3. The method of claim 2, wherein the heating comprises heating having thermal-profile characteristics of reflow soldering of the memory device, or of a component of the device containing the memory device.
4. The method of claim 1, wherein receiving the first information comprises receiving an estimate of a size of the second information.
5. The method of claim 4, wherein the converting comprises determining a maximum amount of SLC-configurable memory available for the storage of the second information based on the estimated size of the second information and a capacity of the memory device.
6. The method of claim 4, wherein receiving and storing the second information comprises:
determining that the estimated size of the second information is smaller than an actual size of the second information;
during the first operating mode, reconfiguring a portion of the SLC memory as MLC memory while maintaining any second information stored on the portion of the SLC memory; and
receiving additional second information into the MLC memory.
7. The method of claim 1, wherein the first operating mode is a manufacturing operating mode.
8. The method of claim 1, wherein the second information includes an operating system.
9. The method of claim 1, wherein the second information includes a car navigation, communication, and entertainment operating system.
10. The method of claim 9, wherein the second information includes a car navigation, communication, or entertainment application.
11. The method of claim 1, wherein the second information includes a car diagnostic operating system.
12. The method of claim 1, wherein the MLC memory comprises three-level cell (TLC) memory.
13. The method of claim 1, wherein the MLC memory comprises four-level cell (QLC) memory.
14. A memory circuit comprising:
memory cells configured to provide multi-level cell (MLC) storage; and
a controller operatively coupled to the memory cells, the controller configured to perform operations comprising:
receiving an indication of a production mode of a device containing the memory circuit;
receiving an estimated size of a subsequent download during the production mode;
in response to the estimated size, configuring at least a portion of the memory cells from operation as MLC storage to operation as single-level cell (SLC) storage;
receiving the subsequent download to the memory cells; and
upon receiving an indication that the device is exiting the production mode, reconfiguring the at least a portion of the memory cells from operation as SLC storage to operation as MLC storage while maintaining the information contained in the subsequent download within the memory cells.
15. The memory circuit of claim 14, wherein the memory cells are configured to provide three-level cell (TLC) storage.
16. The memory circuit of claim 14, wherein the memory cells are configured to provide four-level cell (QLC) storage.
17. A machine-readable medium comprising instructions that, when executed by a machine, cause the machine to perform operations comprising:
receiving a first indication of a production mode of a device including a memory device;
receiving first information indicating a subsequent download of second information at the memory device, the memory device including a first group of cells configured to operate as multi-level cell (MLC) memory;
in response to receiving the first information, converting a portion of the cells of the first group from operation as MLC memory to operation as single-level cell (SLC) memory;
receiving and storing the second information at the memory device; and
upon exiting the production mode, reconfiguring at least a portion of the SLC memory to operate as MLC memory while maintaining the second information stored in the memory device.
18. The machine-readable medium of claim 17, wherein the operations further include:
determining that the received second information exceeds a size indication received with the first information; and
in synchronization with receiving the second information, reconfiguring some of the SLC memory to operate as MLC memory to accommodate storage of the portion of the second information that exceeds the size indication.
19. The machine-readable medium of claim 18, wherein reconfiguring at least the portion of the SLC memory comprises reconfiguring at least the portion of the SLC memory to operate as three-level cell (TLC) memory.
Multi-level Cell Data Loading Optimization

Technical Field

Embodiments of the present disclosure relate generally to memory systems and, more particularly, to optimizing data loading in multi-level cell (MLC) memory.

Background

A memory system may be a storage system, such as a solid-state drive (SSD), and may include one or more memory components that store data. For example, a memory system may include memory devices such as non-volatile memory devices and volatile memory devices. In general, a host system may use a memory system to store data at the memory devices of the memory system and to retrieve data stored at the memory system.

Summary of the Invention

In one aspect, the present disclosure relates to a method comprising: entering a first operating mode of a device including a memory device; receiving first information indicating a subsequent download of second information at the memory device, the memory device including a first group of memory cells configured as multi-level cell (MLC) memory; in response to receiving the first information, converting a portion of the first group of memory cells from being configured as MLC memory to being configured as single-level cell (SLC) memory; receiving and storing the second information at the memory device; and upon exiting the first operating mode, reconfiguring at least a portion of the SLC memory as MLC memory while maintaining the second information stored in the memory device.

In another aspect, the present disclosure relates to a memory circuit including: memory cells configured to provide multi-level cell (MLC) storage; and a controller operatively coupled to the memory cells, the controller configured to perform operations including: receiving an indication of a production mode of a device including the memory circuit; receiving, during the production mode, an estimated size of a subsequent download; in response to the estimated size, configuring at least a portion of the memory cells from operation as MLC storage to operation as single-level cell (SLC) storage; receiving the subsequent download to the memory cells; and upon receiving an indication that the device is exiting the production mode, reconfiguring the at least a portion of the memory cells from operation as SLC storage to operation as MLC storage while maintaining the information contained in the subsequent download within the memory cells.

In yet another aspect, the present disclosure relates to a machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations including: receiving a first indication of a production mode of a device including a memory device; receiving first information indicating a subsequent download of second information at the memory device, the memory device including a first group of cells configured to operate as multi-level cell (MLC) memory; in response to receiving the first information, converting a portion of the cells of the first group from operation as MLC memory to operation as single-level cell (SLC) memory; receiving and storing the second information at the memory device; and upon exiting the production mode, reconfiguring at least a portion of the SLC memory to operate as MLC memory while maintaining the second information stored in the memory device.

Brief Description of the Drawings

The present disclosure will be more fully understood from the detailed description provided below and the accompanying drawings of various embodiments of the present disclosure.

FIG. 1 illustrates an example computing environment including a memory system according to some examples of the present disclosure.

FIG.
2 generally illustrates a flowchart of an example method 200 of receiving a download at a memory system during a production phase and optimizing the download to save valuable production time and resources.

FIG. 3 illustrates faster download speeds of a memory system according to the present subject matter.

FIG. 4 illustrates an example machine of a computer system 400 within which a set of instructions may be executed to cause the machine to perform any one or more of the methods discussed herein.

Detailed Description

Aspects of the present disclosure are directed to time-optimized downloads of information to multi-level cell (MLC) memory. In some examples, information (including instructions) designed to operate an electronic device or computing environment in the field can be downloaded at the manufacturer. Such instructions may be part of an operating system, an application configured to run on the operating system, data for use with the application or operating system, or a combination thereof. Such download operations consume time and resources during the manufacturing stage of the electronic device and add to the cost of the computing system. Thus, each additional download of information previously downloaded to the computing environment at the manufacturer can reduce the margin on the computing device. The computing environment can be tested, reworked, and retested during production and after the first download. Keeping the information downloaded the first time throughout the production phase eliminates costly download events and maintains a higher margin on the computing device.

In some examples, the memory system of the electronic device may include multi-level cell (MLC) technology, such as two-level cells, three-level cells (TLC), or four-level cells (QLC).
Compared to earlier versions of electronic devices, or to competitor devices using, for example, single-level cell (SLC) technology, such technologies allow for increased memory density without increasing the size of the electronic device. However, compared with memory configured as SLC, downloading a large block of information, such as a data image, from a host to an electronic device, and more specifically to memory of the electronic device configured as MLC, may take a long time.

The present inventors have recognized techniques for saving time and resources when downloading a large amount of information from a host to an electronic device containing MLC memory. In addition to saving time, especially during the production phase, the techniques also allow data to be stored in the SLC-configured portion of the MLC-capable memory during subsequent heating of the electronic device, for example for reflow soldering purposes. In some examples, MLC memory configured for SLC operation may provide more robust performance during reflow heating than MLC operation. In some instances, where a particular download is larger than expected, the SLC-configured memory may be selectively reconfigured to operate as MLC memory.

FIG. 1 illustrates an example computing environment 100 including a memory system 110 according to some examples of the present disclosure. The memory system 110 may include media, such as memory devices 112A to 112N. The memory devices 112A to 112N may be volatile memory devices, non-volatile memory devices, or a combination of these. In some embodiments, the memory system is a storage system. An example of a storage system is an SSD. In some embodiments, the memory system 110 is a hybrid memory/storage system. In general, the computing environment 100 may include a host system 120 that uses the memory system 110.
In some implementations, the host system 120 can write data to and read data from the memory system 110.

The host system 120 may be a computing device such as a desktop computer, a portable computer, a web server, a mobile device, or a similar computing device that includes a memory and a processing device. The host system 120 or the memory system 110 may be included in a variety of products, such as IoT devices (e.g., a refrigerator or other appliance, sensor, motor or actuator, mobile communication device, automobile, drone, etc.), to support processing, communication, or control of the product. The host system 120 may include a processor, a memory card reader, or one or more other electronic devices external to the memory system 110. The host system 120 may include or be coupled to the memory system 110 so that the host system 120 may read data from or write data to the memory system 110. The host system 120 may be coupled to the memory system 110 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect or direct communication connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), and an eMMC™ interface. The physical host interface can be used to transfer data between the host system 120 and the memory system 110. When the memory system 110 is coupled with the host system 120 through a PCIe interface, the host system 120 may further use a Non-Volatile Memory Express (NVMe) interface to access the memory devices 112A to 112N.
The physical host interface may provide an interface for passing control, address, data, and other signals between the memory system 110 and the host system 120.

The memory system 110 is shown by way of example as including a memory system controller 115 and media, such as memory devices 112A to 112N. The memory devices 112A to 112N may include any combination of different types of non-volatile memory devices and/or volatile memory devices. An example of a non-volatile memory device is a negative-and (NAND) type flash memory. Each of the memory devices 112A to 112N may include one or more arrays of memory cells, such as single-level cells (SLCs) or multi-level cells (MLCs) (e.g., three-level cells (TLCs) or four-level cells (QLCs)). In some implementations, a particular memory device may include both an SLC portion and an MLC portion of memory cells (e.g., memory cells with different bit capacities per cell). Each of the memory cells may store bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory devices such as NAND type flash memory are described, the memory devices 112A to 112N may be based on any other type of memory, such as volatile memory. In some embodiments, the memory devices 112A to 112N may be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magnetic random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and cross-point arrays of non-volatile memory cells. A cross-point array of non-volatile memory, in conjunction with a stackable, cross-gridded data access array, can perform bit storage based on a change of bulk resistance.
In addition, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, in which a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. In addition, the memory cells of the memory devices 112A to 112N may be grouped into a number of devices, planes, sub-blocks, blocks, or pages, which may refer to units of a memory device used to store data.

In an example, the memory system 110 may be a discrete memory and/or storage device component of the host system 120. In other examples, the memory system 110 may be part of an integrated circuit (e.g., a system on a chip (SOC), etc.) that is stacked or otherwise included with one or more other components of the host system 120.

Each of the media devices 112A to 112N may include a media controller (e.g., media controllers 130A to 130N) that manages the memory cells of the memory devices 112A to 112N.

The memory system 110 may include a memory system controller 115 that may communicate with the memory devices 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory devices 112A to 112N, and other such operations. The memory system controller 115 may include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The memory system controller 115 may be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor. The memory system controller 115 may include a processor (processing device) 117 configured to execute instructions stored in a local memory 119.
In the illustrated example, the local memory 119 of the memory system controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory system 110, including handling communication between the memory system 110 and the host system 120. In some embodiments, the local memory 119 may include memory registers that store, for example, memory pointers, fetched data, and the like. The local memory 119 may also include read-only memory (ROM) for storing microcode. Although the example memory system 110 in FIG. 1 has been illustrated as including the memory system controller 115, in another embodiment of the present disclosure, the memory system 110 may not include the memory system controller 115 and may instead rely on external control (e.g., provided by an external host, or by a processor or controller separate from the memory system).

In general, the memory system controller 115 may receive commands or operations from the host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 112A to 112N. The memory system controller 115 may be responsible for other operations, such as wear leveling operations, garbage collection operations, reclamation, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, block retirement, and address translation between a logical block address and a physical block address associated with the memory devices 112A to 112N. The memory system controller 115 may also include host interface circuitry that communicates with the host system 120 via the physical host interface.
The host interface circuitry may convert commands received from the host system into command instructions to access the memory devices 112A to 112N, and may convert responses associated with the memory devices 112A to 112N into information for the host system 120.

In an example, the memory system may include MLC-configured memory, and the controller may receive an indication of a subsequent download. In some examples, the indication may indicate that the computing environment 100 is in a production phase. Such an indication may trigger the memory system controller to allocate some of the MLC-configured memory as SLC-configured memory. In general, a production-mode status may indicate that the computing environment is more likely to encounter extreme environmental conditions as part of the production process. Such extreme conditions may include heating for reflow soldering, electrical testing, electromagnetic testing, electrostatic testing, or a combination thereof. During the production phase, downloading information to a computing system consumes time and resources. The information downloaded to the computing environment for storage on the memory system during production can include an operating system and applications that can be used for testing before and after the computing environment experiences the extreme conditions, and that are likely to be the final operating system and applications of the operating environment. Thus, in order to reduce the chance of losing downloaded information when the computing environment experiences the extreme conditions, the memory controller may store the subsequently downloaded information in the memory configuration that provides the best performance under extreme conditions. In some instances, SLC-configured memory is more reliable than MLC-configured memory for retaining information when the computing environment experiences extreme conditions.
In addition, when storing information, such as when receiving and storing a subsequent download, SLC-configured memory may be faster than MLC-configured memory. In some instances, when the computing environment is in the production phase, the indication of production status, or a separate indication, may include an estimate of the size of the subsequent production download. Based on the amount of memory available when configured as MLC memory, the amount of memory available when configured as SLC memory, and the estimated size of the download, the memory controller may allocate or reconfigure a portion of the MLC memory to operate as SLC memory. For example, if the size of the information to be downloaded is estimated to be smaller than the potentially available SLC-configured memory, the memory controller can configure enough MLC memory as SLC memory to save the subsequently downloaded information entirely in SLC-configured memory. In some instances, if the size of the subsequent download is greater than the total amount of potentially available SLC-configured memory, the memory controller may determine an optimal combination of MLC-configured and SLC-configured memory that provides the maximum transfer throughput during the download while storing as much of the subsequent download as possible in SLC-configured memory. Such a determination provides the fastest download of the information while keeping a large amount of the information in robust SLC-configured memory. In some instances, the actual size of the downloaded information may be larger than the estimated size. As information exceeding the estimated size is downloaded, the memory controller may reconfigure SLC-configured memory back to MLC memory to provide storage capacity for the amount of downloaded information that exceeds the estimate. FIG. 
2 generally illustrates a flowchart of an example method 200 of receiving a download at a memory system during a production phase and optimizing the download to save valuable production time and resources. At 201, the memory system may receive an indication, or first information, about a subsequent production-mode download. In some examples, the first information may include an estimate of the size of the download. At 203, in response to the first information, the memory system may configure a portion of the MLC memory as SLC memory to allow faster receipt of at least a portion of the subsequent download. The amount of SLC allocated may depend on a number of factors, including but not limited to the size of the download, the available MLC memory, a maximum fill percentage, and so on. At 205, the memory system may receive the download. In some examples, during the download, the download information may first be received and stored in the SLC-configured memory. After the SLC-configured memory is filled, the download information can then be received into the MLC memory as needed. In some cases, the size estimate received at the memory system may be inaccurate. If the estimate is low, the memory system may have several options to accommodate the additional download information. In some examples, the maximum fill percentage may be less than 100%, and additional download information may be received and saved to the remaining open memory above the maximum fill percentage. In some examples, the download information stored in the SLC-configured memory may be buffered. At 207, the SLC-configured memory may be reconfigured as MLC memory, and the buffered and additional download information may be saved to the newly configured MLC memory. 
In some examples, a combination of the above options for handling additional download information may be employed. At 209, the system or computing environment can continue production after the download. In some instances, the computing system may experience extreme environmental conditions during subsequent production processes. Such processes may include, but are not limited to, reflow soldering. As discussed above, downloading large amounts of information from a host to a computing environment during production requires dedicated time and resources, and reloading the information is undesirable if it can be avoided. For this reason, the production download discussed above attempts to maintain a large amount of the download information in SLC-configured memory, which generally retains information during extreme environmental conditions better than MLC-configured memory. Therefore, for a production download of, for example, an operating system image, as much of the downloaded production information as possible is stored in SLC-configured memory. After the production mode ends, or during a transition away from the production mode, at 211, the SLC-configured memory may be converted back to MLC-configured memory to provide the computing environment with its specified memory capacity, while the downloaded data is retained in the memory system. FIG. 3 illustrates the faster download speed of a memory system according to the present subject matter compared to a download to memory configured only as MLC. The graph shows a first curve 301 of the download speed of an example memory system using SLC-configured memory for the download. Such downloads may include, for example, system images during production. 
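The flow of example method 200 described above (steps 201 through 211) can be modeled as a small state machine. The class below is a hypothetical sketch: the class and method names, the capacity accounting, and the two-bits-per-cell assumption are illustrative and do not describe an actual controller.

```python
# Hypothetical model of example method 200; units are arbitrary (e.g., MB).

class MemorySystem:
    def __init__(self, mlc_capacity, bits_per_cell=2):
        self.mlc_capacity = mlc_capacity   # capacity while configured as MLC
        self.bits_per_cell = bits_per_cell
        self.slc_capacity = 0
        self.stored = 0

    def on_production_indication(self, estimated_size):
        # Steps 201/203: convert enough MLC to SLC for the estimated download.
        converted = min(estimated_size,
                        self.mlc_capacity // self.bits_per_cell)
        self.slc_capacity = converted
        self.mlc_capacity -= converted * self.bits_per_cell

    def receive_download(self, size):
        # Step 205: fill SLC first, then spill the remainder into MLC.
        in_slc = min(size, self.slc_capacity)
        in_mlc = size - in_slc
        if in_mlc > self.mlc_capacity:
            raise MemoryError("download exceeds total capacity")
        self.stored = size

    def leave_production(self):
        # Steps 209/211: restore the specified MLC capacity while
        # retaining the downloaded information.
        self.mlc_capacity += self.slc_capacity * self.bits_per_cell
        self.slc_capacity = 0
```

In this model, a 1024-unit system receiving a 200-unit download would temporarily give up 400 units of MLC capacity for a 200-unit SLC pool, then recover the full 1024 units after production.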
Such downloads may be intended to remain in the system after production and may include an operating system and related applications and files for field use. The chart shows a second curve 302 of a download to a memory system that uses only MLC-configured memory to receive and save the image. In the illustrated example, the size of the image is larger than the memory that would be available if all of the memory were configured for SLC operation. The second curve 302, for the download using only MLC, shows a very consistent download speed throughout the download process. The first curve 301, for the download using SLC-configured memory, shows a much higher download speed earlier in the download, before the SLC memory is filled. Once the SLC-configured memory is filled, the download begins to fill the MLC-configured memory and the download speed decreases accordingly. The higher download speed associated with downloading to SLC-configured memory represents significant time and resource savings in the production phase. In some instances, because at least a portion of the image is stored in SLC-configured memory during additional production processes such as reflow, the image has a greater chance of being retained than if the entire image were stored in MLC memory. Therefore, using SLC-configured memory can avoid extra downloads. Later in production, the memory system can retain the image and convert the SLC-configured memory back to MLC-configured operation to provide the user with the specified memory capacity. FIG. 4 illustrates an example machine of a computer system 400 within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein. In some implementations, the computer system 400 may correspond to a host system (e.g., the host system 120 of FIG. 1) that includes or uses a memory system (e.g., the memory system 110 of FIG. 1), or that can be used to perform the operations of a controller. 
In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a network appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is described, the term "machine" shall also include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system 400 includes a processing device 402, a main memory 404 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 406 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 418, which communicate with each other via a bus 430. The processing device 402 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. 
The processing device 402 may also be one or more special-purpose processing devices, such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 402 is configured to execute instructions 426 for performing the operations and steps discussed herein. The computer system 400 may further include a network interface device 408 that communicates via a network 420. The data storage system 418 may include a machine-readable storage medium 424 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions or software 426 embodying any one or more of the methodologies or functions described herein. The instructions 426 may also reside, completely or at least partially, within the main memory 404 and/or within the processing device 402 during execution thereof by the computer system 400, the main memory 404 and the processing device 402 also constituting machine-readable storage media. The machine-readable storage medium 424, the data storage system 418, and/or the main memory 404 may correspond to the memory system 110 of FIG. 1. In one embodiment, the instructions 426 include instructions to implement functionality corresponding to reconfiguring memory from MLC operation to SLC operation during production for downloading information such as an image, and, after production, retaining the information while reconfiguring the SLC memory for operation as MLC memory. While the machine-readable storage medium 424 is shown in an example implementation to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. 
The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. 
Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks; read-only memories (ROMs); random access memories (RAMs); EPROMs; EEPROMs; magnetic or optical cards; or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium such as a read-only memory ("ROM"), a random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, and the like. In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. 
It will be apparent that various modifications and changes may be made thereto without departing from the broader spirit and scope of the embodiments of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Examples

Example 1 is a method comprising: entering a first operating mode of a device including a memory device; receiving first information indicating a subsequent download of second information at the memory device, the memory device including a first group of memory cells configured to operate as multi-level cell (MLC) memory; in response to receiving the first information, converting a portion of the first group of memory cells from being configured as MLC memory to being configured as single-level cell (SLC) memory; receiving and storing the second information at the memory device; and upon leaving the first operating mode, reconfiguring at least a portion of the SLC memory as MLC memory while maintaining the second information stored at the memory device.
In Example 2, the subject matter of Example 1 includes heating the memory device after receiving and storing the second information and before reconfiguring the portion of the SLC memory.
In Example 3, the subject matter of Example 2, wherein the heating comprises heat characteristic of reflow soldering the memory device, or a component of the device containing the memory device, onto the device.
In Example 4, the subject matter of any one of Examples 1-3, wherein receiving the first information includes receiving an estimate of a size of the second information.
In Example 5, the subject matter of Example 4, wherein the converting includes determining, based on the estimated size of the second information and a capacity of the memory device, a maximum amount of SLC-configurable memory available for storing the second information.
In Example 6, the subject matter of any one of Examples 4-5, wherein receiving and storing the second information includes: determining that the estimated size of the second information is smaller than an actual size of the second information; during the first operating mode, reconfiguring a portion of the SLC memory as MLC memory while maintaining any second information stored on the portion of the SLC memory; and receiving additional second information into the MLC memory.
In Example 7, the subject matter of any one of Examples 1-6, wherein the first operating mode is a manufacturing operating mode.
In Example 8, the subject matter of any one of Examples 1-7, wherein the second information includes an operating system.
In Example 9, the subject matter of any one of Examples 1-8, wherein the second information includes an automotive navigation, communication, and entertainment operating system.
In Example 10, the subject matter of Example 9, wherein the second information includes an automotive navigation, communication, or entertainment application.
In Example 11, the subject matter of any one of Examples 1-10, wherein the second information includes an automotive diagnostic operating system.
In Example 12, the subject matter of any one of Examples 1-11, wherein the MLC memory comprises three-level cell (TLC) memory.
In Example 13, the subject matter of any one of Examples 1-12, wherein the MLC memory comprises four-level cell (QLC) memory.
Example 14 is a memory circuit comprising: memory cells configured to provide multi-level cell (MLC) storage; and a controller operatively coupled to the memory cells, the controller configured to perform operations including: receiving an indication of a production mode of a device containing the memory circuit; receiving an estimated size of a subsequent download during the production mode; in response to the estimated size, reconfiguring at least a portion of the memory cells from operation as MLC storage to operation as single-level cell (SLC) storage; receiving the subsequent download to the memory cells; and upon receiving an indication that the device is leaving the production mode, reconfiguring the at least a portion of the memory cells from operation as SLC storage to operation as MLC storage while maintaining information of the subsequent download in the memory cells.
In Example 15, the subject matter of Example 14, wherein the memory cells are configured to provide three-level cell (TLC) storage.
In Example 16, the subject matter of any one of Examples 14-15, wherein the memory cells are configured to provide four-level cell (QLC) storage.
Example 17 is a machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations including: receiving a first indication of a production mode of a device including a memory device; receiving first information indicating a subsequent download of second information at the memory device, the memory device including a first group of cells configured to operate as multi-level cell (MLC) memory; in response to receiving the first information, converting a portion of the cells of the first group from operation as MLC memory to operation as single-level cell (SLC) memory; receiving and storing the second information at the memory device; and upon leaving the production mode, reconfiguring at least a portion of the SLC memory to operate as MLC memory while maintaining the second information stored at the memory device.
In Example 18, the subject matter of Example 17, wherein the operations further comprise: determining that the received second information exceeds a size indication received with the first information; and, synchronously with receiving the second information, reconfiguring some of the SLC memory to operate as MLC memory to accommodate storage of the portions of the second information that exceed the size indication.
In Example 19, the subject matter of Example 18, wherein reconfiguring at least a portion of the SLC memory comprises reconfiguring at least a portion of the SLC memory to operate as three-level cell (TLC) memory.
Example 20 is at least one machine-readable medium including instructions that, when executed by a processing circuit, cause the processing circuit to perform operations to implement any of Examples 1-19.
Example 21 is an apparatus comprising means to implement any of Examples 1-19.
Example 22 is a system to implement any of Examples 1-19. Example 23 is a method to implement any of Examples 1-19.
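The overflow handling recited in Examples 6 and 18, reconfiguring SLC memory back to MLC during the download when the size estimate proves low, can be sketched as follows. The generator below is an illustrative assumption: for simplicity it reclaims only unused SLC capacity, whereas the Examples also contemplate retaining data already stored on the reconfigured portion.

```python
# Illustrative sketch of mid-download overflow handling (Examples 6 and 18).
# Units are arbitrary; reclaiming one unit of SLC capacity yields
# bits_per_cell units of MLC capacity.

def store_with_overflow(chunks, slc_free, mlc_free, bits_per_cell=2):
    """Yield ('SLC'|'MLC', chunk) placements, reconfiguring unused SLC
    back to MLC when both pools would otherwise be exhausted."""
    for chunk in chunks:
        if chunk <= slc_free:
            slc_free -= chunk
            yield ("SLC", chunk)
        elif chunk <= mlc_free:
            mlc_free -= chunk
            yield ("MLC", chunk)
        else:
            # Reconfigure the remaining unused SLC back to MLC to make
            # room for information exceeding the estimate.
            mlc_free += slc_free * bits_per_cell
            slc_free = 0
            if chunk > mlc_free:
                raise MemoryError("download exceeds total capacity")
            mlc_free -= chunk
            yield ("MLC", chunk)
```

For instance, with 80 units of SLC and 120 units of MLC free, chunks of 50, 60, and 100 units would land in SLC, MLC, and (after reclaiming the 30 unused SLC units as 60 MLC units) MLC, respectively.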
A host controller interface manages the complexity of accessing mass storage while taking into account the special handling needs of various memory technologies, such as polymer memories.
Claims:
1. A system, comprising: a processor; a nonvolatile mass storage device; and a host control interface to couple the processor to the nonvolatile mass storage device and issue read/write commands to manage polarity.
2. The system of claim 1 wherein the nonvolatile mass storage device is a disk cache.
3. The system of claim 1 wherein the nonvolatile mass storage device has polymer memory devices.
4. The system of claim 3 wherein the nonvolatile mass storage device having polymer memory devices is a disk cache.
5. The system of claim 1 wherein polarity management ensures that data is stored in the nonvolatile mass storage device with a polarity opposite to that last used for a memory word.
6. The system of claim 1 wherein polarity management includes an explicit polarity control with a polarity indicator to determine data polarity for each write.
7. The system of claim 1 wherein polarity management includes a recovered polarity that uses a last polarity from a read operation for a subsequent write operation.
8. The system of claim 1 wherein polarity management includes an automatic polarity where contents of a polarity map determine polarity on reads and the polarity in the polarity map is toggled for writes.
9. A computer system, comprising: a processor; multiple memory devices; and a host controller interface to couple the processor to the multiple memory devices and issue a multi-control command to address the multiple memory devices with potentially different operation types.
10. The computer system of claim 9 where the multiple memory devices form a disk cache.
11. The computer system of claim 9 where the multiple memory devices are polymer memory devices.
12. The computer system of claim 9 where the multiple memory devices are flash memory devices.
13. 
The computer system of claim 9, wherein the multi-control command allows one command packet to be fetched, decoded, and executed to provide different memory operations to the multiple memory devices.
14. The computer system of claim 9, wherein the multi-control command accesses memory words in different devices of the multiple memory devices.
15. A system comprising: a processor; addressable mass storage devices; and a host controller interface to couple processor commands to the addressable mass storage devices and account for special handling needs of polymer devices in the addressable mass storage devices.
16. The system of claim 15 wherein the special handling needs include reporting a number of error corrections.
17. The system of claim 15 wherein the special handling needs include using a polarity map to determine how polarity is to be handled for a specific access.
18. The system of claim 15 wherein the special handling needs include using a timing control to specify, on a per-operation basis, what timing should be used for read/write operations.
19. The system of claim 15 wherein the special handling needs include using dynamic addressing to write data to a location in a different segment from where the data was read in the addressable mass storage devices.
20. The system of claim 15 wherein the addressable mass storage devices represent a disk cache having multiple cache storage devices.
21. The system of claim 20 wherein the special handling needs further include storing a minimum and a maximum cache line size and metadata size.
22. A system comprising: a processor having a transceiver coupled to dual antennas; and a memory module coupled to the processor and including, (a) a memory controller, (b) storage devices to form a mass storage that is coupled to the memory controller, and (c) a host controller coupled to the processor to provide a refresh cycle issued through an interface to the storage devices. 
23. The system of claim 22 wherein the storage devices are polymer memory devices.
24. The system of claim 22 wherein the storage devices are flash memory devices.
25. The system of claim 22 wherein the memory module is a bus master device that is given a list of commands to asynchronously process.
26. The system of claim 25 wherein the list of commands is processed without involvement by the processor.
27. The system of claim 22 wherein data stored by the storage devices on the memory module is not directly accessible by processor instructions.
28. The system of claim 27 wherein the data stored by the storage devices on the memory module is copied to/from system memory.
29. A system, comprising: a processor; main memory coupled to the processor; and a disk cache memory module having a programming interface capable of streaming read/write data without direct processor instruction access to storage devices on the disk cache memory module, where data stored in the storage devices is retrieved and stored in the main memory.
30. The system of claim 29, wherein the storage devices are flash devices.
31. The system of claim 29, wherein the storage devices are polymer devices.
32. The system of claim 31, wherein the polymer devices are a ferroelectric polarizable material.
33. 
A method including functions in a host control interface to facilitate read/write operations in a mass storage, including at least one of: (a) providing a continuous associated command to allow a group of commands to be issued together, (b) using a polarity map to determine how polarity is to be handled for a specific access to the mass storage, (c) using a timing control to specify, on a per-operation basis, what timings should be used for read/write operations, (d) using dynamic addressing to write data to a location in a different segment from where the data was read, (e) issuing a multi-command to allow different operations to multiple storage devices in the mass storage, (f) providing a refresh cycle, (g) recording a number of corrections applied to the mass storage, and (h) using a scatter-gather list to correctly access data in the mass storage.
34. The method of claim 33, wherein facilitating read/write operations in the mass storage includes using the mass storage having a ferroelectric polarizable material.
35. The method of claim 33, wherein facilitating read/write operations in the mass storage includes using the mass storage having a resistive change polymer memory.
36. The method of claim 33, wherein facilitating read/write operations in the mass storage further includes facilitating read/write operations in a polymer storage.
37. The method of claim 33, wherein facilitating read/write operations in the mass storage further includes facilitating read/write operations in a disk cache.
39. The method of claim 37 further including storing a minimum and maximum cache line size and metadata size in the disk cache.
40. A method of error reporting, comprising: providing a periodic memory refresh cycle for storage devices; and allowing a memory controller to detect an error and interrupt the software controlling the storage devices to report a memory refresh failure.
41. 
The method of claim 40 further including: incorporating Polymer Ferroelectric Memory (PFEM) devices for the storage devices.
42. The method of claim 40 wherein providing the periodic memory refresh cycle for storage devices further includes providing the periodic memory refresh cycle for cache storage devices.
43. An article comprising a machine-readable storage medium containing instructions that, if executed, enable a host controller interface to control read/write operations for mass storage that include at least one of: providing a continuous list of commands to allow a group of commands to be issued together; using a polarity map to determine how polarity is to be handled for a specific access of the mass storage; using a timing control to specify, on a per-operation basis, what timing should be used for read/write operations; using dynamic addressing to write data to a location in a different segment of the mass storage from where the data was read; issuing a multi-command to allow different operations to multiple devices in the mass storage; providing a refresh cycle; and reporting a number of memory error corrections.
44. The article of claim 43 wherein the mass storage is a flash memory.
45. The article of claim 43 wherein the mass storage is a polymer storage.
46. The article of claim 45 wherein the polymer storage includes a ferroelectric polarizable material.
47. The article of claim 45 wherein the polymer storage includes a resistive change polymer memory.
48. The article of claim 45 wherein the mass storage is a disk cache.
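The automatic-polarity scheme of claim 8, in which the contents of a polarity map determine polarity on reads and the map entry is toggled on writes so that data is always stored with the polarity opposite to that last used (claim 5), can be illustrated with a small model. The class below is a hedged sketch: the one-bit-per-word map, the 8-bit word size, and the XOR-based inversion are illustrative assumptions, not the claimed implementation.

```python
# Illustrative model of the automatic-polarity scheme of claims 5 and 8.

class PolarityManagedStore:
    def __init__(self, words):
        self.cells = [0] * words      # raw cell contents
        self.polarity = [0] * words   # polarity map: 0 = true, 1 = inverted
        self.mask = 0xFF              # assume 8-bit words for this sketch

    def write(self, addr, value):
        # Toggle the map entry first, then store with the new polarity,
        # so each write uses the polarity opposite to the last one.
        self.polarity[addr] ^= 1
        if self.polarity[addr]:
            value ^= self.mask        # store the inverted form
        self.cells[addr] = value

    def read(self, addr):
        # The map determines how to interpret the raw cell contents.
        value = self.cells[addr]
        if self.polarity[addr]:
            value ^= self.mask        # undo the inversion
        return value
```

Writing the same value twice to one address thus alternates the physical polarity of the stored word while reads always return the logical value.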
INTERFACE FOR A BLOCK ADDRESSABLE MASS STORAGE SYSTEM

There are several interfaces in use today for mass storage devices that facilitate data accesses between the processor and the cache mass storage. A direct memory-mapped interface and a standard block addressable interface have been used for mass storage devices, but neither is well suited for a disk cache. What is needed is an interface that can improve system performance for a disk cache.

BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings, in which: FIG. 1 illustrates a device having an interface between a processor and mass storage devices in accordance with the present invention; FIG. 2 is a diagram that highlights features of the present invention; FIG. 3 shows a five cache line disk request; and FIG. 4 shows a command sequence for the request in FIG. 3. It will be appreciated that, for simplicity and clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.

DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. 
In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention. In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. FIG. 1 illustrates a device 10 that may include a transceiver 14 that either receives or transmits a modulated signal from one or more antennas. The analog front end transceiver may be a stand-alone Radio Frequency (RF) integrated analog circuit, or alternatively, embedded with processor 12 as a mixed-mode integrated circuit. The received modulated signal is frequency down-converted, filtered, then converted to a baseband, digital signal. The digital data processed by processor 12 may be transferred across an interface 16 for storage by storage devices 20, 22, ..., 24 and 26 on a memory module. It should be understood that storage devices 20, 22, ..., 24 and 26 may be used as a cache. A Network Interface Card (NIC) may facilitate the transfer of data across interface 16 and may incorporate a Peripheral Component Interconnect (PCI) bus as defined by the PCI Local Bus Specification, dated June 1995, or alternately, a bus such as the PCI <Desc/Clms Page number 3> Express bus or any other high bandwidth bus. By way of example and for ease of description, the memory module shown in FIG. 1 has four storage devices 20, 22, 24 and 26.
In one embodiment, each of the four storage devices may have a memory size of 256 Mbyte, but neither the size of the storage devices nor the number of devices that populate the memory module is a limitation of the present invention. Further, storage devices 20, 22, 24 and 26 may be packaged separately, stacked as multiple memory devices in one package or integrated together and addressable as separate blocks of memory. Storage devices 20, 22, ..., 24 and 26 may store both data processed by processor 12 and metadata used by the memory management system for administrative purposes. The memory module has support to access data-only, independently accessible metadata-only, or data plus metadata. A memory controller 28 on the memory module is connected via address and control buses to the storage devices. Memory controller 28 retrieves and processes current commands, and when processing is completed, a command status is appropriately set. Memory controller 28 further implements a memory mapping algorithm to improve the performance of device 10. Note that a host controller 30 is connected with a Host Controller Interface (HCI) 18, memory controller 28, and processor 12. In one embodiment, storage devices 20, 22, 24 and 26 may be a relatively large non-volatile disk cache memory adapted to cache information for a mass store system (not shown) coupled to processor 12. The mass store system typically has a storage capacity, for example, of at least about one gigabyte. The mass storage system may be an electromechanical hard disk memory, an optical disk memory, or a magnetic disk memory, although the scope of the present invention is not limited in this respect. In one embodiment, storage devices 20, 22, ..., 24 and 26 may be polymer memory having a storage capacity of at least about 250 megabytes and may include ferroelectric memory cells, wherein each cell includes a ferroelectric polymer material located between at least two conductive lines.
<Desc/Clms Page number 4> In this embodiment the ferroelectric polymer material may be a ferroelectric polarizable material and include a ferroelectric polymer material comprised of a polyvinyl fluoride, a polyethylene fluoride, a polyvinyl chloride, a polyethylene chloride, a polyacrylonitrile, a polyamide, copolymers thereof, or combinations thereof. In an alternate embodiment, storage devices 20, 22,..., 24 and 26 may be a polymer memory such as, for example, a plastic memory or a resistive change polymer memory. In this embodiment, the plastic memory may include a thin film of polymer memory material sandwiched at the nodes of an address matrix. The resistance at any node may be altered from a few hundred ohms to several megohms by an electric potential supplied across the polymer memory material and a positive or negative current flowing in the polymer material that alters the resistance of the polymer material. Potentially, different resistance levels may store several bits per cell and data density may be increased further by stacking layers. In addition to polymer memory, cache storage devices may be a NOR or NAND Flash or battery backed-up DRAM. Embodiments of the present invention for device 10 may be used in a variety of applications, with the claimed subject matter incorporated into microcontrollers, general-purpose microprocessors, Digital Signal Processors (DSPs), Reduced Instruction-Set Computing (RISC), Complex Instruction-Set Computing (CISC), among other electronic components. In particular, the present invention may be used in smart phones, communicators and Personal Digital Assistants (PDAs), medical or biotech equipment, automotive safety and protective equipment, and automotive infotainment products. However, it should be understood that the scope of the present invention is not limited to these examples. FIG. 
2 illustrates a Host Controller Interface (HCI) 18 that, in this embodiment, is implemented on an add-in card for PCI-Express bus transfers across interface 16, but note that other embodiments may adopt other buses. In general, the memory module hardware in HCI 18 processes lists of software created <Desc/Clms Page number 5> commands that may be issued without processor 12 involvement until the module hardware signals process completion. Memory data stored by cache storage devices 20, 22, ..., 24 and 26 on the memory module is not directly accessible by CPU instructions. The cache stored data may be copied to/from system memory 32 such as, for example, Dynamic Random Access Memory (DRAM). The memory module is a bus master device that is given lists of commands to asynchronously process. A command identifies a buffer in system memory used to hold the data associated with a command. Thus, HCI 18 provides a memory module programming interface capable of streaming read/write data across interface 16 without direct CPU instruction access to the cache storage devices. In other words, HCI 18 is not a direct, memory-like interface to access memory storage. The present invention includes an interface (HCI 18) positioned between a processor and mass storage devices. HCI 18 provides associated functions and services required to support the mass storage devices, with various features of the present invention implemented in either hardware or software. In various embodiments, HCI 18 may include all or a subset of the described features. As shown in FIG.
2, the present invention includes features such as a continuous associated command 200 that allows a group of commands to be issued together; a polarity map mechanism 210, a timing control 220 and a dynamic addressing 230 designed to support characteristics of Polymer Ferroelectric Memory (PFEM) memory technology; a multi-control command 240 to optimize performance for a disk caching environment; a refresh 250; a meta-data size & cache line size 260 that provides memory word read/write operations; a data errors 270 and Error Correction Code (ECC) correction 280 for reporting memory errors; and an optimized scatter gather list 290 to improve system performance. CONTINUOUS ASSOCIATED COMMAND 200 <Desc/Clms Page number 6> FIG. 2 includes a continuous associated command 200 issued within HCI 18 that is designed for cache accesses. User requests for data from cache storage devices 20, 22, ..., 24 and 26 may require that multiple cache lines be accessed to fulfill the request. Due to the nature of set associative cache mapping algorithms, a request for continuous disk sectors may not necessarily map to continuous cache lines. (FIG. 3 illustrates continuous disk sectors mapped to different cache lines.) HCI 18 defines a command list structure in system memory and a doorbell register (not shown) that allows a group of commands to be issued together. Each command includes at least one bit to indicate if the command is active and a pointer to the next command. Upon receiving a pointer to the start of the command chain and having the doorbell 'rung', HCI 18 will fetch a command, process the command and advance to the next command until a non-active command is found. Additional commands may be inserted at the end of the chain to ensure that the cache hardware is always active if outstanding requests exist. A further optimization may be made to allow software to specify whether an interrupt should be generated when the command completes.
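The command-chain traversal just described (an active bit, a next-command pointer, and a per-command interrupt bit) can be sketched as follows. This is a minimal illustrative sketch: the structure layout, field names, and function are assumptions for illustration, not the actual register-level interface of HCI 18.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of an HCI command list entry: each command carries
 * an active bit, an interrupt-on-complete bit, and a pointer to the next
 * command in the chain. All names here are hypothetical. */
struct hci_cmd {
    int active;             /* nonzero: command is ready to be processed */
    int irq_on_complete;    /* request an interrupt when this command finishes */
    struct hci_cmd *next;   /* next command in the chain */
};

/* Fetch and process commands until a non-active command is found.
 * Returns the number of commands processed; *irqs counts interrupts raised. */
static int process_chain(struct hci_cmd *head, int *irqs)
{
    int done = 0;
    *irqs = 0;
    for (struct hci_cmd *c = head; c != NULL && c->active; c = c->next) {
        /* ... the hardware would execute the command here ... */
        c->active = 0;              /* mark command complete */
        if (c->irq_on_complete)
            (*irqs)++;              /* typically set on the last command only */
        done++;
    }
    return done;
}
```

Setting the interrupt bit only on the final command of a group mirrors the one-interrupt-per-group optimization described in the text.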
This programmable interrupt bit allows a command list to be structured such that only one interrupt is generated per group of associated commands, which minimizes system overhead. FIG. 4 illustrates hardware and software activity related with the continuous associated command 200. A list of commands is shown, for example, as commands 402, 404, 406 and 408. Each command includes at least one bit to indicate if the command is active (labeled ACTIVE SET) and the pointer to the next command. The diagram further illustrates that commands are fetched and processed, and advancement to the next command continues until a non-active command 410 is found. POLARITY MAP MECHANISM 210 FIG. 2 shows a polarity map 210 to support characteristics of PFEM memory technology within HCI 18. Data may be written into memory cells in <Desc/Clms Page number 7> any of cache storage devices 20, 22, ..., 24 and 26 by controlling the voltages on selected word lines and bit lines. The memory cell may be programmed to either a "physical 0" state or a "physical 1" state, but memory controller 28 (see FIG. 1) may interpret whether the physical value of a storage cell read represents a 1 state or a 0 state. Various memory technologies may have different requirements for representation of the stored state, and accordingly, memory controller 28 is designed with a software-controlled polarity management mechanism that determines how polarity is to be dealt with for the specific access. In one embodiment software specifies the polarity mechanism on each read/write operation, although in alternate embodiments the polarity mechanism may be applied on a global basis through multiple operation control. Three polarity management mechanisms may be specified to ensure that each time data is stored in a memory word, the polarity used is opposite of that last used for the memory word.
A first polarity management mechanism provides 'explicit polarity control' where software specifies a TRUE/COMPLEMENT polarity indicator for each write and memory controller 28 recovers the polarity state from the storage location on a read. Data in system memory is always in TRUE polarity representation. Software doesn't need to make any transformations of data stored in the memory module in COMPLEMENT polarity. Memory controller 28 depends on software to do any required toggling. Another polarity management mechanism for 'recovered polarity' allows memory controller 28 to use the "last" polarity determined from a read operation to do a subsequent write operation. Software may specify "automatic polarity" for an access as another polarity management mechanism. Memory controller 28 keeps a separate volatile polarity map (kept in RAM) that has a polarity state for each word of the memory module, i. e., each storage location or group of cells. During normal runtime, memory controller 28 uses the contents of the polarity map to determine polarity on reads and toggles the polarity in the map for writes. No recovery of polarity is required for reads. Software is required to load the <Desc/Clms Page number 8> polarity map before any automatic polarity mechanism is used (other mechanisms could be used before this). On system shutdown, software is responsible for reading the polarity map from the memory controller and saving it to some other non-volatile storage media. TIMING CONTROL 220 FIG. 2 shows a timing control 220 to support characteristics of PFEM memory technology within HCI 18. Different memory technologies may require different detailed hardware cycle timings for specific aspects of read/write operations to access stored values. For example, delays or pauses may be used for polymer memory technologies during reading and writing to the memory to avoid changes in cell polarization.
Further, depending on whether the requested address is in the same memory segment as the last memory operation, a delay operation may or may not be performed. Certain memory technologies may require slower timings for locations that haven't been accessed for some time period, with either slow or fast timings specified for a given read/write operation to memory locations. Accordingly, memory controller 28, under software control, may specify on a per operation basis what timings should be used for read/write operations. DYNAMIC ADDRESSING 230 FIG. 2 shows a dynamic addressing 230 to support characteristics of PFEM memory technology within HCI 18. A read cycle for the polymer memory devices in cache storage devices 20, 22, ..., 24 and 26 may be destructive and polarize electric dipoles in the polymer film material in one direction. Since information stored at a particular physical address of the memory may be lost during the destructive read operation, the information may be written back to the memory to restore the data. Thus, to read information from such a destructive read memory, a read cycle may include a subsequent write back operation. Within a segment of memory in a cache <Desc/Clms Page number 9> storage device there may be a vulnerability to writes following reads. The vulnerability imposes a performance penalty such as waiting to perform the write back until the vulnerability passes. However, in accordance with the present invention, HCI 18 provides an algorithm allowing data that was read to be written to a location in a different segment. Accordingly, one feature of the present invention is that HCI 18 includes two addresses for every access, one address for a read and another address for the write. Thus, every interface level access operates on two locations, ideally in different segments of the memory. A read operation specifies an address to read plus a blank location where the data may be written back.
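The two-address access scheme for a destructive-read memory can be sketched as follows. This is a behavioral sketch under stated assumptions: the arrays, sizes, and function names are illustrative, and real hardware would track blank locations in the controller rather than in software.

```c
#include <assert.h>

/* Illustrative model: a small memory with a per-location "blank" flag.
 * Every access names a source location and a blank destination, ideally
 * in different segments. All names and sizes here are hypothetical. */
enum { SEGMENTS = 4, WORDS_PER_SEG = 8 };

static int mem[SEGMENTS * WORDS_PER_SEG];   /* stored data */
static int blank[SEGMENTS * WORDS_PER_SEG]; /* 1 if the location is blank */

/* Destructive read: returns the data at 'src' and writes it back to the
 * blank location 'dst'. Consumes one blank (dst) and creates one (src). */
static int dyn_read(int src, int dst)
{
    int data = mem[src];
    blank[src] = 1;     /* the destructive read leaves src blank */
    mem[dst] = data;    /* write-back, ideally to a different segment */
    blank[dst] = 0;
    return data;
}

/* Write: erase (make blank) 'erase_addr' and place the new data at the
 * already-blank destination 'dst'. */
static void dyn_write(int erase_addr, int dst, int data)
{
    blank[erase_addr] = 1;
    mem[dst] = data;
    blank[dst] = 0;
}
```

Note how both operations preserve the invariant that exactly one blank is consumed and one is created, which is what lets the write-back land outside the vulnerable segment.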
The read operation consumes a blank and creates a blank. The write operation specifies an address to erase (make blank) and an address that is already blank which is the destination for the write data. MULTI-CONTROL COMMAND 240 FIG. 2 shows a multi-control command 240 issued within HCI 18 to optimize performance for a disk caching environment. Briefly referring to FIG. 1, HCI 18 provides the interface between commands issued by processor 12 and the operation of the M memory storage devices connected to memory controller 28. HCI 18 includes a multi-control command feature that allows software to issue the same operation or a different operation to multiple cache storage devices 20, 22, ..., 24 and 26 on the memory module card. The multi-command feature allows one command packet, which can share common data and can be transferred more efficiently over PCI-Express, to be fetched, decoded, executed and potentially provide different memory operations for each cache storage device on the card. The multi-control command feature allows each cache storage device to address different address locations with potentially different operation types. By way of example, memory controller 28 may perform a read cycle that includes a destructive read operation within cache storage device 20 while simultaneously issuing a write operation to another device such as <Desc/Clms Page number 10> cache storage device 22. Thus, multi-commands access memory words in different cache storage devices. When a multiple cache storage device access is specified, each access may have unique operation parameters. Most memory accesses include an operation, a block count, and two block addresses along with other operation specific parameters for the command. <Desc/Clms Page number 11> REFRESH 250 FIG. 2 shows a refresh 250 to support characteristics of PFEM memory technology within HCI 18. HCI 18 allows both time-based and cycle-based refresh cycles.
Time-based refresh is similar to DRAM refresh in that the stored data is accessed periodically. Whereas DRAM devices provide a refresh cycle to pump up leaking capacitors, the time-based refresh prevents the polymer memory devices in cache storage devices 20, 22, ..., 24 and 26 from becoming "imprinted" or stuck in a current state. HCI 18 provides an initial loop through all addresses at power up, followed by normal access time reads at regular time intervals to ensure that cells do not become imprinted during power on time. If information read from a requested address is written back to the same address, neighboring unselected memory cells sharing the same word line or bit lines as the selected memory cell may experience "disturbances". An interaction of the electrode material with the polymer memory material in a memory cell may result in a disturbance of the polarization if the memory operations are performed within a relatively short period of time. Thus, accesses to one location in a segment of memory may result in disturbances to other locations within the segment. Each disturb erodes the stored charge in the memory, and after N disturbs the stored data is read to ensure a reliable read operation. Thus, HCI 18 provides cycle-based refresh addresses inserted every N cycles to bound the effects of a disturb and to limit each location within the segment to N disturbs. META-DATA SIZE & CACHE LINE SIZE 260 FIG. 2 shows a meta-data size & cache line size 260 that provides memory operations within HCI 18. The PFEM memory controlled by HCI 18 has the ability to atomically read/write meta-data and data for each cache line. In order to do this, hardware must know the size of both the cache line and meta-data. A set of registers (not shown) is defined within HCI 18 to store the minimum and maximum cache line size and the metadata size, along <Desc/Clms Page number 12> with sizes that provide optimal hardware performance as determined by cache policies in software.
Using these size values, HCI 18 is programmed to use the size values that best match the cache policy needs. DATA ERRORS 270 FIG. 2 shows data errors 270 for error detection within HCI 18. Data corruption may occur during the periodic memory refresh cycles for PFEM. PFEM memory is a destructive read memory technology and any errors that occur during the refresh cycle will leave the memory in an unknown state. A read operation on the memory location that has an error may potentially return incorrect data that will not be detected by Error Correcting Code (ECC). To prevent errors from being undetected during the refresh cycle, HCI 18 defines a set of registers (not shown) and an interrupt that allow memory controller 28 to interrupt the software controlling cache storage devices 20, 22, ..., 24 and 26 and report the memory refresh failure. Software may then mark the corresponding cache line as bad and proceed with system operations. ECC CORRECTION 280 FIG. 2 shows an Error Correcting Code (ECC) 280 for error correction within HCI 18. The hardware implements an ECC method as part of data storage and retrieval. The hardware maintains an error log of all corrections, which may be accessed by cache policy software to explicitly determine the results of ECC corrections made during memory accesses. The correction log may be accessed by issuing a command through the normal command process that downloads the correction log into system memory. The correction log may be used by cache policy as an early indication of a possible cache line failure, allowing appropriate corrective steps to be taken to avoid data loss. OPTIMIZED SCATTER GATHER LIST 290 <Desc/Clms Page number 13> FIG. 2 shows an optimized Scatter Gather (SG) list 290 to improve system performance.
Cache lines may span multiple 4 Kbyte physical system memory pages (a typical cache line is 8 Kbytes long), so a scatter gather list is used to correctly DMA data from the cache line into system memory, since the operating system makes no assurances of the buffer being physically contiguous. The scatter gather mechanism used by HCI 18 takes advantage of the fact that each command transfers one cache line worth of data, which allows for optimizations to be made to the scatter gather list. By way of example, very few entries are needed to fulfill a worst-case request: a 16 Kbyte cache line spans at most five physical system memory pages. HCI 18 defines the scatter gather list that resides in the command and advances to the next entry when a system memory page is crossed (4 Kbyte boundaries). The scatter gather list as defined allows for simplifications to be made in the controller logic for the cache. This reduces the cost of the controller and provides performance benefits by eliminating an extra system memory DMA request by the cache controller needed to get a separate scatter gather list. An additional memory address is provided to indicate the location of metadata for the cache line; this allows the command to update both data and metadata atomically in the same command. By now it should be apparent that the complexity of accessing a disk cache may be mitigated using features of the present invention. The host control interface takes into account the special handling needs of various memory technologies such as, for example, polymer memories. While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
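As a sanity check on the scatter gather sizing discussed in the description above, the page-span arithmetic can be sketched with a small helper. The constant and function name are illustrative assumptions, not part of the interface.

```c
#include <assert.h>

/* Illustrative sketch: how many 4 Kbyte physical pages a buffer can span,
 * given its starting offset within a page. A 16 Kbyte cache line spans at
 * most five pages, so a five-entry scatter gather list covers the worst case. */
#define PAGE_SIZE 4096u

static unsigned pages_spanned(unsigned offset_in_page, unsigned length)
{
    /* first (possibly partial) page plus remaining full/partial pages,
     * computed as a ceiling division over the page size */
    return (offset_in_page + length + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

A page-aligned 16 Kbyte line spans exactly four pages; any nonzero offset within the first page pushes the worst case to five, which matches the five-entry list described above.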
Stored units of information related to packet processing are associated with identifiers, each of which is maintained as an entry in a Content Addressable Memory (CAM). Each entry includes status information associated with the information unit with which the identifier is associated. The status information is used to determine validity of the information unit with which the status information is associated.
What is claimed is: 1. A method of accessing shared-access information stored in memory comprising: storing units of information related to packet processing, each unit having an associated identifier; maintaining each identifier as an entry in a Content Addressable Memory (CAM), each entry including status information associated with the information unit with which the identifier is associated; and using the status information to determine validity of the information unit with which the status information is associated. 2. The method of claim 1 wherein the status information comprises: a lock status to indicate that the information unit with which the status information is associated is in the process of being modified. 3. The method of claim 1 wherein the information units are information units stored in a cache, the cached information units collectively corresponding to a portion of all such information units stored in memory. <Desc/Clms Page number 34> 4. The method of claim 3 wherein the information units are queue descriptors. 5. The method of claim 4 wherein the associated identifiers are queue numbers. 6. The method of claim 1 wherein the information units correspond to information in state tables. 7. The method of claim 6 wherein the associated identifiers are packet flow identifiers. 8. The method of claim 3, further comprising: performing a lookup in the CAM of a selected one of the information units stored in the memory based on the associated identifier; and receiving from the CAM a lookup result that indicates if a match was found, a match indicating that the selected one of the information units is one of the cached information units. <Desc/Clms Page number 35> 9. The method of claim 5 wherein the lookup result includes the status information of the matched identifier. 10. 
The method of claim 8 wherein the CAM maintains a Least Recently Used (LRU) list of the identifiers in the CAM and, if no match is found, the result providing an index to an identifier from the LRU list. 11. The method of claim 10, further comprising using the LRU identifier to replace one of the cached information units with the selected one of the information units from memory. 12. The method of claim 11, further comprising: replacing the LRU identifier in the CAM with the identifier associated with the selected one of the information units. 13. The method of claim 8, further comprising: setting the lock status of the identifier associated with the selected one of the information units to indicate invalid status; modifying the selected one of the information units in the cache; and <Desc/Clms Page number 36> upon completion of the modification, changing the lock status to indicate valid status. 14. A computer program product residing on a computer-readable medium comprising instructions to cause a computer to: store units of information related to packet processing, each unit having an associated identifier; maintain each identifier as an entry in a Content Addressable Memory (CAM), each entry including status information associated with the information unit with which the identifier is associated; and use the status information to determine validity of the information unit with which the status information is associated. 15. A computer program product of claim 14, wherein the information units are information units stored in a cache, the cached information units collectively corresponding to a portion of all such information units stored in memory. 16.
An apparatus comprising: a processor; <Desc/Clms Page number 37> a memory storing a computer program product residing on a computer-readable medium comprising instructions to cause a computer to: store units of information related to packet processing, each unit having an associated identifier; maintain each identifier as an entry in a Content Addressable Memory (CAM), each entry including status information associated with the information unit with which the identifier is associated; and use the status information to determine validity of the information unit with which the status information is associated. 17. The apparatus of claim 16 wherein the information units are information units stored in a cache, the cached information units collectively corresponding to a portion of all such information units stored in memory.
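The CAM behavior recited in the claims above, identifier entries carrying status information and an LRU index returned on a miss, can be sketched as follows. The entry layout, field names, and the age-counter LRU policy are illustrative assumptions; a hardware CAM would perform the match in parallel rather than by scanning.

```c
#include <assert.h>

/* Illustrative sketch of a small CAM whose entries pair an identifier
 * (e.g. a queue number) with status bits. All names are hypothetical. */
enum { CAM_ENTRIES = 4, CAM_MISS = -1 };

struct cam_entry {
    unsigned id;    /* identifier, e.g. queue number or flow identifier */
    int valid;      /* entry is in use */
    int locked;     /* information unit is in the process of being modified */
    unsigned age;   /* larger value = more recently used */
};

static struct cam_entry cam[CAM_ENTRIES];
static unsigned cam_clock;

/* Returns the matching entry index, or CAM_MISS with *lru_idx set to the
 * least recently used entry, which software would then replace. */
static int cam_lookup(unsigned id, int *lru_idx)
{
    int lru = 0;
    for (int i = 0; i < CAM_ENTRIES; i++) {
        if (cam[i].valid && cam[i].id == id) {
            cam[i].age = ++cam_clock;   /* mark most recently used */
            return i;
        }
        if (cam[i].age < cam[lru].age)
            lru = i;
    }
    *lru_idx = lru;
    return CAM_MISS;
}

/* Sketch of the lock sequence: lock, modify the cached unit, then unlock. */
static void cam_lock(int idx)   { cam[idx].locked = 1; }
static void cam_unlock(int idx) { cam[idx].locked = 0; }
```

A hit refreshes the entry's recency and returns its status; a miss hands back the LRU victim, mirroring the replace-then-insert sequence of the claims.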
<Desc/Clms Page number 1> MECHANISM FOR PROVIDING EARLY COHERENCY DETECTION TO ENABLE HIGH PERFORMANCE MEMORY UPDATES IN A LATENCY SENSITIVE MULTITHREADED ENVIRONMENT CROSS REFERENCE TO RELATED APPLICATIONS This application claims priority from U. S. Provisional Patent Application Ser. No. 60/315, 144 (Attorney Docket No. 10559-579P01), filed August 27, 2001. BACKGROUND In a pipelined processing environment, work arrives at a fixed rate. For example, in a network processor application, network packets may arrive every "n" ns. Each arriving packet requires access to information stored in memory (e. g., SRAM). Because the memory access speed is slower than the arrival rate, a pipeline is used to process the packets. The exit rate must match the arrival rate. Each packet is classified into a flow. Successively arriving packets may be from different flows, or from the same flow. In the case of the same flow, processing steps must be performed for each packet in strict arrival order. In prior pipelined network processor implementations, data rates and memory access speeds for "same flow" packet processing are in a ratio such that the memory read access time is not greater than the packet arrival rate. Thus, the <Desc/Clms Page number 2> network processor cannot rely on a full pipeline rate without requiring faster memory access speeds. DESCRIPTION OF DRAWINGS FIG. 1 is a block diagram of a communication system employing a processor having multithreaded microengines to support multiple threads of execution. FIG. 2 is a block diagram of a programmable processor datapath (of the microengine from FIG. 1) that includes a CAM. FIG. 3 is a diagram depicting the microengines as a multi-stage, packet processing pipeline. FIG. 4 is a block diagram of the CAM of FIG. 2. FIG. 5A is a depiction of a queue and queue descriptor in SRAM memory. FIG. 5B is a depiction of a cache of queue descriptors and corresponding tag store implemented using the CAM (of FIG. 4). FIG.
6 is a flow diagram illustrating an exemplary use of the CAM during a queue operation by one of the microengines programmed to perform queue management. FIG. 7 is a flow diagram illustrating an exemplary use of the CAM to support Cyclic Redundancy Check (CRC) processing by one of the pipeline microengines programmed to perform CRC processing. <Desc/Clms Page number 3> DETAILED DESCRIPTION Referring to FIG. 1, a communication system 10 includes a processor 12 coupled to one or more I/O devices, for example, network devices 14 and 16, as well as a memory system 18. The processor 12 is a multi-threaded processor and, as such, is especially useful for tasks that can be broken into parallel subtasks or functions. In one embodiment, as shown in the figure, the processor 12 includes multiple microengines 20, each with multiple hardware controlled program threads that can be simultaneously active and independently work on a task. In the example shown, there are sixteen microengines 20, microengines 20a-20p (corresponding to microengines 0 through 15), and each of the microengines 20 is capable of processing multiple program threads, as will be described more fully below. The maximum number of context threads supported in the illustrated embodiment is eight, but other maximum amounts could be provided. Each of the microengines 20 is connected to and can communicate with adjacent microengines via next neighbor lines 21, as shown. In the illustrated embodiment, the microengines 0-7 are organized as a first cluster (ME Cluster 0) 22a and the microengines 8-15 are organized as a second cluster (ME Cluster 1) 22b. The processor 12 also includes a processor 24 that assists in loading microcode control for other resources of <Desc/Clms Page number 4> the processor 12 and performs other general purpose computer type functions such as handling protocols and exceptions, as well as provides support for higher layer network processing tasks that cannot be handled by the microengines.
In one embodiment, the processor 24 is a StrongARM (ARM is a trademark of ARM Limited, United Kingdom) core-based architecture. The processor (or core) 24 has an operating system through which the processor 24 can call functions to operate on the microengines 20. The processor 24 can use any supported operating system, preferably a real-time operating system. Other processor architectures may be used. The microengines 20 each operate with shared resources including the memory system 18, a PCI bus interface 26, an I/O interface 28, a hash unit 30 and a scratchpad memory 32. The PCI bus interface 26 provides an interface to a PCI bus (not shown). The I/O interface 28 is responsible for controlling and interfacing the processor 12 to the network devices 14, 16. The memory system 18 includes a Dynamic Random Access Memory (DRAM) 34, which is accessed using a DRAM controller 36, and a Static Random Access Memory (SRAM) 38, which is accessed using an SRAM controller 40. Although not shown, the processor 12 also would include a nonvolatile memory to support boot operations. The DRAM 34 and DRAM controller 36 are typically used for processing large <Desc/Clms Page number 5> volumes of data, e. g., processing of payloads from network packets. The SRAM 38 and SRAM controller 40 are used in a networking implementation for low latency, fast access tasks, e. g., accessing look-up tables, memory for the processor 24, and so forth. The SRAM controller 40 includes a data structure (queue descriptor cache) and associated control logic to support efficient queue operations, as will be described in further detail later. The microengines 20a-20p can execute memory reference instructions to either the DRAM controller 36 or the SRAM controller 40. The devices 14 and 16 can be any network devices capable of transmitting and/or receiving network traffic data, such as framing/MAC devices, e. g.
, for connecting to 10/100BaseT Ethernet, Gigabit Ethernet, ATM or other types of networks, or devices for connecting to a switch fabric. For example, in one arrangement, the network device 14 could be an Ethernet MAC device (connected to an Ethernet network, not shown) that transmits packet data to the processor 12, and device 16 could be a switch fabric device that receives processed packet data from the processor 12 for transmission onto a switch fabric. In such an implementation, that is, when handling traffic to be sent to a switch fabric, the processor 12 would be acting as an ingress network processor. Alternatively, the processor 12 could operate as an egress network processor, handling traffic that is received from a switch fabric (via device 16) and destined for another network device such as network device 14, or a network coupled to such a device. Although the processor 12 can operate in a standalone mode, supporting both traffic directions, it will be understood that, to achieve higher performance, it may be desirable to use two dedicated processors, one as an ingress processor and the other as an egress processor. The two dedicated processors would each be coupled to the devices 14 and 16. In addition, each network device 14, 16 can include a plurality of ports to be serviced by the processor 12. The I/O interface 28 therefore supports one or more types of interfaces, such as an interface for packet and cell transfer between a PHY device and a higher protocol layer (e.g., link layer), or an interface between a traffic manager and a switch fabric for Asynchronous Transfer Mode (ATM), Internet Protocol (IP), Ethernet, and similar data communications applications. The I/O interface 28 includes separate receive and transmit blocks, each being separately configurable for a particular interface supported by the processor 12.
Other devices, such as a host computer and/or PCI peripherals (not shown), which may be coupled to a PCI bus controlled by the PCI bus interface 26, are also serviced by the processor 12.

In general, as a network processor, the processor 12 can interface to any type of communication device or interface that receives/sends large amounts of data. The processor 12 functioning as a network processor could receive units of packet data from a network device like network device 14 and process those units of packet data in a parallel manner, as will be described. The unit of packet data could include an entire network packet (e.g., Ethernet packet) or a portion of such a packet, e.g., a cell or packet segment.

Each of the functional units of the processor 12 is coupled to an internal bus structure 42. Memory busses 44a, 44b couple the memory controllers 36 and 40, respectively, to respective memory units DRAM 34 and SRAM 38 of the memory system 18. The I/O interface 28 is coupled to the devices 14 and 16 via separate I/O bus lines 46a and 46b, respectively.

Referring to FIG. 2, an exemplary one of the microengines 20a is shown. The microengine (ME) 20a includes a control store 50 for storing a microprogram. The microprogram is loadable by the processor 24. The microengine 20a also includes an execution datapath 54 and at least one general purpose register (GPR) file 56 that are coupled to the control store 50. The datapath 54 includes several datapath elements, including an ALU 58, a multiplier 59 and a Content Addressable Memory (CAM) 60. The GPR file 56 provides operands to the various datapath processing elements including the CAM 60. Opcode bits in the instruction select which datapath element is to perform the operation defined by the instruction. The microengine 20a further includes a write transfer register file 62 and a read transfer register file 64.
The write transfer register file 62 stores data to be written to a resource external to the microengine (for example, the DRAM memory or SRAM memory). The read transfer register file 64 is used for storing return data from a resource external to the microengine 20a. Subsequent to or concurrent with the data arrival, an event signal from the respective shared resource, e.g., memory controllers 36, 40, or core 24, can be provided to alert the thread that the data is available or has been sent. Both of the transfer register files 62, 64 are connected to the datapath 54, as well as the control store 50.

Also included in the microengine 20a is a local memory 66. The local memory 66 is addressed by registers 68a, 68b, which supply operands to the datapath 54. The local memory 66 receives results from the datapath 54 as a destination. The microengine 20a also includes local control and status registers (CSRs) 70, coupled to the transfer registers, for storing local inter-thread and global event signaling information and other information, and a CRC unit 72, coupled to the transfer registers, which operates in parallel with the execution datapath 54 and performs CRC computations for ATM cells. The microengine 20a also includes next neighbor registers 74, coupled to the control store 50 and the execution datapath 54, for storing information received from a previous neighbor ME in pipeline processing over a next neighbor input signal 21a, or from the same ME, as controlled by information in the local CSRs 70.

In addition to providing an output to the write transfer unit 62, the datapath can also provide an output to the GPR file 56 over line 80. Thus, each of the datapath elements, including the CAM 60, can return a result value from an executed operation. A next neighbor output signal 21b to a next neighbor ME in the processing pipeline can be provided under the control of the local CSRs 70.
Other details of the microengine have been omitted for simplification. However, it will be appreciated that the microengine would include (and the control store 50 would be coupled to) appropriate control hardware, such as program counters, instruction decode logic and context arbiter/event logic, needed to support multiple execution threads.

Referring to FIG. 3, an exemplary ME task assignment for a software pipeline model of the processor 12 is illustrated at 90. The processor 12 supports two pipelines: a receive pipeline and a transmit pipeline. The receive pipeline includes the following stages: re-assembly pointer search ("RPTR") 92, re-assembly information update ("RUPD") 94, receive packet processing (six stages) 96a-96f, metering stages ME1 98 and ME2 100, congestion avoidance ("CA") 102, statistics processing 104 and a queue manager ("QM") 106. The receive pipeline begins with data arriving in a receive block of the I/O interface 28 and ends with transmit queues 107 (stored in SRAM). The transmit pipeline stages include: a TX scheduler 108, the QM 106, a Transmit Data stage 110 and the statistics processing 104.

The RPTR, RUPD and packet processing pipe stages work together to re-assemble segmented frames back into complete packets. The RPTR stage 92 finds the pointer to the re-assembly state information in the SRAM 38 and passes this pointer to the RUPD 94. The RUPD 94 manages the re-assembly state, which involves allocating DRAM buffers, calculating offsets, byte counts and other variables, and provides the packet processing stage 96 with a pointer to the location in DRAM where the network data should be assembled. The threads of the packet processing stages 96 complete the re-assembly process by writing the data (payload) to the allocated DRAM buffer and also look at the L2 through L7 packet headers to process the packet.
These stages are application dependent and can therefore vary from one application to another. For example, one application may support IP destination searches to determine destination port, and a 7-tuple search to identify flows and support access lists.

To support ATM re-assembly, the RX pipeline requires a cyclic redundancy code (CRC) stage in addition to the pipe stages already described. CRC support can be provided by replacing the first one of the packet processing stages (stage 96a, as shown) and including additional information in the re-assembly state table. The CRC 96a reads the re-assembly state to get the AAL type and CRC residue, verifies that the Virtual Circuit (VC) is configured for AAL5, performs the CRC calculation over the cell, and updates the CRC residue in the re-assembly state.

Metering 98, 100 is used to monitor bandwidth of a flow. It checks whether each incoming packet is in profile or not. When a connection is made, a set of parameters is negotiated, e.g., Committed Information Rate (CIR) and Committed Burst Size (CBS), which define the bandwidth used by the flow. The metering function can be implemented according to any one of a number of known schemes, such as token bucket.

Congestion avoidance 102 monitors network traffic loads in an effort to anticipate and avoid congestion at common network bottlenecks. The QM 106 is responsible for performing enqueue and dequeue operations on the transmit queues 107 for all packets, as will be described in further detail below.

The receive pipeline threads parse packet headers and perform lookups based on the packet header information. Once the packet has been processed, it is either sent as an exception to be further processed by the core 24, or stored in the DRAM 34 and queued in a transmit queue by placing a packet link descriptor for it in a transmit queue associated with the transmit (forwarding) port indicated by the header/lookup.
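The token bucket scheme mentioned above can be sketched as follows. This is a minimal single-rate model; the field names, byte-based units and tick-driven refill are illustrative assumptions, not the processor's actual metering implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical single-rate token bucket: CIR tokens are added per
 * refill tick, capped at the burst limit (CBS). A packet is "in
 * profile" if enough tokens are available to cover its length. */
typedef struct {
    uint32_t tokens;   /* current token count (bytes) */
    uint32_t cir;      /* committed information rate (bytes per tick) */
    uint32_t cbs;      /* committed burst size (bytes) */
} token_bucket;

static void tb_tick(token_bucket *tb) {
    tb->tokens += tb->cir;
    if (tb->tokens > tb->cbs)
        tb->tokens = tb->cbs;      /* never exceed the burst size */
}

static bool tb_conforms(token_bucket *tb, uint32_t pkt_bytes) {
    if (tb->tokens >= pkt_bytes) {
        tb->tokens -= pkt_bytes;   /* packet is in profile */
        return true;
    }
    return false;                  /* out of profile */
}
```

Out-of-profile packets would then be marked or dropped according to the negotiated policy.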
The transmit queue is stored in the SRAM 38. The transmit pipeline schedules packets for transmit data processing, which then sends the packet out onto the forwarding port indicated by the header/lookup information during the receive processing.

Collectively, the stages 92, 94 and 96a-96f form a functional pipeline. The functional pipeline uses eight microengines (MEs) in parallel, and each of the eight threads (threads 0 through 7) in each ME is assigned a single packet for processing. Consequently, at any one time there are 64 packets in the pipeline. Each stage executes at one packet arrival rate times the execution period of eight threads.

The stages 98, 100, 102, 104, 106, 108 and 110 are context pipe-stages and, as such, are each handled by a single (different) ME. Each of the eight threads in each stage handles a different packet. Some of the pipe stages, such as CRC 96a, RUPD 94 and QM 106, for example, operate on a "critical section" of code, that is, a code section for which only one ME thread has exclusive modification privileges for a global resource at any one time. These privileges protect coherency during read-modify-write operations. Exclusive modification privileges between MEs are handled by allowing only one ME (one stage) to modify the section. Thus, the architecture is designed to ensure that an ME does not transition into a critical section stage until a previous ME has completed its processing in the critical section. For example, the RUPD 94 is a critical section that requires mutual exclusivity to shared tables in external memory. Thus, when transitioning from RPTR 92 to RUPD 94, thread 0 of ME1 of the RUPD 94 will not begin until all threads on ME0 have completed the previous RUPD pipe stage. In addition, strict thread order execution techniques are employed in the pipeline at critical section code points to ensure sequence management of packets being handled by the different threads.
The processor 12 also supports the use of caching mechanisms to reduce packet processing times and improve the speed at which the processor 12 operates with respect to incoming traffic. For example, the SRAM controller 40 (FIG. 1) maintains a cache of most recently used queue descriptors (stored in the SRAM 38), as will be further described. Also, the local memory 66 (FIG. 2) caches CRC information, such as CRC residue (also stored in the SRAM 38), used by the CRC 96a.

If more than one thread in a pipe stage such as the QM 106 is required to modify the same critical data, a latency penalty is incurred if each thread reads the data from external memory (that is, SRAM), modifies it and writes the data back to external memory. To reduce the latency penalty associated with the read and write, the ME threads can use the ME CAM 60 (FIG. 2) to fold these operations into a single read, multiple modifications and, depending on the cache eviction policy, either one or more write operations, as will be described.

FIG. 4 shows an exemplary embodiment of the CAM 60. The CAM 60 includes a plurality of entries 120. In the illustrated embodiment, there are 16 entries. Each entry 120 has an identifier value (or tag) 122, e.g., a queue number or memory address, that can be compared against an input lookup value. As will be discussed later, each identifier value is associated with a stored unit of information that is related to and used during packet processing, e.g., a queue descriptor, re-assembly state data, and so forth. Each entry also includes an entry number 124 and state information 126 associated with the identifier 122 in that same entry. Compare results 128 are provided to a Status and LRU logic unit 130, which produces a lookup result 132. The lookup result 132 includes a hit/miss indicator 134, state information 136 and an entry number 138. Collectively, the fields 134 and 136 provide status 140.
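The CAM organization just described can be modeled in software roughly as follows. The structure layouts are assumptions, and the loop merely simulates what the hardware does as a single parallel compare.

```c
#include <stdint.h>

#define CAM_ENTRIES 16

/* Illustrative software model of the 16-entry CAM of FIG. 4. */
typedef struct {
    uint32_t tag;    /* identifier 122, e.g. queue number or address */
    uint8_t  state;  /* state information 126 */
    uint8_t  valid;
} cam_entry;

typedef struct {
    int     hit;     /* hit/miss indicator 134 */
    uint8_t state;   /* state information 136 (meaningful on a hit) */
    int     entry;   /* entry number 138: match on hit, LRU on miss */
} cam_result;

static cam_entry cam[CAM_ENTRIES];

/* All identifiers are compared against the lookup value; at most one
 * entry is assumed to match. On a miss, the LRU entry is reported. */
static cam_result cam_lookup(uint32_t value, int lru_entry) {
    cam_result r = { 0, 0, lru_entry };
    for (int i = 0; i < CAM_ENTRIES; i++) {
        if (cam[i].valid && cam[i].tag == value) {
            r.hit = 1;
            r.state = cam[i].state;
            r.entry = i;
            break;
        }
    }
    return r;
}
```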
The width of the identifiers 122 is the same as that of the source registers used to load the CAM entries or provide lookup values, e.g., the registers of the GPR file 56 (FIG. 2). In the embodiment shown, the state information 126 is implemented as a state bit. The width and format of the state information, and the number of identifiers, are based on design considerations.

During a CAM lookup operation, the value presented from a source such as the GPR file 56 is compared, in parallel, to each identifier 122, with a resulting Match signal 142 per identifier. The values of each identifier were previously loaded by a CAM load operation. During that load operation, the values from the register file 56 specified which of the identifiers to load and the values to be loaded. The state information is also loaded into the CAM during the CAM load operation.

The identifier 122 is compared against the lookup value in a source operand provided by an instruction, e.g., Lookup[dest_reg, src_reg]. The source operand specified by the parameter "src_reg" holds the lookup value to be applied to the CAM 60 for lookup. The destination register specified by parameter "dest_reg" is the register that receives the result of the CAM 60 lookup. All entries 120 are compared in parallel. In one embodiment, the lookup result 132 is a 6-bit value which is written into the specified destination register in bits 8:3, with the other bits of the register set to zero. The destination register can be a register in the GPR file 56. Optionally, the lookup result 132 can also be written into either of the LMADDR registers 68a, 68b (FIG. 2) of the ME 20a. For a hit (that is, when the hit/miss indicator 134 of the result 132 indicates a hit), the entry number 138 is the entry number of the entry that matched.
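The placement of the 6-bit lookup result into bits 8:3 of the destination register might look like this. The ordering of the status and entry-number bits within the 6-bit field is an assumption, since the text does not specify it.

```c
#include <stdint.h>

/* The 6-bit result (status plus entry number) is written into bits
 * 8:3 of the destination register, with all other bits zero. Assumed
 * layout: hit/miss in the top bit, state bit next, 4-bit entry number
 * in the low bits of the field. */
static uint32_t pack_result(int hit, int state, int entry) {
    uint32_t field = ((uint32_t)(hit & 1) << 5) |
                     ((uint32_t)(state & 1) << 4) |
                     (uint32_t)(entry & 0xF);
    return field << 3;                  /* place into bits 8:3 */
}

static int unpack_entry(uint32_t dest_reg) {
    return (int)((dest_reg >> 3) & 0xF);
}
```

A 4-bit entry number suffices because the CAM has 16 entries.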
When a miss occurs and the hit/miss indicator 134 thus indicates a miss, the entry number 138 is the entry number of the Least Recently Used (LRU) entry in the CAM array. The state information 136 is only useful for a hit and includes the value in the state field 126 for the entry that hit.

The LRU logic 130 maintains a time-ordered list of CAM entry usage. When an entry is loaded, or matches on a lookup, it is moved to the position of Most Recently Used (MRU); a lookup that misses does not modify the LRU list.

All applications can use the hit/miss indication 134. The entry number 138 and state information 136 provide additional information that may be used by some applications. On a miss, for example, the LRU entry number can be used as a hint for cache eviction. The software is not required to use the hint. The state information 136 is information produced and used only by software. It can differentiate different meanings for a hit, such as unmodified versus modified data. The software can use the information for branch decisions, as an offset into data tables, among other uses.

Other instructions that use and manage the CAM can include: Write[entry, src_reg], opt_tok; Write_State[state_value, entry]; Read_Tag[dest_reg, entry]; Read_State[dest_reg, entry]; and Clear. The Write instruction writes an identifier value in the src_reg to the specified CAM entry. An option token can be used to specify state information. The Read_Tag and Read_State instructions are used for diagnostics, but can also be used in normal functions. The tag value and state for the specified entry are written into the destination register. Reading the tag is useful in the case where an entry needs to be evicted to make room for a new value; that is, the lookup of the new value results in a miss, with the LRU entry number returned as a result of the miss. The read instruction can then be used to find the value that is stored in that entry.
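The LRU behavior described above (loads and lookup hits move an entry to the MRU position; misses leave the list untouched) can be sketched as follows. The array representation of the time-ordered list is an illustrative assumption.

```c
#include <string.h>

#define LRU_N 16

/* lru_list[0] is least recently used; lru_list[LRU_N-1] is most
 * recently used. The list holds a permutation of entry numbers. */
static int lru_list[LRU_N];

static void lru_init(void) {
    for (int i = 0; i < LRU_N; i++)
        lru_list[i] = i;
}

static void lru_touch(int entry) {           /* on load or lookup hit */
    int pos = 0;
    while (lru_list[pos] != entry)
        pos++;
    /* close the gap, then append the entry at the MRU end */
    memmove(&lru_list[pos], &lru_list[pos + 1],
            (LRU_N - 1 - pos) * sizeof(int));
    lru_list[LRU_N - 1] = entry;
}

static int lru_victim(void) {                /* hint returned on a miss */
    return lru_list[0];
}
```

A miss simply reads `lru_victim()` without calling `lru_touch`, matching the rule that misses do not modify the list.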
The Read_Tag instruction eliminates the need to keep the identifier value corresponding to the LRU entry number in another register. The Clear instruction is used to flush all information out of the CAM.

When the CAM is used as a cache tag store, and each entry is associated with a block of data in Local Memory 66, the result of the lookup can be used to branch on the hit/miss indicator 134 and use the entry number 138 as a base pointer into the block in Local Memory 66.

In another embodiment, the state 126 can be implemented as a single lock bit and the result 132 can be implemented to include a status code (instead of the separate indicator and state fields) along with the entry number 138. For example, the code could be defined as a two-bit code, with possible results including a "miss" (code '01'), a "hit" (code '10') and "locked" (code '11'). A return of the miss code would indicate that the lookup value is not in the CAM, and the entry number of the result value is the Least Recently Used (LRU) entry. As discussed above, this value could be used as a suggested entry to be replaced with the lookup value. A hit code would indicate that the lookup value is in the CAM and the lock bit is clear, with the entry number in the result being the entry number of the entry that matched the lookup value. A locked code would indicate that the lookup value is in the CAM and the lock bit 126 is set, with the entry number that is provided in the result again being the entry number of the entry that matched the lookup value.

The lock bit 126 is a bit of data associated with the entry. The lock bit could be set or cleared by software, e.g., using a LOCK or UNLOCK instruction, at the time the entry is loaded, or changed in an already loaded entry. The lock bit 126 can be used to differentiate cases where the data associated with the CAM entry is in flight, or pending a change, as will be discussed in further detail later.
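A minimal model of the two-bit status code variant, combining the match indication with the entry's lock bit; the numeric values follow the codes given in the text.

```c
/* Two-bit status codes: '01' miss, '10' hit, '11' locked. */
enum cam_status { CAM_MISS = 0x1, CAM_HIT = 0x2, CAM_LOCKED = 0x3 };

/* Derive the status code from whether an entry matched the lookup
 * value and, if so, the state of its lock bit 126. */
static enum cam_status cam_status_code(int matched, int lock_bit) {
    if (!matched)
        return CAM_MISS;            /* entry number is the LRU hint */
    return lock_bit ? CAM_LOCKED : CAM_HIT;
}
```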
As mentioned earlier, a context pipe stage that uses critical data is the only ME that uses that critical data. Therefore, the replacement policy for the CAM entries is to replace the LRU only on CAM misses. On the other hand, a functional pipeline (like the pipeline 114 of FIG. 3) performs the same function on multiple MEs. In a functional pipeline, therefore, a given ME is required to evict all critical data to external memory before it exits a stage that uses critical data, and also must ensure that the CAM is cleared prior to any threads using the CAM.

Before a thread uses the critical data, it searches the CAM using a critical data identifier such as a memory address as a lookup value. As described earlier, the search results in one of three possibilities: a "miss", a "hit" or a "lock".

If a miss is returned, then the data is not saved locally. The thread reads the data from external memory (that is, from the SRAM 38) to replace the LRU data. It evicts LRU data from local memory (SRAM controller cache, or local memory 66) back to external memory, optionally locks the CAM entry and issues a read to get the new critical data from external memory. In certain applications, as will be described later, the lock is asserted to indicate to other threads that the data is in the process of being read into local memory, or to indicate to the same thread (the thread that initiated the read) that the memory read is still in progress. Once the critical data is returned, the thread awaiting the data processes the data, makes any modifications to the data, writes it to local memory, updates the entry from which LRU data was evicted with the new data and unlocks the CAM entry.

If the result is a lock, the thread assumes that another ME thread is in the process of reading the critical data and that it should not attempt to read the data. Instead, it tests the CAM at a later time and uses the data when the lock is removed.
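The miss path just described (lock the entry, read from external memory, publish the data locally, unlock) can be sketched as follows, collapsing the asynchronous SRAM read into a synchronous call and using a single entry for clarity; all names are illustrative.

```c
#include <stdint.h>

/* LOCKED marks a read in flight; VALID marks data present locally. */
enum { EMPTY, LOCKED, VALID };

typedef struct { uint32_t id; int status; uint32_t data; } crit_entry;

static int sram_reads;   /* counts external (SRAM) reads */

/* Returns a pointer to the local copy of the critical data, reading
 * from SRAM only on a miss; later callers hit and modify locally. */
static uint32_t *acquire(crit_entry *e, uint32_t id, uint32_t sram_value) {
    if (e->status == VALID && e->id == id)
        return &e->data;            /* hit: data already local */
    e->id = id;
    e->status = LOCKED;             /* miss: lock, issue SRAM read */
    sram_reads++;
    e->data = sram_value;           /* read returns */
    e->status = VALID;              /* unlock: future lookups hit */
    return &e->data;
}
```

This is the "folding" effect in miniature: one external read serves several successive modifications.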
When the result is a hit, the critical data resides in local memory.

Specific examples of CAM use will now be described with reference to FIGS. 5 through 8. As discussed above, and as shown in FIG. 3, the processor 12 can be programmed to use one of the microengines 20 as the QM 106. The CAM 60 in the QM 106 serves as a tag store holding the tags of queue descriptors that are cached by the SRAM controller 40.

The QM 106 receives enqueue requests from the set of microengines functioning as the receive functional pipeline 114. The receive pipeline 114 is programmed to process and classify data packets received by one of the network devices 14, 16 (FIG. 1), e.g., the physical layer device 14. The enqueue requests specify which output queue an arriving packet should be sent to. The transmit scheduler 108 sends dequeue requests to the QM 106. The dequeue requests specify the output queue from which a packet is to be removed for transmittal to a destination via one of the network devices 14, 16, e.g., the switch fabric 16.

An enqueue operation adds information that arrived in a data packet to one of the output queues and updates the corresponding queue descriptor. A dequeue operation removes information from one of the output queues and updates the corresponding queue descriptor, thereby allowing the network device 16 to transmit the information to the appropriate destination.

Referring to FIG. 5A, an example of "n" transmit queues 150 and their corresponding queue descriptors 152 residing in external memory (SRAM 38) is shown. Each output queue 150 includes a linked list of elements 154, each of which has a pointer with the address of the next element in the queue. Each element 154 also includes a pointer that points to information that is stored elsewhere and that the element represents. Typically, the pointer of the last element in the queue 150 contains a null value.
The queue descriptor 152 includes an end-of-packet (EOP) indicator 156, a segment count 158, a head pointer 160, a tail pointer 162 and a frame count 164. The descriptor 152 may also include other queue parameters (not shown). The head pointer 160 points to the first element of the transmit queue 150, and the tail pointer 162 points to the last element of the transmit queue 150. The segment count 158 identifies the number of elements in the transmit queue 150.

Referring now to FIG. 5B, executing enqueue and dequeue operations for a large number of transmit queues 150 in the SRAM memory 38 at high-bandwidth line rates can be accomplished by storing some of the queue descriptors 152 in a cache 170 in the SRAM controller 40. The ME 20 executing as the queue manager 106 uses the identifiers 122 of the entries 120 in its CAM 60 to identify the memory addresses of the sixteen queue descriptors 152 most recently used in enqueue or dequeue operations, that is, the cached queue descriptors. The cache 170 stores the corresponding queue descriptors 152 (the EOP value 156, the segment count 158, the head pointer 160, the tail pointer 162 and the frame count 164) stored at the addresses identified in the tag store (CAM 60).

The queue manager 106 issues commands to return queue descriptors 152 to memory 38 and fetch new queue descriptors 152 from memory such that the queue descriptors stored in the cache 170 remain coherent with the addresses in the tag store 60. The queue manager 106 also issues commands to the SRAM controller 40 to indicate which queue descriptor 152 in the cache 170 should be used to execute the command. The commands that reference the head pointer 160 or tail pointer 162 of a queue descriptor 152 in the cache 170 are executed in the order in which they arrive at the SRAM controller 40.
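The queue structures of FIG. 5A might be modeled as follows. Field widths and the element layout are assumptions; enqueue updates the tail and segment count, while dequeue pops the head.

```c
#include <stdint.h>
#include <stddef.h>

/* Linked-list element 154: a next pointer (NULL at the tail) and a
 * pointer to the buffered information the element represents. */
typedef struct q_elem {
    struct q_elem *next;
    void          *data;
} q_elem;

/* Queue descriptor 152, with illustrative field widths. */
typedef struct {
    uint8_t  eop;          /* end-of-packet indicator 156 */
    uint32_t seg_count;    /* segment count 158 */
    q_elem  *head;         /* head pointer 160 */
    q_elem  *tail;         /* tail pointer 162 */
    uint32_t frame_count;  /* frame count 164 */
} queue_descriptor;

static void enqueue(queue_descriptor *qd, q_elem *e) {
    e->next = NULL;
    if (qd->tail)
        qd->tail->next = e;
    else
        qd->head = e;      /* queue was empty */
    qd->tail = e;
    qd->seg_count++;
}

static q_elem *dequeue(queue_descriptor *qd) {
    q_elem *e = qd->head;
    if (e) {
        qd->head = e->next;
        if (!qd->head)
            qd->tail = NULL;
        qd->seg_count--;
    }
    return e;
}
```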
Locating the cache 170 of queue descriptors 152 at the memory controller 40 allows for low latency access to and from the cache 170 and the memory 38. Also, having the control structure for queue operations in a programming engine can allow for flexible high performance while using existing microengine hardware.

The threads associated with the QM 106 execute in strict order. The threads use local inter-thread signaling to maintain strict order. To ensure that the QM 106 keeps up with the incoming line rate, each thread performs one enqueue and one dequeue operation in a time slot equal to the minimum frame arrival time.

FIG. 6 illustrates an exemplary queue operation 180 (representing either an enqueue or dequeue operation) performed by the QM 106. The QM 106 receives 182 a request for a queue operation. The request is received from the CA context pipe-stage ME when it is an enqueue request, and is received from the TX scheduler context pipe-stage ME when it is a request for a dequeue operation. The QM 106 reads 184 a queue number from the request.

The QM 106 then uses its CAM to detect temporal dependencies between the queue specified in the request and the last 16 queues to which the QM 106 performed such an operation. Thus, the QM 106 performs a CAM lookup 186 based on the queue number identified in the request. If there is a dependency, i.e., the QM thread detects 188 a CAM hit, the latency of reading a queue descriptor is eliminated because the CAM hit indicates that the descriptor corresponding to the queue number is currently maintained in the queue descriptor cache 170 (FIG. 5B). In the event that a hit occurs, the QM 106 proceeds to execute an instruction 190 that commands the SRAM controller 40 to perform the requested operation. If, at 188, it is determined that the CAM search results in a miss, the entry number of the least recently used CAM entry is returned to the QM 106.
There is a direct mapping between the CAM entry and a cache entry (queue descriptor). In other words, an LRU CAM entry "n" indicates that the cache entry "n" should be evicted. Therefore, the QM 106 evicts 192 from the cache the queue descriptor corresponding to the queue number stored in the LRU CAM entry. Once the cache entry is evicted, the QM 106 reads 194 the "new" queue descriptor (that is, the queue descriptor of the queue number in the request) into the cache from the SRAM. The new queue descriptor includes the linked list head pointer (for dequeue) and tail pointer (for enqueue), and a count that indicates the number of frames or buffers on the queue (as shown in FIGS. 5A-5B). The QM 106 also stores 196 the queue number of the new queue descriptor in the CAM entry that had been identified as the LRU entry, to replace the number of the evicted queue descriptor. The QM 106 then executes an instruction 190 that commands the SRAM controller 40 to perform the requested operation.

The SRAM controller 40 performs the linked list operation for enqueue or dequeue. When an operation of either type (enqueue or dequeue) is performed, the QM 106 sends a message to the TX scheduler 108. After a dequeue operation, the QM 106 passes a transmit request to the TX data context pipe-stage 110.

Another stage that uses the CAM 60 is the CRC processing pipe stage 96a. The ME 20 in this stage of the receive functional pipeline 114 uses its internal CAM 60 to maintain coherency of the CRC residue (in the re-assembly state table) between the eight threads executing the CRC processing pipe stage 96a.

Referring now to FIG. 7, a CRC pipe-stage program flow 200, including the use of the CAM 60 in support of the function, is shown. The CRC stage 96a is entered only when the previous ME has indicated (via the next neighbor line 21a (FIG. 2)) that it has exited the stage. This ensures that the ME will access the most recent critical data (CRC residue).
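Putting the hit and miss paths of FIG. 6 together, a software model of the tag-store/cache interaction might look like this. The arrays stand in for the CAM tag store, the descriptor cache 170 and the SRAM 38, a plain word stands in for a whole descriptor, and all names are hypothetical.

```c
#include <stdint.h>

#define NQ 16

static uint32_t cam_tags[NQ];   /* queue numbers currently cached */
static uint32_t cache[NQ];      /* stand-in for cached descriptors */
static uint32_t sram[64];       /* stand-in for descriptors in SRAM */

/* Returns the cache index holding the descriptor for queue_num.
 * On a hit the descriptor is already cached (direct mapping between
 * CAM entry and cache entry). On a miss, the LRU entry is evicted to
 * SRAM, the new descriptor is fetched, and the tag is replaced. */
static int qm_lookup(uint32_t queue_num, int lru) {
    for (int i = 0; i < NQ; i++)
        if (cam_tags[i] == queue_num)
            return i;                     /* hit: no SRAM read */
    sram[cam_tags[lru]] = cache[lru];     /* step 192: evict */
    cache[lru] = sram[queue_num];         /* step 194: fetch new */
    cam_tags[lru] = queue_num;            /* step 196: replace tag */
    return lru;
}
```

The enqueue or dequeue command (step 190) would then be issued against the returned cache entry.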
It is also critical that, throughout this pipe-stage, all threads execute in strict order to ensure that the CRC is calculated correctly.

Because the CRC stage 96a uses the CAM 60, it first clears 202 the CAM of any data still in the CAM from a previous pipe-stage. It reads 204 the port type and determines 206 if it has been assigned an ATM cell. If the cell is not an ATM cell (that is, it is some other type, such as Ethernet or POS), the ME performing the CRC stage passes 208 the cell through without any processing. If the cell is an ATM cell, the ME 20 performs the CRC processing. The processing includes the following activities: reading the CRC residue, ATM type and SOP/EOP state in SRAM; determining if the cell is carrying an SOP, body or EOP; validating that the VC is carrying AAL5 cells and, if so, performing the CRC computation; and updating the CRC residue and EOP-SOP status in SRAM.

The CRC computation is performed using the CRC unit 72 (FIG. 2) in the ME 20. The CRC computation must be performed in strict order to ensure that the CRC for cells that belong to the same VC is computed with the correct CRC residue.

The CRC processing is divided into a read phase and a modify/write phase. The CAM 60 is used in both phases. In the first phase, the CAM 60 is used to decide whether a thread should read the residue/type fields from SRAM 38 or use the result from a previous thread stored in the Local Memory 66 (FIG. 2). The first phase begins with a given thread searching 210 the CAM using the pointer to the re-assembly state. If the thread detects 212 a CAM miss, the thread writes 214 a CAM entry with the re-assembly pointer and state information to lock the entry, and issues a read to obtain the CRC residue and AAL type from the SRAM memory 38. If, at 212, the thread detects a hit, it does not issue a read.
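The phase-1 decision (steps 210-214) can be sketched as follows; the entry structure and the single-entry simplification are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* One CAM entry keyed by the re-assembly state pointer. */
typedef struct {
    uint32_t ptr;      /* pointer to the re-assembly state */
    bool     locked;   /* read in flight */
    bool     valid;
} crc_cam_entry;

/* Returns true if this thread must issue the SRAM read for the CRC
 * residue and AAL type; on a hit the residue will already be (or soon
 * be) available in Local Memory. */
static bool phase1(crc_cam_entry *e, uint32_t reasm_ptr) {
    if (e->valid && e->ptr == reasm_ptr)
        return false;           /* step 212 hit: no read issued */
    e->ptr = reasm_ptr;         /* step 214: claim and lock entry */
    e->locked = true;
    e->valid = true;
    return true;                /* issue read for residue/type */
}
```

Phase 2 then either moves the returned read data into Local Memory and unlocks the entry (if this thread held the lock) or reads the residue directly from Local Memory.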
When the thread receives 216 the appropriate event signaling, that is, an event signal indicating that the previous thread has completed processing, the thread wakes and begins phase 2 processing. It searches 218 the CAM using the same re-assembly pointer. If the thread had issued a read and determines 220 a locked status for a matched CAM entry, the thread moves 222 the read result in the transfer registers to the local memory. The thread that moves the result also unlocks the entry, thereby ensuring a hit for future CAM lookups for that particular pointer. Otherwise, if the CAM entry is not locked, then a hit has occurred, and the thread simply reads 224 the corresponding information, that is, the residue and type, from the Local Memory.

After the second phase CAM search, each thread validates that the VC is carrying AAL5 by examining the type field from the VC table. For an AAL5 type, the thread computes 226 the CRC over the cell. If the type is not AAL5, the cell is handed off to an exception handler, or discarded, depending on the implementation.

If the thread determines 228 that the PTI bits in the ATM header indicate that the cell is an EOP cell, the thread updates 230 the re-assembly state by setting the CRC residue to all zeroes and setting the SOP bit to a one. If the cell is not an EOP cell, the thread updates 232 the state with the new residue and sets SOP to zero. It saves 235 the updated CRC residue and SOP in the Local Memory for use by other threads and, according to its writeback cache policy, also writes the CRC residue and SOP back to the re-assembly state in the SRAM 38. The thread passes 236 the SOP, EOP and body status to the next (packet processing) stage. It is important that other stages in the RX pipeline know if the ATM cell contains an EOP, SOP or body.
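The re-assembly state update of steps 228-232 reduces to a small branch; the structure layout is an assumption.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-VC re-assembly state fields touched by the CRC stage. */
typedef struct {
    uint32_t crc_residue;
    int      sop;
} reasm_state;

static void update_state(reasm_state *st, bool eop, uint32_t new_residue) {
    if (eop) {
        st->crc_residue = 0;   /* step 230: residue all zeroes */
        st->sop = 1;           /* next cell starts a new packet */
    } else {
        st->crc_residue = new_residue;   /* step 232 */
        st->sop = 0;
    }
}
```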
For ATM, the settings of the SOP and EOP bits indicate whether an entire cell was received (as opposed to an entire packet), so the CRC threads must use the EOP bit status provided in the header PTI field. The PTI bits only support EOP, so when an EOP is detected, the CRC thread sets an SOP bit in its section of the re-assembly state table, indicating to the next thread that it has an SOP. Each time the CRC thread reads the re-assembly state, it reads the SOP bit, and if it is set, and the PTI bits in the ATM header indicate no EOP, it clears the SOP bit. Because other stages do not read the CRC thread's re-assembly state area, the CRC thread also passes the EOP/SOP status down the pipeline. Once the CRC threads have completed the CRC calculation and the re-assembly state table is updated, the threads are ready to move on to the next pipe-stage. When a thread completes its CRC calculation and issues its SRAM write of the residue/type, it also signals the thread of the next ME indicating that it can start its CRC pipe-stage. It is important that the signaling ensures that the next ME is not provided a signal until it can be assured that any pending residues will be written before the next ME issues its residue reads. It will be understood that, while the implementation described thus far uses the CAM 60 to reduce the number of read accesses (via "folding", as discussed earlier), the strict sequential ordering of the execution of context threads in a given stage is maintained not through the use of the CAM, but instead by using local inter-thread signaling and by ensuring that read reference and modification activity completes before that same data is needed by successive threads. It will be appreciated, however, that the CAM 60 could be used to maintain coherency and correct packet processing sequence as well.
For example, say threads are handling two successive packets that are in the same flow (or are associated with the same queue number) and access the same SRAM location. Because packet arrival rates are faster than SRAM access speeds, the thread handling the second packet will be ready to access the data before the SRAM read and modify activities of the thread handling the first (earlier) packet have completed. In this situation, the software-controlled CAM cache implementation can be used to recognize the dependency and to ensure that the most current information is always used. Thus, each thread uses the CAM 60 to do multiple compares in parallel using the CAM Lookup instruction, with a source register providing the flow number or queue number as the lookup value, as described earlier. If a miss results, the thread commences the SRAM read and allocates a CAM entry into which the thread places the flow number. If the flow is already in the CAM, a hit indicator is returned along with a unique pointer value (for example, which entry number in the CAM matched). The thread that gets a hit in the CAM can obtain the latest copy of the data from local memory (cache in SRAM controller 40, or ME Local Memory 66) without having to do an SRAM read. When a thread loads a flow number into a CAM entry, it also stores state information in the entry to enable subsequent thread lookups to determine either that a) the SRAM read has been started, but is not yet completed (it is "in-flight"); or b) the SRAM read has been completed, and the data is valid. If the "in-flight" status is determined, the subsequent thread knows that it should not start a read, but that it cannot yet use the read data. It can continue to test the status of the entry until it determines that the status has been changed to reflect valid data. Other embodiments are within the scope of the following claims.
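The in-flight/valid status handling just described can be sketched as follows. A minimal illustration, assuming a dict-based CAM model; the names and status strings are not IXP identifiers.

```python
# Sketch of the CAM entry status logic: a miss allocates the entry and
# starts the read; a hit on an "in-flight" entry must neither start a
# duplicate read nor use the data yet; a hit on a "valid" entry may use
# the cached data from local memory.
IN_FLIGHT, VALID = "in-flight", "valid"

def on_lookup(cam_entries, flow, start_read):
    """Returns the action taken for this lookup."""
    entry = cam_entries.get(flow)
    if entry is None:                   # miss: allocate entry, start the read
        cam_entries[flow] = IN_FLIGHT
        start_read(flow)
        return "read-started"
    if entry == IN_FLIGHT:              # hit, but the read has not completed
        return "wait"                   # caller re-tests until the entry is valid
    return "use-data"                   # hit with valid data: no SRAM read
```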
A flexible AES instruction set for a general purpose processor is provided. The instruction set includes instructions to perform a 'one round' pass for AES encryption or decryption and also includes instructions to perform key generation. An immediate may be used to indicate the round number and the key size for key generation for 128/192/256-bit keys. The flexible AES instruction set enables full use of pipelining capabilities because it does not require tracking of implicit registers.
A processor to perform encryption comprising: a plurality of cores; a level 1 instruction cache (202) to cache instructions; a level 1 data cache (204); a bus interface unit (200); a plurality (304) of 128-bit registers; a decode unit (206) to decode the instructions from the level 1 instruction cache including a first Advanced Encryption Standard, AES, encryption round instruction corresponding to an AES encryption having at least one round, wherein the first AES encryption round instruction is to use a 128-bit source/destination register (306) of the plurality of 128-bit registers and another register (308) of the plurality of 128-bit registers; and an execution unit (210) coupled to the decode unit (206), the decode unit (206) being responsive to the first AES encryption round instruction to cause the execution unit (210) to perform operations for an AES single encryption round including a byte substitution (404, 502), a shift rows (406, 504), and an exclusive OR (410, 506) and to store (412, 508) a result in the 128-bit source/destination register (306).

The processor of claim 1, wherein the AES encryption comprises a plurality of rounds and wherein the first AES encryption round instruction (AESENCRYPTLastRound) is to perform a last encryption round of the plurality of encryption rounds and wherein the operations performed by the execution unit responsive to the decoding of the first AES encryption round instruction omit a mix columns operation.

The processor of claim 2, wherein the first AES encryption round instruction belongs to an AES instruction set further comprising a second AES encryption round instruction to perform at least one of the plurality of encryption rounds other than the last encryption round and wherein the decode unit is responsive to the second AES encryption instruction to cause the execution unit (210) to perform operations including a mix columns operation and to store a result in a register of the plurality (304) of 128-bit registers.

The
processor of any one of claims 1 to 3, wherein the decode unit is to decode instructions from the level 1 instruction cache corresponding to an AES decryption, the stored instructions comprising a first AES decryption instruction that is to use a 128-bit source/destination register (306) of the plurality of 128-bit registers and another 128-bit register (308) of the plurality of 128-bit registers and wherein the decode unit is responsive to the first AES decryption instruction to cause the execution unit (210) to perform operations for an AES single decryption round including an inverse byte substitution (702), an inverse shift rows (704), and an exclusive OR (706) and store a result in the 128-bit source/destination register (306).

The processor of claim 4, wherein the decode unit is to decode instructions from the level 1 instruction cache comprising a second AES decryption instruction that is to use a 128-bit source/destination register (306) of the plurality of 128-bit registers and another 128-bit register (308) of the plurality of 128-bit registers and wherein the decode unit is responsive to the second AES decryption instruction to cause the execution unit (210) to perform operations for an AES single decryption round including an inverse mix columns operation and store a result in the 128-bit source/destination register (306).

The processor of any one of claims 1 to 3, wherein the AES encryption and corresponding instruction(s) is capable of using any one of a 128-bit round key, a 192-bit round key, and a 256-bit round key, or the processor of any one of claims 4 to 5, wherein the AES encryption and the AES decryption are capable of using any one of a 128-bit round key, a 192-bit round key, and a 256-bit round key.

The processor of any one of claims 1 to 3, wherein the processor further comprises a microcode Read Only Memory (ROM) (214) to store micro-operations to implement the AES encryption, or the processor of any one of claims 4 to 5, wherein the processor
further comprises a microcode Read Only Memory (ROM) (214) to store micro-operations to implement the AES decryption.

A processor (101) to perform decryption comprising: a plurality of cores; a level 1 instruction cache (202) to cache instructions; a level 1 data cache (204); a bus interface unit (200); a plurality (304) of 128-bit registers; a decode unit (206) to decode the instructions from the level 1 instruction cache including a first Advanced Encryption Standard, AES, decryption round instruction corresponding to an AES decryption having at least one round, wherein the first AES decryption round instruction is to use a 128-bit source/destination register (306) of the plurality of 128-bit registers and another 128-bit register (308) of the plurality of 128-bit registers; and an execution unit (210) coupled to the decode unit, the decode unit (206) being responsive to the first AES decryption round instruction to cause the execution unit (210) to perform operations for an AES single decryption round including an inverse byte substitution (702), an inverse shift rows (704), and an exclusive OR (706) and store a result in the 128-bit source/destination register (306).

The processor of claim 8, wherein the AES decryption comprises a plurality of rounds and wherein the first AES decryption round instruction (AESDECRYPTLastRound) is to perform a last decryption round of the plurality of decryption rounds and wherein the operations performed by the execution unit responsive to the decoding of the first AES decryption round instruction omit an inverse mix columns operation.

The processor of claim 9, wherein the first AES decryption round instruction belongs to an AES instruction set comprising a second AES decryption round instruction to perform at least one of the plurality of decryption rounds other than the last decryption round and wherein the decode unit is responsive to the second AES decryption instruction to cause the execution unit (210) to perform operations
including an inverse mix columns operation and to store a result in a register of the plurality (304) of 128-bit registers.

The processor of any one of claims 8 to 10, wherein the AES decryption is capable of using any one of a 128-bit round key, a 192-bit round key, and a 256-bit round key.

The processor of any one of claims 8 to 12, further comprising a microcode Read Only Memory (ROM) to store micro-operations to implement the AES decryption.

A system comprising: a memory controller to control communication with a memory; and a processor as claimed in any one of claims 1 to 12.

The system of claim 13, wherein the memory controller (106) is to control communication with a double data rate memory.

A mobile computer comprising: a random access memory (RAM); a network interface controller; and a processor as claimed in any one of claims 1 to 12, the processor being coupled to the RAM.

The mobile computer of claim 15, further comprising an input/output controller.

The mobile computer of claim 16, further comprising a storage device coupled to the input/output controller.
FIELD

This disclosure relates to cryptographic algorithms and in particular to the advanced encryption standard (AES) algorithm.

BACKGROUND

Cryptology is a tool that relies on an algorithm and a key to protect information. The algorithm is a complex mathematical algorithm and the key is a string of bits. There are two basic types of cryptology systems: secret key systems and public key systems. A secret key system, also referred to as a symmetric system, has a single key ("secret key") that is shared by two or more parties. The single key is used to both encrypt and decrypt information.

The Advanced Encryption Standard (AES), published by the National Institute of Standards and Technology (NIST) as Federal Information Processing Standard (FIPS) 197, is a secret key system. AES is a symmetric block cipher that can encrypt and decrypt information.

Encryption (cipher) performs a series of transformations using the secret key (cipher key) to transform intelligible data referred to as "plaintext" into an unintelligible form referred to as "cipher text". The transformations in the cipher include: (1) adding a round key (a value derived from the cipher key) to the state (a two-dimensional array of bytes) using an Exclusive OR (XOR) operation; (2) processing the state using a non-linear byte substitution table (S-Box); (3) cyclically shifting the last three rows of the state by different offsets; and (4) taking all of the columns of the state and mixing their data (independently of one another) to produce new columns.

Decryption (inverse cipher) performs a series of transformations using the cipher key to transform the "cipher text" blocks into "plaintext" blocks of the same size. The transformations in the inverse cipher are the inverse of the transformations in the cipher.

The Rijndael algorithm is specified in the AES standard to process data blocks of 128 bits, using cipher keys with lengths of 128, 192 and 256 bits.
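Transformation (1) above, adding the round key to the state, is a bytewise XOR and can be sketched directly. A minimal illustration on a 16-byte state; since XOR is its own inverse, the identical step serves the inverse cipher.

```python
# Minimal sketch of the AddRoundKey transformation: XOR each byte of the
# 16-byte state with the corresponding byte of the 128-bit round key.
def add_round_key(state, round_key):
    assert len(state) == 16 and len(round_key) == 16
    return bytes(s ^ k for s, k in zip(state, round_key))
```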
The different key lengths are typically referred to as AES-128, AES-192 and AES-256.

The AES algorithm transforms the plaintext into cipher text, or cipher text into plaintext, in 10, 12, or 14 consecutive rounds, with the number of rounds dependent on the length of the key.

BRIEF DESCRIPTION OF THE DRAWINGS

Features of embodiments of the claimed subject matter will become apparent as the following detailed description proceeds, and upon reference to the drawings, in which like numerals depict like parts, and in which:

Fig. 1 is a block diagram of a system that includes an embodiment of a flexible architecture and instruction for performing AES encryption and decryption in a general purpose processor according to the principles of the present invention;

Fig. 2 is a block diagram of an embodiment of the processor shown in Fig. 1;

Fig. 3 is a block diagram that includes an embodiment of the execution unit shown in Fig. 2 for performing AES encryption and decryption according to the principles of the present invention;

Fig. 4 is a flow graph illustrating the flow of an aes encrypt round instruction through the execution unit shown in Fig. 3;

Fig. 5 is a flow graph illustrating the flow of an aes encrypt last round instruction through the execution unit shown in Fig. 3;

Fig. 6 is a flow graph illustrating the flow of an aes decrypt round instruction through the execution unit shown in Fig. 3;

Fig. 7 is a flow graph illustrating the flow of an aes decrypt last round instruction through the execution unit shown in Fig. 3; and

Fig. 8 illustrates an embodiment of an aes round instruction with an immediate byte that may be used to generate round keys and perform encryption and decryption.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments of the claimed subject matter, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.
Accordingly, it is intended that the claimed subject matter be viewed broadly, and be defined only as set forth in the accompanying claims.

DETAILED DESCRIPTION

The Advanced Encryption Standard (AES) algorithm is a compute intensive algorithm that is typically performed in software or in a special purpose processor. Thus, encryption is typically only used for encrypting a subset of the information stored in computers, for example, information that may be classified as "top secret". However, there is a need to encrypt more of the information that is stored on computers. For example, if all information stored on a mobile computer was encrypted, this information would be protected in the event that the mobile computer was stolen.

AES is a block cipher that operates on a 128-bit block of bits with a key size of 128, 192 or 256 bits. A sequence of operations is iterated for a number of rounds (10, 12 or 14) based on the key size.

The generation of the keys for each round may be performed on the fly (that is, just prior to each round) using implicit 128-bit registers to store the round key. However, the use of implicit registers may reduce the performance of x86 register-based processors due to dependency on a result of a previous instruction.

There are some applications, for example, an application that processes network packets and may have different keys per flow, that benefit from on-the-fly key generation. There may be other applications where greater performance is required with a single key, for example, a single key that is used for encrypting/decrypting the contents of a disk drive. Thus, there arises a need for flexibility of key generation. An embodiment of the invention provides a flexible architecture and instruction for performing AES encryption and decryption in a general purpose processor.

Fig.
1 is a block diagram of a system 100 that includes an embodiment of a flexible architecture and instruction for performing AES encryption and decryption in a general purpose processor according to the principles of the present invention. The system 100 includes a processor 101, a Memory Controller Hub (MCH) (or Graphics Memory Controller Hub (GMCH)) 102 and an Input/Output (I/O) Controller Hub (ICH) 104. The MCH 102 includes a memory controller 106 that controls communication between the processor 101 and memory 108. The processor 101 and MCH 102 communicate over a system bus 116.

The processor 101 may be any one of a plurality of processors such as a single core Intel® Pentium IV® processor, a single core Intel Celeron processor, an Intel® XScale processor or a multi-core processor such as an Intel® Pentium D, Intel® Xeon® processor, or Intel® Core® Duo processor, or any other type of processor.

The memory 108 may be Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Synchronized Dynamic Random Access Memory (SDRAM), Double Data Rate 2 (DDR2) RAM or Rambus Dynamic Random Access Memory (RDRAM) or any other type of memory.

The ICH 104 may be coupled to the MCH 102 using a high speed chip-to-chip interconnect 114 such as Direct Media Interface (DMI). DMI supports 2 Gigabit/second concurrent transfer rates via two unidirectional lanes.

The ICH 104 may include a storage I/O controller 110 for controlling communication with at least one storage device 112 coupled to the ICH 104. The storage device may be, for example, a disk drive, Digital Video Disk (DVD) drive, Compact Disk (CD) drive, Redundant Array of Independent Disks (RAID), tape drive or other storage device.
The ICH 104 may communicate with the storage device 112 over a storage protocol interconnect 118 using a serial storage protocol such as Serial Attached Small Computer System Interface (SAS) or Serial Advanced Technology Attachment (SATA).

The processor 101 includes an AES function 103 to perform aes encryption and decryption operations. The AES function 103 may be used to encrypt or decrypt information stored in memory 108 and/or stored in the storage device 112.

Fig. 2 is a block diagram of an embodiment of the processor 101 shown in Fig. 1. Processor 101 includes a fetch and decode unit 206 for decoding processor instructions received from the Level 1 (L1) instruction cache 202. Data to be used for executing the instruction may be stored in register file 208. In one embodiment, the register file 208 includes a plurality of 128-bit registers, which are used by an aes instruction to store data for use by the aes instruction.

In one embodiment, the register file is a group of 128-bit registers similar to the 128-bit MMX registers provided in Intel Pentium MMX Processors that have a Streaming SIMD (Single Instruction Multiple Data) Extension (SSE) Instruction set. In a SIMD processor, data is processed in 128-bit blocks with one 128-bit block loaded at one time.

The fetch and decode unit 206 fetches macroinstructions from the L1 instruction cache 202, decodes the macroinstructions and breaks them into simple operations called micro operations (µops) that may be stored in microcode Read Only Memory (ROM) 214. The execution unit 210 schedules and executes the micro operations. In the embodiment shown, the aes function 103 in the execution unit 210 includes micro operations for an aes instruction set. The retirement unit 212 writes the results of the executed instructions to registers or memory.
A round key 214 used by the aes instruction may be stored in L1 data cache 204 and loaded into the execution unit 210 for use by the micro operations to execute an aes instruction in the aes instruction set. Storing the round key 214 in the data cache 204 protects the round key from side channel attacks, for example, attempts to obtain the round key in order to get access to encrypted information stored in the system 100.

Fig. 3 is a block diagram that illustrates an embodiment of the execution unit 210 shown in Fig. 2 for performing AES encryption and decryption according to the principles of the present invention. Fig. 3 will be described in conjunction with Fig. 2.

After an aes instruction has been decoded by the fetch and decode unit 206, the execution of the aes instruction by the execution unit 210 involves performing the micro operations associated with the aes instruction, which may be stored in the microcode ROM 214.

A flexible AES instruction set according to an embodiment of the present invention allows a programmer to make performance tradeoffs with respect to the amount of data to be processed, and memory bandwidth and capacity.

Some applications may continuously use the same key. In applications in which performance is very important, a tradeoff can be made in terms of pre-computing a key schedule for the key (that is, a round key per round) once and storing it in memory. Other applications may want to minimize the amount of memory used to store the key schedule while still achieving good performance on multi-block operations. For such applications the key schedule may be pre-computed for multiple blocks before the blocks are processed.
The memory footprint may be further minimized by only storing the cipher key or the inverse cipher key, and then deriving the other as necessary at the expense of some performance.

In an x86-type processor, the area and the number of execution ports that are available for AES round key operations and AES scheduling operations constrain the performance of an AES instruction. In a system in which key expansion is required for every block encryption, performance may be improved by placing the AES scheduling operations and the AES round key operations on separate execution ports. However, separate execution ports and the additional area for controlling the separate ports may not be available in an x86-type processor.

In an embodiment, an aes instruction set is provided that includes separate aes instructions for performing an encryption round, a decryption round, an encryption last round, a decryption last round and for computing an encryption round key or a decryption round key. In one embodiment there are six aes instructions in the aes instruction set. Each aes round instruction has a unique operation code (opcode). The aes round instructions in the aes instruction set for one embodiment for a fixed width round key (for example, 128-bits) are shown below in Table 1.

Table 1

AESENCRYPTRound xmmsrcdst xmm
  Input: data (=destination), round key
  Output: data after transformation through the AES round using the round key

AESENCRYPTLastRound xmmsrcdst xmm
  Input: data (=destination), round key
  Output: data after transformation through the AES last round using the round key

AESDECRYPTRound xmmsrcdst xmm
  Input: data (=destination), round key
  Output: data after transformation through the AES round using the round key

AESDECRYPTLastRound xmmsrcdst xmm
  Input: data (=destination), round key
  Output: data after transformation through the AES last round using the round key

AESNextRoundKey xmmsrc1,2 xmmdst (immediate)
  Input: low 128 bits of key, high 128 bits of key, indicator for round number
  Output: next round key derived from the input

AESPreviousRoundKey xmmsrc1,2 xmmdst (immediate)
  Input: low 128 bits of key, high 128 bits of key, indicator for round number
  Output: previous round key derived from the input

The aes instruction set includes four aes round instructions (encrypt, decrypt, encrypt last round, decrypt last round) and two aes round key instructions (next round key and previous round key). The aes round instructions in the aes instruction set include single round operations to perform encryption and decryption round operations that are to be used for all rounds but the last round. For example, in the AESENCRYPTRound single round instruction in Table 1, the input data is stored in a 128-bit register (xmmsrcdst) and the round key is stored in another 128-bit register (xmm). This instruction performs an aes round operation on input data (source) that is stored in the 128-bit xmmsrcdst register and overwrites the input data stored in the 128-bit xmmsrcdst register with the result of the execution of the round operation. Thus xmmsrcdst first stores the input data and later stores the result of the aes round operation.

The aes instruction set also includes an aes decryption instruction for a last decryption round and an aes encryption instruction for a last encryption round. For example, in the AESENCRYPTLastRound single round instruction in Table 1, the input data is stored in a 128-bit register (xmmsrcdst) and the round key is stored in another 128-bit register (xmm). This instruction performs an aes round operation on input data (source) that is stored in the xmmsrcdst register and overwrites the input data stored in the xmmsrcdst register with the result of the execution of the round operation. Thus xmmsrcdst first stores the input data and later stores the result of the round operation.
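The destructive source/destination convention can be modeled as a tiny sketch. This is a toy register-file model for illustration only; regs and round_fn are assumptions, with round_fn standing in for the actual AES round transformation.

```python
# Toy model of the xmmsrcdst convention: the instruction reads its input
# operand from the source/destination register and then overwrites that
# same register with the round result. The key register is unchanged.
def aes_round_insn(regs, srcdst, keyreg, round_fn):
    regs[srcdst] = round_fn(regs[srcdst], regs[keyreg])
```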
The xmm register stores the round key for the round operation.

In another embodiment, the round and last round instructions, for example, AESENCRYPTRound and AESENCRYPTLastRound, may take the input from memory (m/128) instead of from the register file 304; for example, the aes round instruction may be AESENCRYPTRound xmmsrcdst m/128.

The other two aes instructions in the aes instruction set generate a round key for an aes round dependent on the size of the key, that is, 128-bits, 192-bits or 256-bits. One of the aes round key instructions generates a round key for use in an encryption operation and the other aes round key instruction generates a round key for use in a decryption operation. The immediate field in the AESNextRoundKey and the AESPreviousRoundKey instructions specifies the size of the key {128, 192, 256}.

In yet another embodiment, instead of an immediate field, the different key sizes may be implemented as separate instructions, each having a unique operation code. In this embodiment, the set of aes round key instructions includes three separate instructions for each round key operation, for example, AESNextRoundKey_128, AESNextRoundKey_192 and AESNextRoundKey_256, and there would be a similar set of three instructions for AESPreviousRoundKey. In this embodiment, the total number of instructions in the instruction set is 10 instead of the 6 in the previously discussed embodiment.

The register file 304 has a plurality of 128-bit registers which may be used by the aes instructions in the aes instruction set. The 128-bit registers may store source operand(s), round keys and the result of an aes instruction. For the first round, the aes instruction receives a source operand that may be 128 bits of plaintext to be encrypted or 128 bits of cipher text to be decrypted. A key for generating a key schedule for a 128-bit, 192-bit or 256-bit key may be stored in any of the 128-bit registers 308 in the register file 304.
The round keys may also be stored in any of the 128-bit registers 308 in the register file. All of the instructions use registers in the register file and may also take input directly from memory as discussed earlier.

An example of source code that uses an embodiment of the aes instruction set shown in Table 1 is shown in Table 2 below. In the example, performance is optimized in an application for performing encryption that uses the same key for many blocks. One such application is the use of a single key for encrypting the contents of a disk, in which the same key is used for encrypting all of the data prior to being stored on the disk. In the example, AES-128 encryption is performed.

The size of the key may be 128-bits, 192-bits or 256-bits. The number of rounds to be performed (n) may be 1, 10, 12 or 14 dependent on the size of the key, with each round key being a fixed size (128-bits). With a number of rounds value of 10, 12 or 14, the aes micro operations may perform standard aes encryption and decryption for key sizes of 128-bits, 192-bits or 256-bits.

When the same key is used for many blocks, the round key for each round (the key schedule) may be pre-computed and stored in memory (for example, level 1 data cache 204) so that the same key schedule does not have to be recomputed prior to an encryption/decryption operation on each block.

Table 2

RK[0] = Input Key
For i = 1..10
    RK[i] = AESNextRoundKey (RK[i-1])
End
STATE = Input Block
STATE = STATE xor RK[0]
For i = 1..9
    STATE = AESENCRYPTRound (STATE, RK[i])
End
STATE = AESENCRYPTLastRound (STATE, RK[10])

An array (RK) having 11 elements is used to store the key schedule for the key. The input key for AES-128 encryption is stored in RK[0] and the 10 round keys RK[1] through RK[10] are pre-computed through calls to the AESNextRoundKey instruction from the aes instruction set. The AESNextRoundKey instruction computes the next round key based on the current round key.
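What an AESNextRoundKey-style step computes for AES-128 can be sketched in software, following the FIPS-197 key expansion (RotWord, SubWord, Rcon, then a chain of XORs). This is an illustration of the math, not the instruction's microcode; the S-box is built from its GF(2^8) definition so the sketch is self-contained.

```python
# Software sketch of AES-128 round key derivation (FIPS-197 key expansion).
def gf_mul(a, b):
    # Multiply in GF(2^8) with reduction polynomial x^8 + x^4 + x^3 + x + 1.
    p = 0
    while b:
        if b & 1:
            p ^= a
        a = (a << 1) ^ (0x11B if a & 0x80 else 0)
        b >>= 1
    return p & 0xFF

def sbox(x):
    # S-box entry: multiplicative inverse followed by the affine transform.
    inv = next((b for b in range(256) if gf_mul(x, b) == 1), 0)
    rot = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
    return inv ^ rot(inv, 1) ^ rot(inv, 2) ^ rot(inv, 3) ^ rot(inv, 4) ^ 0x63

def next_round_key_128(rk, rcon):
    """Derive round key i+1 (16 bytes) from round key i and the round constant."""
    w = [rk[i:i + 4] for i in range(0, 16, 4)]
    t = w[3][1:] + w[3][:1]                 # RotWord
    t = bytes(sbox(b) for b in t)           # SubWord
    t = bytes([t[0] ^ rcon]) + t[1:]        # XOR with Rcon
    out, prev = [], t
    for word in w:                          # w[i+4] = w[i] xor w[i+3]'
        prev = bytes(a ^ b for a, b in zip(word, prev))
        out.append(prev)
    return b"".join(out)
```

Calling this ten times with the round constants 0x01, 0x02, 0x04, ... reproduces the RK[1]..RK[10] loop of Table 2.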
The pre-computed round keys for the key schedule may be stored in round key 214 in level 1 data cache 204.

In this example, as the portion of the key schedule (expanded key), that is, the round key for the round, is input directly from the register file 304, an exclusive OR (XOR) operation is performed on the state and key prior to entering the loop for performing the aes rounds. For each round 1 through 9, the AESENCRYPTRound instruction from the aes instruction set is called to perform the aes round operation for one round. For the last round (round 10) the AESENCRYPTLastRound instruction from the aes instruction set is called to perform the aes round operation for the last round.

Information to be encrypted or decrypted by the aes instruction is loaded into a source/destination register 306 in the register file 304 prior to issuing the first aes instruction to start an encrypt or decrypt operation. The key to be used to encrypt/decrypt the information in the source register 306 is stored in one or more other registers 308 in the register file 304. In the case of a 128-bit key, the entire 128 bits of the key are stored in any one of the other 128-bit registers in the register file 304. For key sizes greater than 128 bits, the most significant bits (those above 128 bits) are stored in another one of the 128-bit registers.

In the example shown in Table 2, the round key for each round is pre-computed based on the key and may be stored in level 1 data cache 204 prior to being loaded into any one of the registers 308 in the register file 304. The key for each round may also be stored in one or more registers in the register file 304 or may be stored in round key 214 in level 1 data cache 204.

AES has a fixed block size of 128 bits and a key size of 128, 192 or 256 bits and operates on a 4×4 array of bytes (that is, 16 bytes (the 128-bit fixed block size)), which is referred to as the 'state'.
The AES algorithm transforms a 128-bit plaintext block into a 128-bit block of cipher text (encrypts) or a 128-bit block of cipher text into a 128-bit block of plaintext (decrypts) in 10, 12, or 14 consecutive rounds, with the number of rounds dependent on the key size (128, 192 or 256-bits).

Prior to performing the per round encryption or decryption operation, the execution unit 210 retrieves the state and the key, which are stored in the register file 304. Each encryption/decryption round operation is performed using the micro operations for the aes instruction stored in the key scheduler 302 in the Read Only Memory (ROM) 214. In the embodiment shown, the state (128-bit block state) is stored in register 306 and the key is stored in one or more of the other registers 308 in the register file 304. After the execution of the aes instruction is complete, the resulting state is stored in register 306 in the register file 304. The state may be intermediate round data to be used by a next aes round or the final result of the AES encryption or decryption operation.

In the embodiment shown, a key scheduler 302 generates the round key to be used in an aes round. The key scheduler 302 may be implemented as microcode operations and may include microcode operations to perform the sequence of operations for generating round keys for 128-bit, 192-bit and 256-bit keys as defined by FIPS Publication 197.

In another embodiment, the key scheduler may be implemented as a hardware state machine sequence in the execution unit 210. In yet another embodiment, some portion of the key scheduler may be implemented as microcode operations stored in the microcode ROM 214 and the remainder of the key scheduler may be implemented as a hardware state machine sequence in the execution unit 210.

The key scheduler 302 expands the n bytes of a key into b bytes of an expanded key (the key schedule), with the first n bytes of the expanded key being the original key.
For example, a 128-bit key is expanded into a 176-byte expanded key, that is, 11 x 16 bytes, with the first 16 bytes being the original 128-bit key; thus the number of rounds is 10. The 24 bytes of a 192-bit key are expanded into 208 bytes (13 x 16 bytes) to provide 12 "round keys", one for each of the 12 rounds, and the 32 bytes of a 256-bit key are expanded into 240 bytes (15 x 16 bytes) to provide 14 "round keys", one for each of the 14 rounds.

Upon decoding the operation code (opcode) in an aes instruction, a number of parameters to be used to control the flow in the aes instruction for one aes round are stored in control logic 322. The parameters include the type of operation (encryption or decryption) and whether it is a last round.

Aes round logic 324 may include micro operations for the following stages: block state 314, S-box/inverse S-box 316, shift rows 318, mix columns, inverse mix columns or null (referred to as "mix columns") 320, and add round key 326.

In block state 314, the 128-bit input (state) to the aes round logic 324 is added with a key (the 128-bit portion of the expanded key associated with the round) using bitwise XOR to produce a 128-bit intermediate value (state).

In the S-box/inverse S-box 316, each byte of this 128-bit intermediate value is substituted with another byte value that may be stored in and retrieved from a lookup table, also referred to as a substitution box or "S-box". The S-box takes some number of input bits, m, and transforms them into some number of output bits, n, and is typically implemented as a lookup table. A fixed lookup table is typically used. This operation provides non-linearity through the use of the inverse function over the Galois Field GF(2^8).
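The fixed S-box referenced above is fully determined by this construction: each entry is the multiplicative inverse in GF(2^8) followed by the affine transform of FIPS 197. A sketch that derives entries rather than storing the 256-byte table (brute-force inverse, fine for illustration; hardware uses the fixed lookup table):

```python
def xtime(b: int) -> int:
    """Multiply a GF(2^8) element by x, reducing by the AES polynomial x^8+x^4+x^3+x+1."""
    b <<= 1
    return (b ^ 0x11B) & 0xFF if b & 0x100 else b

def gf_mul(a: int, b: int) -> int:
    """Shift-and-add multiplication in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def gf_inv(a: int) -> int:
    """Multiplicative inverse in GF(2^8); 0 maps to 0 by convention."""
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def sbox(a: int) -> int:
    """Forward S-box entry: GF(2^8) inverse, then the FIPS 197 affine transform."""
    inv = gf_inv(a)
    rot = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
    return inv ^ rot(inv, 1) ^ rot(inv, 2) ^ rot(inv, 3) ^ rot(inv, 4) ^ 0x63
```

As a sanity check, sbox(0x53) reproduces the FIPS 197 SubBytes example value 0xED.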
For example, the n-bit output may be found by selecting a row in the lookup table using the outer two bits of the m-bit input, and selecting a column using the inner bits of the m-bit input.

In Shift Rows 318, the result from S-box/inverse S-box 316 passes through a bit-linear transform in which the bytes in each row of the 4 x 4 array (the 128-bit (16-byte) state) received from the Sub Bytes stage are shifted cyclically to the left. The number of places each byte is shifted differs for each row in the 4 x 4 array.

In Mix Columns 320, the result from Shift Rows 318 passes through a bit-linear transform in which each column of the 4 x 4 array (state) is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial c(x) = 3x^3 + x^2 + x + 2. A last aes round differs from the other aes rounds in that it omits Mix Columns 320.

Add Round Key 326, after the Mix Columns stage 320, performs an exclusive OR function on the round key from the expanded key and the result of Shift Rows 318 or Mix Columns 320 for the aes round.

For example, the following aes instruction may be issued to perform one round of aes decryption:

AESDECRYPTRound xmmsrcdst xmm

This example performs a 128-bit AES decrypt round operation with a key whose expanded key is represented as {RK[1], RK[2], ... RK[10]}. The round key may be generated by issuing a AESPreviousRoundKey xmmsrc1,2 xmmdst (immediate) instruction prior to issuing the AESDECRYPTRound instruction.
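The two transforms just described can be made concrete: row i of the 4 x 4 state is rotated left by i byte positions, and the column multiplication modulo x^4 + 1 by c(x) = 3x^3 + x^2 + x + 2 reduces, per output byte, to GF(2^8) products with the coefficients 2, 3, 1, 1. A minimal sketch (illustrative Python, not the patent's microcode):

```python
def shift_rows(state):
    """AES Shift Rows: rotate row i of a 4x4 byte array left by i positions."""
    return [row[i:] + row[:i] for i, row in enumerate(state)]

def xtime(b):
    """Multiply a GF(2^8) element by x, reducing by the AES polynomial 0x11B."""
    b <<= 1
    return (b ^ 0x11B) & 0xFF if b & 0x100 else b

def gf_mul(a, b):
    """Shift-and-add multiplication in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def mix_column(col):
    """AES Mix Columns on one 4-byte column: coefficients (2, 3, 1, 1), rotated per row."""
    a0, a1, a2, a3 = col
    return [
        gf_mul(a0, 2) ^ gf_mul(a1, 3) ^ a2 ^ a3,
        a0 ^ gf_mul(a1, 2) ^ gf_mul(a2, 3) ^ a3,
        a0 ^ a1 ^ gf_mul(a2, 2) ^ gf_mul(a3, 3),
        gf_mul(a0, 3) ^ a1 ^ a2 ^ gf_mul(a3, 2),
    ]
```

Because the coefficients 2, 3, 1, 1 XOR to 1 in GF(2^8), a column of identical bytes is a fixed point of mix_column; the well-known test column d4 bf 5d 30 maps to 04 66 81 e5.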
The round key may be loaded directly into the block state 314 from level 1 data cache 204, or may first be stored in a register (xmm) in the register file 304 and then loaded into the block state 314 from the register.

When a different key is used to encrypt/decrypt each block, for example in the case of a network interface controller (NIC) that is encrypting/decrypting data packets, the round key may be computed on-the-fly prior to performing encryption/decryption for each round, as shown in the pseudo code below in Table 3 for AES-128 encryption:

Table 3
RK[0] = Input Key
STATE = Input Block
STATE = STATE xor RK[0]
For i = 1..9
    RK[i] = AESNextRoundKey (RK[i-1])
    STATE = AESENCRYPTRound (STATE, RK[i])
End
RK[10] = AESNextRoundKey (RK[9])
STATE = AESENCRYPTLastRound (STATE, RK[10])

In this example, the round key for the round is generated prior to performing encryption using the round key, for each of the 10 rounds in the key schedule (expanded key), that is, rounds 1-9 and round 10 (the last round).

The set of aes instructions that includes single aes round instructions and single aes round key generation instructions allows variants of AES with different numbers of rounds and key schedules, that is, variants of AES not defined by FIPS Publication 197. Thus, the single round aes instructions in the aes instruction set provide flexibility in performing aes encryption and decryption. As the number of rounds performed by the aes instruction set is not fixed, any number of rounds, if required, may be performed. For example, the number of rounds may be varied to support future encryption/decryption standards, for example if new standards for hashing or MAC-ing are introduced, or if attacks on AES are found.

Fig. 4 is a flow graph illustrating the flow of an aes encrypt round instruction through the execution unit 210 shown in Fig. 3.

At block 400, the execution unit 210 waits for an aes encrypt round instruction.
If an AES encrypt round instruction has been decoded by the fetch and decode unit 206, processing continues with block 402. If not, processing remains in block 400 waiting for an aes encrypt round instruction.

At block 402, during the instruction decode by the fetch and decode unit 206, an indication that encryption is to be performed is stored in the control logic 322, and the round key and 128-bit block state (source) for use in performing the encryption round are loaded into the execution unit 210 from the register file 304. Processing continues with block 404.

At block 404, a substitution operation is performed on the 128-bit block state, that is, the result from block 406 or 418. Each byte of the 128-bit block state is substituted with another byte value that can be stored in and retrieved from a lookup table, also referred to as a substitution box or "S-box". The S-box takes some number of input bits, m, and transforms them into some number of output bits, n, and is typically implemented as a lookup table. The result is stored as a 128-bit block state. Processing continues with block 406.

At block 406, the 128-bit block state (4 x 4 array) passes through a bit-linear transform in which the bytes in each row of the 4 x 4 array are shifted cyclically to the left. The number of places each byte is shifted differs for each row in the 4 x 4 array. Processing continues with block 408.

At block 408, the 128-bit block state (4 x 4 array) passes through a bit-linear transform in which each column of the 4 x 4 array (state) is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial c(x) = 3x^3 + x^2 + x + 2. Processing continues with block 410.

At block 410, an exclusive OR function is performed on the round key from the expanded key and the result of Shift Rows 318 or Mix Columns 320 for the aes round.
Processing continues with block 412.

At block 412, the result of the encryption operation for the round (128-bit block state) is stored in the source/destination register 306 in the register file 304. Processing for the aes encrypt instruction is complete.

Table 4 below shows an example of the result of performing AES-128 encryption using a 128-bit key on a 128-bit block input, after execution of the pseudo code shown in Table 3.

Table 4
128-bit Input:  00112233445566778899aabbccddeeff (Hexadecimal)
128-bit Key:    000102030405060708090a0b0c0d0e0f (Hexadecimal)
128-bit Result: 69c4e0d86a7b0430d8cdb78070b4c55a (Hexadecimal)

Fig. 5 is a flow graph illustrating the flow of an aes encrypt last round instruction through the execution unit 210 shown in Fig. 3.

At block 500, the execution unit 210 waits for an aes encrypt last round instruction. If an AES encrypt last round instruction has been decoded by the fetch and decode unit 206, processing continues with block 502. If not, processing remains in block 500 waiting for an aes instruction.

At block 502, an S-box lookup is performed for the last round in a similar manner to the S-box lookup discussed in conjunction with block 404 (Fig. 4). Processing continues with block 504.

At block 504, a shift rows operation is performed for the last round in a similar manner to that discussed in conjunction with the other rounds in block 406 (Fig. 4). Processing continues with block 506.

At block 506, an exclusive OR function is performed on the round key from the expanded key and the result of Shift Rows 318 or Mix Columns 320 for the aes round. Processing continues with block 508.

At block 508, the result of the encrypt last round operation is stored in the source/destination register 306 in the register file 304. Processing for the aes instruction is complete.

Fig. 6 is a flow graph illustrating the flow of an aes decrypt round instruction through the execution unit 210 shown in Fig.
3.

At block 600, the execution unit 210 waits for an aes decrypt round instruction. If an AES decrypt round instruction has been decoded by the fetch and decode unit 206, processing continues with block 602. If not, processing remains in block 600 waiting for an aes decrypt round instruction.

At block 602, during the instruction decode by the fetch and decode unit 206, an indication that a decrypt round is to be performed is stored in the control logic 322, and the round key and source (128-bit block state) for use in performing the decrypt round are loaded into the execution unit 210 from the register file 304. Processing continues with block 604.

At block 604, the operation to be performed is decryption. A substitution operation is performed on the 128-bit block state by performing an inverse s-box lookup as defined by the AES standard. Processing continues with block 606.

At block 606, an inverse shift rows operation is performed as defined by FIPS publication 197. Processing continues with block 608.

At block 608, an inverse mix columns operation is performed as defined by FIPS publication 197. Processing continues with block 610.

At block 610, an exclusive OR function is performed on the round key from the expanded key and the result of Shift Rows 318 or Mix Columns 320 for the aes round. Processing continues with block 612.

At block 612, the result of the decryption operation for the round (128-bit block state) is stored in the source/destination register 306 in the register file 304. Processing for the aes decrypt round instruction is complete.

Fig. 7 is a flow graph illustrating the flow of an aes decrypt last round instruction through the execution unit 210 shown in Fig. 3.

At block 700, the execution unit 210 waits for an aes decrypt last round instruction. If an AES decrypt last round instruction has been decoded by the fetch and decode unit 206, processing continues with block 702.
If not, processing remains in block 700 waiting for an aes decrypt last round instruction.

At block 702, a substitution operation is performed on the 128-bit block state for the last round by performing an inverse s-box lookup as defined by FIPS publication 197. Processing continues with block 704.

At block 704, an inverse shift rows operation is performed for the last round as defined by FIPS publication 197. Processing continues with block 706.

At block 706, an exclusive OR function is performed on the round key from the expanded key and the result of Shift Rows 318 or Mix Columns 320 for the aes round. Processing continues with block 708.

At block 708, the result of the decrypt last round operation is stored in the source/destination register 306 in the register file 304. Processing for the aes decrypt last round instruction is complete.

In one embodiment, the blocks in the flow graphs of Figs. 4-7 may be implemented as a hardware state machine sequence in the execution unit 210. In another embodiment, portions of the blocks may be implemented as a micro-program that may be stored in Read Only Memory (ROM) 214. The embodiment in which the blocks are implemented as a hardware state machine sequence may provide higher performance.

Fig. 8 illustrates an embodiment of an aes round instruction with an immediate byte that may be used to generate round keys and perform encryption and decryption. Instead of the aes instruction set shown in Table 1, a single aes round instruction is provided to perform the functions of the aes instruction set. The particular function to be performed by the single aes instruction is encoded in bits in the immediate byte (key_select_modifier).
The immediate byte allows the aes round instruction to be expanded to add new features instead of creating a plurality of new instructions, each with a unique operation code. The aes round instruction may be defined symbolically as follows:

dest := aes_key_round (source2, source1), key_select_modifier

The aes_key_round instruction is issued to a particular execution unit 210 based on port number in order to perform an AES encrypt or decrypt operation. In the embodiment shown, port number 4 is the designated execution port for the AES round instruction. The execution unit 210 is divided into many parallel ports (super-scalar). However, not all ports are equal. Some ports have specialized resources, such as a large integer multiplier, or a floating-point multiplier or divider. Simpler and more common instructions such as addition, subtraction and exclusive OR are supported on multiple ports for maximum performance. Thus, for each instruction or micro-operation, issue control logic determines the port to which to issue the micro-operation/instruction. In this embodiment, the aes instruction is always issued to port number 4. However, in other embodiments other port numbers may be used.

Referring to Fig. 8, the dest stores 128 bits of expanded key for round N, source2 stores 128 bits of expanded key for round N-1, and source1 stores 128 bits of expanded key for round N-2. The key_select_modifier is an 8-bit immediate value used to provide the current round number (N), the direction of operation (encrypt/decrypt) and the AES key size. For AES-128, source1 is not needed and is ignored. The execution unit is the AES unit, and no flags (integer or floating point) are used.

In one embodiment, the bit encoding of the four least significant bits of the immediate value indicates the round number, for example, a round number from 1-10 for AES-128, a round number from 1-12 for AES-192 and a round number from 2-14 for AES-256.
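Taken together with the direction and key-size fields of the immediate byte (round number in bits 3:0, direction in bit 4, key size in bits 6:5), the encoding can be sketched as a small pack/unpack pair; the function names and the 0=encrypt convention are assumptions for illustration:

```python
KEY_SIZE_BITS = {128: 0b00, 192: 0b01, 256: 0b10}  # Table 5 encoding; 11 reserved

def encode_key_select_modifier(round_no: int, decrypt: bool, key_bits: int) -> int:
    """Pack round number (bits 3:0), direction (bit 4) and key size (bits 6:5)."""
    if not 0 <= round_no <= 15:
        raise ValueError("round number must fit in 4 bits")
    return round_no | (int(decrypt) << 4) | (KEY_SIZE_BITS[key_bits] << 5)

def decode_key_select_modifier(imm: int) -> tuple[int, bool, int]:
    """Unpack an immediate byte back into (round number, decrypt?, key size)."""
    sizes = {v: k for k, v in KEY_SIZE_BITS.items()}
    return imm & 0x0F, bool(imm & 0x10), sizes[(imm >> 5) & 0b11]
```

For example, round 10 of an AES-128 encryption packs to 0x0A, and 0x53 unpacks to round 3 of an AES-256 decryption.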
For AES-128 and AES-192, round number 0 is not valid because the first round uses the unmodified input key. For AES-256, round numbers 0 and 1 are not valid, as the unmodified 256-bit input key is used for the first two 128-bit rounds.

Bit 4 of the immediate byte indicates the direction of operation (encryption or decryption); for example, in one embodiment 0=encrypt and 1=decrypt, and in another embodiment 1=encrypt and 0=decrypt. Bits 5 and 6 of the immediate byte indicate the AES key size. In one embodiment the AES key size is defined as shown in Table 5 below:

Table 5
Bits [6:5]    Key Size
00            128
01            192
10            256
11            Reserved

In another embodiment, bits [6:5] having a value of 11 is also an indicator for a 128-bit key size. In this embodiment, all values of bits [6:5] are valid and may be parsed.

It will be apparent to those of ordinary skill in the art that methods involved in embodiments of the present invention may be embodied in a computer program product that includes a computer usable medium. For example, such a computer usable medium may consist of a read only memory device, such as a Compact Disk Read Only Memory (CD ROM) disk or conventional ROM devices, or a computer diskette, having a computer readable program code stored thereon.

While embodiments of the invention have been particularly shown and described with references to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of embodiments of the invention encompassed by the appended claims.

The following section of the description consists of numbered paragraphs simply providing statements of the invention already described herein. The numbered paragraphs in this section are not claims. The claims are set forth below in the later section headed "claims".

1.
An apparatus comprising: an execution unit to perform a sequence of operations for an aes instruction, the sequence of operations to perform a programmable number of aes rounds, the operations to cause the execution unit to: if the number of aes rounds is greater than 1: load a key into a temporary key register; and prior to performing each aes round, generate a round key for the aes round based on the key; and for each aes round, perform a sequence of aes round operations on an input to the aes round and the round key for the aes round to provide a next input to a next aes round or a result for the aes instruction.

2. The apparatus of clause 1, wherein if the number of aes rounds is equal to 1, prior to performing the sequence of aes round operations, the execution unit to: load a pre-computed round key for the aes round based on the key.

3. The apparatus of clause 2, wherein the sequence of aes round operations cause the execution unit to: perform an exclusive OR (XOR) operation on an input to the round and the round key for the aes round to produce an intermediate value; perform a substitution operation for each byte in the intermediate value based on values stored in a lookup table; and pass results of the substitution operation through a bit-linear transform that shifts rows in the intermediate value.

4. The apparatus of clause 1, wherein the sequence of aes round operations for the number of aes rounds - 1 cause the execution unit to: perform an exclusive OR (XOR) operation on the input to the aes round and the round key for the aes round to produce an intermediate value; perform a substitution operation for each byte in the intermediate value based on values stored in a lookup table; pass results of the substitution operation through a bit-linear transform that shifts rows in the intermediate value; and pass results of the substitution operation through a bit-linear transform that mixes columns in the intermediate value.

5.
The apparatus of clause 4, wherein the sequence of aes round operations for the last round cause the execution unit to: perform an exclusive OR (XOR) operation on an input to the round and the round key for the aes round to produce an intermediate value; perform a substitution operation for each byte in the intermediate value based on values stored in a lookup table; and pass results of the substitution operation through a bit-linear transform that shifts rows in the intermediate value.

6. The apparatus of clause 1, wherein the result is an encrypted value.

7. The apparatus of clause 1, wherein the result is a decrypted value.

8. The apparatus of clause 1, wherein the key and the input for a first aes round are stored in a register file.

9. The apparatus of clause 8, wherein the register file includes a plurality of 128-bit registers.

10. A method comprising: if the number of programmable aes rounds for an aes instruction is greater than 1, loading a key into a temporary key register and, prior to performing each aes round, generating a round key for the aes round based on the key; and for each aes round, performing a sequence of aes round operations on an input to the aes round and the round key for the aes round to provide a next input to a next aes round or a result for the aes instruction.

11. The method of clause 10, wherein if the number of aes rounds is equal to 1, prior to performing the sequence of aes round operations, loading a pre-computed round key for the aes round based on the key.

12. The method of clause 11, wherein performing the sequence of aes round operations comprises: performing an exclusive OR (XOR) operation on an input to the round and the round key for the aes round to produce an intermediate value; performing a substitution operation for each byte in the intermediate value based on values stored in a lookup table; and passing results of the substitution operation through a bit-linear transform that shifts rows in the intermediate value.

13.
The method of clause 10, wherein performing the sequence of aes round operations for the number of rounds - 1 comprises: performing an exclusive OR (XOR) operation on the input to the aes round and the round key for the aes round to produce an intermediate value; performing a substitution operation for each byte in the intermediate value based on values stored in a lookup table; passing results of the substitution operation through a bit-linear transform that shifts rows in the intermediate value; and passing results of the substitution operation through a bit-linear transform that mixes columns in the intermediate value.

14. The method of clause 13, wherein performing the sequence of aes round operations for a last aes round comprises: performing an exclusive OR (XOR) operation on an input to the round and the round key for the aes round to produce an intermediate value; performing a substitution operation for each byte in the intermediate value based on values stored in a lookup table; and passing results of the substitution operation through a bit-linear transform that shifts rows in the intermediate value.

15. The method of clause 10, wherein the result is an encrypted value.

16. The method of clause 10, wherein the result is a decrypted value.

17. The method of clause 10, wherein the key and the input for a first aes round are stored in a register file.

18. The method of clause 17, wherein the register file includes a plurality of 128-bit registers.

19.
An article including a machine-accessible medium having associated information, wherein the information, when accessed, results in a machine performing: if the number of programmable aes rounds for an aes instruction is greater than 1, loading a key into a temporary key register and, prior to performing each aes round, generating a round key for the aes round based on the key; and for each aes round, performing a sequence of aes round operations on an input to the aes round and the round key for the aes round to provide a next input to a next aes round or a result for the aes instruction.

20. The article of clause 19, wherein if the number of aes rounds is equal to 1, prior to performing the sequence of aes round operations, loading a pre-computed round key for the aes round based on the key.

21. A system comprising: a dynamic random access memory to store data and instructions; and a processor coupled to said memory to execute the instructions, the processor comprising: an execution unit to perform a sequence of operations for an aes instruction, the sequence of operations to perform a programmable number of aes rounds, the operations to cause the execution unit to: if the number of aes rounds is greater than 1: load a key into a temporary key register; and prior to performing each aes round, generate a round key for the aes round based on the key; and for each aes round, perform a sequence of aes round operations on an input to the aes round and the round key for the aes round to provide a next input to a next aes round or a result for the aes instruction.

22. The system of clause 21, wherein if the number of aes rounds is equal to 1, prior to performing the sequence of aes round operations, the execution unit to: load a pre-computed round key for the aes round based on the key.
A semiconductor-on-insulator (SOI) device (10). The SOI device includes a semiconductor substrate layer (18); an insulator layer (16) disposed on the substrate layer; a semiconductor active region (19) disposed on the insulator layer, the active region including a source (20), a drain (22), and a body (24) disposed therebetween, at least one of the source and the drain forming a hyperabrupt junction (40, 42) with the body; and a gate (46) disposed on the body such that the gate, source, drain and body are operatively arranged to form a transistor. The at least one of the source and drain forming the hyperabrupt junction with the body includes a silicide region (54, 56). The silicide region has a generally vertical interface (70, 74), which is laterally spaced apart from the hyperabrupt junction by about 60 Å to about 150 Å.
CLAIMS

What is claimed is:

1. A semiconductor-on-insulator (SOI) device (10) comprising: a semiconductor substrate layer (18); an insulator layer (16) disposed on the substrate layer; a semiconductor active region (19) disposed on the insulator layer, the active region including a source (20), a drain (22), and a body (24) disposed therebetween, at least one of the source and the drain forming a hyperabrupt junction (40, 42) with the body; and a gate (46) disposed on the body such that the gate, source, drain and body are operatively arranged to form a transistor; wherein the at least one of the source and drain forming the hyperabrupt junction with the body includes a silicide region (54, 56), the silicide region having a generally vertical interface, the generally vertical interface (70, 74) being laterally spaced apart from the hyperabrupt junction by about 60 Å to about 150 Å.

2. An SOI device as set forth in claim 1, wherein the vertical interface is laterally spaced apart from the hyperabrupt junction by a distance which is less than 100 Å.

3. An SOI device as set forth in claim 1, wherein the generally vertical interface extends adjacent the hyperabrupt junction along a distance of about 70 Å to about 130 Å.

4. An SOI device as set forth in claim 1, wherein the other of the at least one of the source and the drain forms a hyperabrupt junction with the body and has a silicide region having a generally vertical interface being laterally spaced apart from the respective hyperabrupt junction by about 60 Å to about 150 Å.

5. An SOI device as set forth in claim 4, wherein the source silicide region and drain silicide region are substantially symmetric with one another about the gate.

6. An SOI device as set forth in claim 4, wherein the generally vertical interfaces of each of the silicide regions extend adjacent the respective hyperabrupt junctions along a distance of about 70 Å to about 130 Å.

7.
A semiconductor-on-insulator (SOI) device (10) comprising: a semiconductor substrate layer (18); an insulator layer (16) disposed on the substrate layer; a semiconductor active region (19) disposed on the insulator layer, the active region including a source (20), a drain (22), and a body (24) disposed therebetween, the source and the drain forming respective hyperabrupt junctions (40, 42) with the body; and a gate (46) disposed on the body such that the gate, source, drain and body are operatively arranged to form a transistor; wherein the source and the drain each include a silicide region (54, 56), the silicide regions being spaced from the respective hyperabrupt junctions by a lateral distance of less than about 100 Å.

8. An SOI device as set forth in claim 7, wherein the silicide regions each have a generally vertical interface (70, 74), the generally vertical interfaces extending adjacent the respective hyperabrupt junctions along a distance of about 70 Å to about 130 Å.
SOI MOSFET WITH HYPERABRUPT SOURCE AND DRAIN JUNCTIONS

TECHNICAL FIELD

The invention relates generally to semiconductor-on-insulator (SOI) devices and methods for forming the same and, more particularly, to controlling floating body effects and contact resistance within an SOI device.

BACKGROUND ART

Traditional semiconductor-on-insulator (SOI) integrated circuits typically have a silicon substrate having a buried oxide (BOX) layer disposed thereon. A semiconductor active layer, typically made from silicon, is disposed on the BOX layer. Within the active layer, active devices, such as transistors, are formed in active regions. The size and placement of the active regions are defined by isolation regions. As a result of this arrangement, the active devices are isolated from the substrate by the BOX layer. More specifically, a body region of each SOI transistor does not have body contacts and is therefore "floating."

SOI chips offer potential advantages over bulk chips for the fabrication of high performance integrated circuits for digital circuitry. Such digital circuitry is typically made from partially-depleted metal oxide semiconductor field effect transistors (MOSFETs). In such circuits, dielectric isolation and reduction of parasitic capacitance improve circuit performance, and virtually eliminate latch-up in CMOS circuits. In addition, circuit layout in SOI can be greatly simplified and the packing density greatly increased. However, devices formed from SOI materials typically exhibit parasitic effects due to the presence of the floating body (i.e., "floating body effects"). These floating body effects may result in undesirable performance in SOI devices. Therefore, it will be appreciated that a need exists for SOI MOSFETs having reduced floating body effects.
DISCLOSURE OF THE INVENTION

According to one aspect of the invention, the invention is a semiconductor-on-insulator (SOI) device. The SOI device includes a semiconductor substrate layer; an insulator layer disposed on the substrate layer; a semiconductor active region disposed on the insulator layer, the active region including a source, a drain, and a body disposed therebetween, at least one of the source and the drain forming a hyperabrupt junction with the body region; and a gate disposed on the body such that the gate, source, drain and body are operatively arranged to form a transistor. The at least one of the source and drain forming the hyperabrupt junction with the body includes a silicide region. The silicide region has a generally vertical interface, which is laterally spaced apart from the hyperabrupt junction by about 60 Å to about 150 Å.

According to another aspect of the invention, the invention is a semiconductor-on-insulator (SOI) device. The SOI device includes a semiconductor substrate layer; an insulator layer disposed on the substrate layer; a semiconductor active region disposed on the insulator layer, the active region including a source, a drain, and a body disposed therebetween, the source and the drain forming respective hyperabrupt junctions with the body; and a gate disposed on the body such that the gate, source, drain and body are operatively arranged to form a transistor. The source and the drain each include a silicide region, the silicide regions being spaced from the respective hyperabrupt junctions by a lateral distance of less than about 100 Å.

BRIEF DESCRIPTION OF THE DRAWINGS

These and further features of the present invention will be apparent with reference to the following description and drawings, wherein:

FIG. 1 is a cross-sectional view of a semiconductor-on-insulator (SOI) device in accordance with the present invention;

FIG. 1A is an enlarged, partial view of the SOI device of FIG. 1; and

FIG.
2 is a flow chart of a method of making the SOI device of FIG. 1; and

FIGS. 3-9 are cross-sectional views of the SOI device in various stages of fabrication.

DISCLOSURE OF THE INVENTION

In the detailed description which follows, identical components have been given the same reference numerals, regardless of whether they are shown in different embodiments of the present invention. To illustrate the present invention in a clear and concise manner, the drawings may not necessarily be to scale and certain features may be shown in somewhat schematic form.

Referring initially to FIG. 1, a semiconductor-on-insulator (SOI) device 10 according to the present invention is shown. In the illustrated embodiment, the SOI device is a transistor and, more specifically, is a partially depleted metal oxide semiconductor field effect transistor (MOSFET). The device 10 is fabricated in conjunction with an SOI wafer 12. The SOI wafer includes an active layer 14 (also referred to as a semiconductor layer 14), a buried insulator layer 16 (also referred to herein as a buried oxide (BOX) layer 16), and a semiconductor substrate 18. In one embodiment, the wafer 12 has a silicon semiconductor layer 14, a silicon substrate 18, and a silicon dioxide buried insulator layer 16.

Within the semiconductor layer 14, isolation regions 17 define the size and placement of an active region 19, the active region 19 having a source region (or source 20), a drain region (or drain 22) and a body region (or body 24) disposed therebetween. The source 20 and the drain 22 are doped as described in more detail below, such that the source 20 and the drain 22 are doped to form N-type regions or P-type regions as desired. The body 24 is doped to have opposite doping to the source 20 and the drain 22. Alternatively, the body 24 can be undoped. The source 20 and the drain 22 each include extensions 43 (FIG. 1A) extending under sidewall spacers 44, the sidewall spacers 44 being disposed adjacent a gate stack (or gate 46).
The gate 46 is disposed on top of the body 24. The gate 46 includes a gate dielectric 50 and a gate electrode 48 disposed thereon as is known in the art. The gate dielectric 50 may be formed from conventional materials, such as silicon dioxide, silicon oxynitride, or silicon nitride (Si3N4), and the gate electrode 48 can be formed from a conductive material, such as polysilicon.

The source 20 and the drain 22 also include deep implants as described below in more detail. The deep implants are doped so that a source/body hyperabrupt junction 40 is formed and a drain/body hyperabrupt junction 42 is formed. In addition, the junctions 40 and 42 are physically steep and are formed to be as vertical as possible. Therefore, the hyperabrupt junctions 40 and 42 generally extend at least from the lower edge of the extensions 43 (i.e., at the "corner" where the deep implant intersects with the extensions 43) towards the BOX layer 16. The depth of the hyperabrupt junctions 40 and 42 is defined by the depth to which the source 20 and the drain 22 are amorphized during an amorphization step carried out prior to dopant implantation. Below the amorphization depth, the doping concentration of the deep implants falls off, reducing the degree of abruptness of the source/body junction and the drain/body junction below the amorphization depth.

The device 10 also includes a source silicide region 54, a drain silicide region 56 and a gate silicide region 55. In the illustrated embodiment, the source and drain silicide regions 54 and 56 are substantially symmetric about the gate 46, although it will be appreciated that the silicide regions 54 and 56 may be asymmetrical relative to the gate 46. The silicide regions 54 and 56 have upper surfaces 58 and 60, respectively, for external electrical connection using components such as contacts, vias and conductor lines.
The illustrated source silicide region 54 interfaces the non-silicided portion of the source 20 along a lateral interface 68 and a generally vertical interface 70. The interfaces 68 and 70 are generally smooth and are generally perpendicular to one another, although a corner radius may be present at the junction where the interfaces 68 and 70 meet and the interfaces 68 and 70 may be bowed, arced or otherwise non-linear. Similarly, the drain silicide region 56 has a lateral interface 72 and a vertical interface 74, which are generally smooth and perpendicular to one another, although a corner radius may be present at the junction where the interfaces 72 and 74 meet and the interfaces 72 and 74 may be bowed, arced or otherwise non-linear.

As shown in FIG. 1A, the interface 70 is laterally spaced from the hyperabrupt junction 40 as indicated by reference number 80. The lateral distance 80 is about 60 A to about 150 A. In another embodiment, the lateral distance is about 90 A to about 120 A, and in another embodiment, the lateral distance is less than about 100 A, but not contacting the hyperabrupt junction 40. With respect to the foregoing ranges, and all other ranges and ratios herein, the range and ratio limits can be combined. As indicated by reference number 82, the interface 70 extends in a generally vertical arrangement adjacent the hyperabrupt junction 40 along a distance of about 70 A to about 130 A. In one embodiment, the vertical distance 82 is about 1.0 to about 1.5 times the lateral distance 80, and in one embodiment, the vertical distance 82 is about 1.2 to about 1.3 times the lateral distance 80. Similarly, the drain silicide region 56 is formed with the same or similar spacing parameters.

According to the invention, the proximity of the silicide regions 54 and 56 to the respective source/body hyperabrupt junction 40 and drain/body hyperabrupt junction 42 enhances junction recombination and reduces floating body effects.
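The lateral-to-vertical spacing relationship above is simple arithmetic, and can be checked with a short sketch. The function and constant names below are invented for illustration; only the numeric ranges (in angstroms) come from the text.

```python
# Sanity check of the silicide-spacing geometry described above. All names
# are illustrative assumptions; only the numeric ranges come from the text.

LATERAL_MIN, LATERAL_MAX = 60.0, 150.0   # lateral distance 80: about 60-150 A
RATIO_MIN, RATIO_MAX = 1.0, 1.5          # vertical distance 82 / lateral distance 80

def vertical_range(lateral_a):
    """Vertical extent (in A) implied by the stated ratio for one lateral spacing."""
    if not (LATERAL_MIN <= lateral_a <= LATERAL_MAX):
        raise ValueError("lateral spacing outside the stated 60-150 A range")
    return (RATIO_MIN * lateral_a, RATIO_MAX * lateral_a)

# The narrower 1.2-1.3 ratio applied to a 100 A lateral spacing gives
# 120-130 A, inside the stated 70-130 A vertical range.
narrow_lo, narrow_hi = 1.2 * 100.0, 1.3 * 100.0
```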
In addition, the hyperabrupt source/body junction 40 and the hyperabrupt drain/body junction 42 allow for lower contact resistance. More particularly, the proximity of the silicide regions 54 and 56 to the hyperabrupt junctions 40 and 42 tends to make the device 10 more leaky. However, in the presence of these leaky diode junctions, the silicide may have a tendency to interact with lightly doped portions of the junction, increasing the tunneling barrier and, thus, increasing the contact resistance. In the present invention, the hyperabrupt nature of the junctions 40 and 42 allows for the placement of the silicide interfaces 70 and 74 to be in close proximity thereto (e.g., a distance of less than 100 A).

FIG. 2 is a flow chart of a method 100 for forming the device 10. In step 102 and as illustrated in FIG. 3, an SOI wafer 12 is provided. As mentioned, the SOI wafer 12 includes the substrate 18, the active, or semiconductor, layer 14 and the BOX layer 16 disposed therebetween. The semiconductor layer 14 may be suitably doped for the formation of a device with a body having P or N type doping. The wafer 12 may be formed using techniques known in the art, such as a wafer bonding technique or a separation by implanted oxygen (SIMOX) technique.

Thereafter, in step 104 and as illustrated in FIG. 4, isolation regions 17 are formed to define the active region 19. In step 106 and as illustrated in FIG. 4, the gate 46, including the gate dielectric 50 and the gate electrode 48, is formed using conventional techniques. For example, a layer of dielectric material (e.g., SiO2 or Si3N4) may be deposited on and/or grown on the semiconductor layer 14. Thereafter, a layer of conductive gate electrode material (e.g., polysilicon) may be deposited on the layer of dielectric material by using, for example, low pressure chemical vapor deposition (LPCVD).
The dielectric and electrode materials may be selectively removed, for example by well-known photolithography and selective etching methods, to form the gate 46 in a desired location. An example of a suitable etching method is reactive ion etching (RIE), using an appropriate etchant. It will be appreciated that a wide variety of other suitable gate structures as are known in the art may be formed in step 106. In addition, the gate 46 can be pre-doped and activated using known techniques. In step 108, a halo can be implanted as is well known in the art.

In step 110 and as illustrated in FIG. 5, respective source 20 and drain 22 extensions 43 are formed by implanting ions 112 using, for example, a lightly doped drain (LDD) technique. Exemplary ions 112 for extension 43 formation include phosphorus or arsenic to establish N-type doping and boron or antimony to establish P-type doping. An exemplary implantation energy range is about 5 keV to about 80 keV, and an exemplary dosage range is about 1 x 10^12 atoms/cm2 to about 5 x 10^15 atoms/cm2. It will be appreciated that the gate 46 acts as a self-aligned mask during extension 43 formation. Some dopant may diffuse under the gate 46 as is conventional. It will further be appreciated that, if desired, a separate doping mask or temporary spacer may be used in place of or in addition to the gate 46. Thereafter, in step 114, the halo (if formed) and the extensions 43 are activated with a thermal cycle, such as a rapid thermal anneal (RTA).

As an alternative, the extensions 43 can be formed using a solid phase epitaxy (SPE) process. More specifically, SPE is used to amorphize the semiconductor layer 14 with ion species, such as silicon or germanium. The energy and dosage of the ion species can be determined empirically for the device being fabricated. Next, dopant is implanted to achieve the desired N-type or P-type doping and then the semiconductor layer 14 is recrystallized using a low temperature anneal (i.
e., at a temperature of less than about 700 C).

Referring to FIG. 6, in step 116, the side wall spacers 44 are formed adjacent the gate 46. The spacers 44 are formed using conventional techniques and are made from a material such as silicon oxide (SiO2) or a nitride (e.g., Si3N4).

In step 118 and as illustrated in FIG. 7, source 20 and drain 22 deep implant regions are formed, thereby forming the source 20 and the drain 22 from the respective deep implant regions and the extensions 43. In one embodiment, the deep implants are formed using an SPE process. More specifically, SPE is used to amorphize the semiconductor layer 14 with ion species, such as silicon or germanium. The energy and dosage of the ion species can be determined empirically for the device being fabricated. In one embodiment, silicon ions are used to amorphize the semiconductor layer 14; an exemplary energy range is about 5 keV to about 100 keV and an exemplary dosage range is about 1 x 10^15 atoms/cm2 to about 1 x 10^16 atoms/cm2. Next, dopant is implanted with ions 119 to achieve the desired N-type or P-type doping and then the semiconductor layer 14 is recrystallized using a low temperature anneal (i.e., at a temperature of less than about 700 C). The semiconductor layer 14 is amorphized to a desired depth, wherein the depth defines the depth of the hyperabrupt junctions formed along the diode interfaces between the source 20 and the body 24 and between the drain 22 and the body 24, respectively. The gate 46 and the spacers 44 act as a self-aligned mask during implantation of the ions 119; however, some diffusion of the implanted ions 119 under the spacers 44 will generally occur as is known in the art. Exemplary ions 119 include phosphorus or arsenic to establish N-type doping and boron or antimony to establish P-type doping. An exemplary energy range for the deep implantation is about 5 keV to about 50 keV, depending on the dopant species.
An exemplary dosage range for the deep implantation is about 1 x 10^15 atoms/cm2 to about 1 x 10^16 atoms/cm2. Following step 118, an exemplary range of concentrations of the dopants in the source 20 and the drain 22 at or near the hyperabrupt junctions 40 and 42 is about 1 x 10^20 atoms/cm3 or greater. An exemplary range of concentrations of the dopants in the body 24 at or near the hyperabrupt junctions 40 and 42 is about 1 x 10^18 atoms/cm3 to about 1 x 10^19 atoms/cm3.

In step 120 and as illustrated in FIG. 8, silicide formation is initiated by depositing a layer of metal 122 upon the gate 46, the spacers 44, and the exposed portions of the semiconductor layer 14 in at least the area of the active region 19. The metal layer 122 is formed from a suitable metal, such as titanium, cobalt, or nickel. The metal layer 122 may be deposited, for example, by sputtering. Silicide is formed by reacting the metal layer 122 with the portions of the source 20, the drain 22 and the gate electrode 48 that are in contact with the metal layer 122 using one of a number of silicidation or salicidation processes, thereby forming the silicide regions 54, 56 and 55 discussed above. An exemplary method includes annealing by raising the temperature of the semiconductor device 10 being formed to a suitable level (e.g., about 500 C to about 700 C) for a suitable length of time (e.g., about 10 seconds to about 10 minutes). Rapid thermal annealing (RTA) may also be employed, for example at a temperature of about 600 C to about 900 C for about 5 seconds to about 120 seconds. It will be appreciated that other temperatures and heating times may be employed. As illustrated, the silicide regions 54 and 56 will tend to encroach underneath the spacers 44. In one embodiment, the silicide regions 54 and 56 will encroach under the spacers a lateral distance of about zero A to about 100 A.
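As a rough cross-check of the implant numbers above, spreading a deep-implant dose uniformly through the amorphized depth yields a mean concentration on the order of the stated 1 x 10^20 atoms/cm3 figure. The sketch below is a back-of-the-envelope illustration only; the 500 A example depth is an assumption, not a value from the text.

```python
# Back-of-the-envelope check: average dopant concentration if a deep-implant
# dose (atoms/cm^2) is distributed uniformly through the amorphized depth.
# The 500 A example depth is an assumption for illustration only.

ANGSTROM_TO_CM = 1e-8

def mean_concentration(dose_per_cm2, depth_angstrom):
    """Mean concentration (atoms/cm^3) for a dose spread over a given depth."""
    return dose_per_cm2 / (depth_angstrom * ANGSTROM_TO_CM)

# The low end of the stated dose range (1 x 10^15 atoms/cm^2) over 500 A
# averages to roughly 2 x 10^20 atoms/cm^3.
c = mean_concentration(1e15, 500.0)
```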
As mentioned, the vertical interfaces 70 and 74 and the lateral interfaces 68 and 72 of the respective silicide regions 54 and 56 are smooth. Various techniques to control the roughness of silicide formation are known in the art. For example, if titanium is used in the silicidation or salicidation process, a pre-amorphization implant (PAI) to form a layer of amorphous silicon on or in the source 20 and drain 22 can be carried out to control the silicide interface smoothness and to lower the interface sheet resistance. Excess metal of the metal layer 122 can be removed by conventional, well-known methods.

As discussed above, the proximity of the silicide regions 54 and 56 to the respective hyperabrupt junctions 40 and 42 enhances junction recombination, thereby reducing floating body effects. In addition, the hyperabrupt junctions 40 and 42 lower contact resistance within the device 10. As a result, overall operational performance of the device is improved.

Although particular embodiments of the invention have been described in detail, it is understood that the invention is not limited correspondingly in scope, but includes all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
Systems, methods and computer program products download application packages to a mobile device from a traditional application store. The application package may include two applications, a first application executed on the mobile device (e.g., tablet computer, smartphone etc. running an Android or iOS operating system) and a second application for execution on the router. When a user downloads and runs the first application on the mobile device, the first application determines if the router is present (e.g., determines if the mobile device is connected to the router) and if so, downloads the second application to the router. The second application may be the router application itself, or it may be an application that when executed on the router, downloads the router application to the router. Alternatively, the first application may issue a command that causes the router to download the router application.
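The three delivery paths described above (a bundled router application, a bundled installer that fetches it, or a command that makes the router download it) can be sketched as follows. This is an illustrative Python sketch; `StubRouter` and every function and key name are assumptions for illustration, not part of any real router API.

```python
# Illustrative sketch of the device-side delivery flow described above.
# StubRouter and all names here are assumptions; no real router API is implied.

class StubRouter:
    """Minimal stand-in for a connected router; records what it is asked to do."""
    def __init__(self):
        self.uploaded = []
        self.commands = []
    def is_connected(self):
        return True
    def upload(self, app_name):
        self.uploaded.append(app_name)
    def command(self, verb, **kwargs):
        self.commands.append((verb, kwargs))

def deliver_router_app(package, router):
    """Mirror the three options in the text: push a bundled router application,
    push a bundled installer that fetches it, or command the router to download
    it from a source URL. Returns True if a delivery was attempted."""
    if router is None or not router.is_connected():
        return False                     # no router present; nothing to do
    if "router_app" in package:
        router.upload(package["router_app"])
    elif "installer_app" in package:
        router.upload(package["installer_app"])
        router.command("run", app=package["installer_app"])
    else:
        router.command("download", source=package["app_source_url"])
    return True
```

For a package that bundles the router application directly, `deliver_router_app({"router_app": "parental-control"}, StubRouter())` simply pushes it to the router; a package with only a source URL instead results in a download command.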
WHAT IS CLAIMED IS:

1. A method for providing an application to a router, the method comprising: receiving, by a device, an application package from a first application source, the application package including a first application for execution on the device; and executing, by the device, the first application; wherein the first application causes a router application to be downloaded to the router.

2. The method of claim 1, wherein the first application is executable on a first operating system, and wherein the router application is executable on a second operating system different from the first operating system.

3. The method of claim 1, wherein the application package includes the router application, and further comprising downloading the router application to the router from the device.

4. The method of claim 1, wherein the application package includes a second application for execution on the router, and further comprising downloading the second application to the router from the device, wherein the second application, when executed on the router, downloads the router application to the router.

5. The method of claim 1, wherein the first application communicates location data specifying a source for the router application to the router, and wherein the router downloads the router application from the source.

6. The method of claim 1, further comprising: determining a router type for the router; and determining the router application based, at least in part, on the router type.

7. The method of claim 1, further comprising: associating the device with the router application; and limiting access to the router application to the device.

8. The method of claim 1, further comprising determining, by the first application, that the router application is not present on the router, wherein the first application causes the router application to be downloaded to the router in response to determining that the router application is not present on the router.

9.
The method of claim 1, further comprising configuring the router application by the first application.

10. A method for providing an application to a router, the method comprising: receiving a router application for execution on the router from a router application source, wherein reception of the router application is in response to a device receiving a first application from an application source and further in response to execution of the first application by the device; and receiving, from the first application, configuration data for the router application.

11. The method of claim 10, wherein the router application is included in an application package, and further comprising receiving the router application from the device.

12. The method of claim 10, further comprising receiving a second application for execution on the router, wherein the second application downloads the router application to the router.

13. The method of claim 10, further comprising: receiving, from the device, location data specifying a source for the router application; and downloading, by the router, the router application from the source.

14. The method of claim 10, further comprising: associating the device with the router application; and limiting access to the router application to the device.

15. The method of claim 10, further comprising receiving configuration data for the router application from the device.

16. A method for providing an application to a router, the method comprising: receiving, by an application store, a request for an application package from a device, the application package including a first application for execution on the device; and transmitting the application package to the device; wherein the first application is configured to cause a router application to be downloaded to the router.

17.
The method of claim 16, wherein the application package includes the router application, and wherein the first application is configured to download the router application to the router.

18. The method of claim 16, wherein the application package includes a second application for execution on the router, wherein the first application is configured to download the second application to the router, and wherein the second application downloads the router application to the router.

19. The method of claim 16, wherein the first application is configured to communicate location data specifying a source for the router application to the router.

20. The method of claim 16, wherein the first application is operable to present a user interface for providing configuration data for the router application.

21. An apparatus comprising: a processor; and a machine readable storage medium having machine usable program code embodied therewith, the machine usable program code executable by the processor to cause the apparatus to: receive an application package from a first application source, the application package including a first application for execution on the apparatus; and execute the first application, wherein the first application causes a router application to be downloaded to a router.

22. The apparatus of claim 21, wherein the first application is executable on a first operating system executed by the processor, and wherein the router application is executable on a second operating system different from the first operating system.

23. The apparatus of claim 21, wherein the application package includes the router application, and wherein the first application downloads the router application to the router.

24.
The apparatus of claim 21, wherein the application package includes a second application for execution on the router, wherein the first application downloads the second application to the router, and wherein the second application, when executed on the router, downloads the router application to the router.

25. The apparatus of claim 21, wherein the first application communicates location data specifying a source for the router application to the router, and wherein the router downloads the router application from the source.

26. The apparatus of claim 21, wherein the machine usable program code further includes machine usable program code to cause the apparatus to: determine a router type for the router; and determine the router application based, at least in part, on the router type.

27. The apparatus of claim 21, wherein the first application includes a user interface to configure the router application.

28. A router comprising: a processor; and a machine readable storage medium having machine usable program code embodied therewith, the machine usable program code executable by the processor to cause the router to: receive a router application from a router application source, wherein reception of the router application is in response to a device communicably coupled to the router receiving a first application from an application source and further in response to execution of the first application by the device; and receive, from the first application, configuration data for the router application.

29. The router of claim 28, wherein the router application is included in an application package, and wherein the machine usable program code further includes machine usable program code to cause the router to receive the router application from the device.

30.
The router of claim 28, wherein the machine usable program code further includes machine usable program code to cause the router to receive a second application, wherein the second application downloads the router application to the router.

31. The router of claim 28, wherein the machine usable program code further includes machine usable program code to cause the router to: receive, from the device, location data specifying a source for the router application; and download the router application from the source.

32. The router of claim 28, wherein the machine usable program code further includes machine usable program code to cause the router to: associate the device with the router application; and limit access to the router application to the device.

33. A machine-readable storage medium having stored thereon an application package, the application package including a device application and a router application, the device application comprising a first program product which, when executed by a first processor, causes the first processor to download the router application to a router.

34. The machine-readable storage medium of claim 33, wherein the device application is executable on a first operating system, and wherein the router application is executable on a second operating system different from the first operating system.

35. The machine-readable storage medium of claim 33, wherein the router application includes a second program product which, when executed by a second processor, causes the second processor to perform operations that comprise: associating a device executing the device application with the router application; and limiting access to the router application to the device.

36.
One or more machine-readable media having stored therein a program product, which, when executed by a processor, causes the processor to perform operations that comprise: receiving a router application for execution on a router from a router application source, wherein reception of the router application is in response to a device receiving a first application from an application source and further in response to execution of the first application by the device; and receiving, from the first application, configuration data for the router application.

37. The one or more machine-readable media of claim 36, wherein the router application is included in an application package, and wherein the operations further comprise receiving the router application from the device.

38. The one or more machine-readable media of claim 36, wherein the operations further comprise: receiving, from the device, location data specifying a source for the router application; and downloading, by the router, the router application from the source.

39. The one or more machine-readable media of claim 36, wherein the operations further comprise: associating the device with the router application; and restricting access to the router application to the device.

40. The one or more machine-readable media of claim 36, wherein the operations further comprise receiving configuration data for the router application from the device.
DISTRIBUTION MECHANISM FOR ROUTER APPLICATIONS

RELATED APPLICATIONS

[0001] This application claims the priority benefit of U.S. Application Serial No. 14/151,557, filed Jan. 9, 2014.

BACKGROUND

[0002] Embodiments of the inventive subject matter generally relate to the field of computers, and, more particularly, to distribution mechanisms for router applications.

[0003] As routers become more powerful, the ability to run third party networking applications is becoming more appealing. For example, applications such as parental control applications and virus scanning applications may be executed on a router. A router that is enhanced with such applications may be referred to as a "smart gateway" because it performs functions in addition to those traditionally performed by conventional routers. Applications for routers are typically obtained from the manufacturer or vendor of the router. For example, an application for a router may be obtained from an application store maintained by the router vendor. The proliferation of application stores can lead to confusion or other difficulties for a router owner who desires to obtain applications for their router.

SUMMARY

[0004] Various embodiments are disclosed for implementing distribution mechanisms for router applications. In one embodiment, an application package is received by a device from a first application source. The application package includes a device application for execution on the device.
Upon execution of the device application, the device application causes a router application to be downloaded to a router.

[0005] In some embodiments, a method for providing an application to a router comprises receiving, by a device, an application package from a first application source, where the application package includes a first application for execution on the device; and executing, by the device, the first application; wherein the first application causes a router application to be downloaded to the router.

[0006] In some embodiments, the first application is executable on a first operating system, and the router application is executable on a second operating system different from the first operating system.

[0007] In some embodiments, the application package includes the router application, and the method further comprises downloading the router application to the router from the device.

[0008] In some embodiments, the application package includes a second application for execution on the router, and the method further comprises downloading the second application to the router from the device, wherein the second application, when executed on the router, downloads the router application to the router.

[0009] In some embodiments, the first application communicates location data specifying a source for the router application to the router, and the router downloads the router application from the source.

[0010] In some embodiments, the method further comprises determining a router type for the router; and determining the router application based, at least in part, on the router type.

[0011] In some embodiments, the method further comprises associating the device with the router application; and limiting access to the router application to the device.

[0012] In some embodiments, the method further comprises determining, by the first application, that the router application is not present on the router.
The first application causes the router application to be downloaded to the router in response to determining that the router application is not present on the router.

[0013] In some embodiments, the method further comprises configuring the router application by the first application.

[0014] In some embodiments, a method for providing an application to a router comprises receiving a router application for execution on the router from a router application source, wherein reception of the router application is in response to a device receiving a first application from an application source and further in response to execution of the first application by the device; and receiving, from the first application, configuration data for the router application.

[0015] In some embodiments, the router application is included in an application package, and the method further comprises receiving the router application from the device.

[0016] In some embodiments, the method further comprises receiving a second application for execution on the router, wherein the second application downloads the router application to the router.

[0017] In some embodiments, the method further comprises receiving, from the device, location data specifying a source for the router application; and downloading, by the router, the router application from the source.

[0018] In some embodiments, the method further comprises associating the device with the router application; and limiting access to the router application to the device.

[0019] In some embodiments, the method further comprises receiving configuration data for the router application from the device.

[0020] In some embodiments, a method for providing an application to a router comprises receiving, by an application store, a request for an application package from a device, where the application package includes a first application for execution on the device; and transmitting the application package to the device; wherein the first application is
configured to cause a router application to be downloaded to the router.

[0021] In some embodiments, the application package includes the router application, and the first application is configured to download the router application to the router.

[0022] In some embodiments, the application package includes a second application for execution on the router, wherein the first application is configured to download the second application to the router, and wherein the second application downloads the router application to the router.

[0023] In some embodiments, the first application is configured to communicate location data specifying a source for the router application to the router.

[0024] In some embodiments, the first application is operable to present a user interface for providing configuration data for the router application.

[0025] In some embodiments, an apparatus comprises a processor, and a machine readable storage medium having machine usable program code embodied therewith, where the machine usable program code is executable by the processor to cause the apparatus to receive an application package from a first application source, where the application package includes a first application for execution on the apparatus; and execute the first application, wherein the first application causes a router application to be downloaded to a router.

[0026] In some embodiments, the first application is executable on a first operating system executed by the processor, and the router application is executable on a second operating system different from the first operating system.

[0027] In some embodiments, the application package includes the router application, and the first application downloads the router application to the router.

[0028] In some embodiments, the application package includes a second application for execution on the router, wherein the first application downloads the second application to the router, and wherein the second application, when
executed on the router, downloads the router application to the router.

[0029] In some embodiments, the first application communicates location data specifying a source for the router application to the router, and the router downloads the router application from the source.

[0030] In some embodiments, the machine usable program code further includes machine usable program code to cause the apparatus to determine a router type for the router; and determine the router application based, at least in part, on the router type.

[0031] In some embodiments, the first application includes a user interface to configure the router application.

[0032] In some embodiments, a router comprises a processor and a machine readable storage medium having machine usable program code embodied therewith, the machine usable program code executable by the processor to cause the router to receive a router application from a router application source, wherein reception of the router application is in response to a device communicably coupled to the router receiving a first application from an application source and further in response to execution of the first application by the device; and receive, from the first application, configuration data for the router application.

[0033] In some embodiments, the router application is included in an application package, and the machine usable program code further includes machine usable program code to cause the router to receive the router application from the device.

[0034] In some embodiments, the machine usable program code further includes machine usable program code to cause the router to receive a second application, wherein the second application downloads the router application to the router.

[0035] In some embodiments, the machine usable program code further includes machine usable program code to cause the router to receive, from the device, location data specifying a source for the router application; and download the router application from
the source.[0036] In some embodiments, the machine usable program code further includes machine usable program code to cause the router to associate the device with the router application; and limit access to the router application to the device.[0037] In some embodiments, a machine-readable storage medium has stored thereon an application package, the application package including a device application and a router application. The device application comprises a first program product which, when executed by a first processor, causes the first processor to download the router application to a router.[0038] In some embodiments, the device application is executable on a first operating system, and the router application is executable on a second operating system different from the first operating system. [0039] In some embodiments, the router application includes a second program product which, when executed by a second processor, causes the second processor to perform operations that comprise associating a device executing the device application with the router application; and limiting access to the router application to the device.[0040] In some embodiments, one or more machine-readable media has stored therein a program product, which, when executed by a processor, causes the processor to perform operations that comprise receiving a router application for execution on a router from a router application source, wherein reception of the router application is in response to a device receiving a first application from an application source and further in response to execution of the first application by the device; and receiving, from the first application, configuration data for the router application.[0041] In some embodiments, the router application is included in an application package, and the operations further comprise receiving the router application from the device.[0042] In some embodiments, the operations further comprise receiving, from the device, 
location data specifying a source for the router application; and downloading, by the router, the router application from the source.[0043] In some embodiments, the operations further comprise associating the device with the router application; and restricting access to the router application to the device.[0044] In some embodiments, the operations further comprise receiving configuration data for the router application from the device.BRIEF DESCRIPTION OF THE DRAWINGS[0045] The present embodiments may be better understood, and numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings.[0046] Figure 1 is a block diagram illustrating a system for providing an application to a router according to some embodiments. [0047] Figures 2A-2C are block diagrams illustrating application packages according to some embodiments.[0048] Figure 3 is a flow diagram illustrating a method for providing an application to a router according to some embodiments.[0049] Figures 4-6 are sequence diagrams illustrating sequences of operations for providing an application to a router according to some embodiments.[0050] Figure 7 is an example block diagram of one embodiment of an electronic device for implementing a router application distribution mechanism.DESCRIPTION OF EMBODIMENT(S)[0051] The description that follows includes example systems, methods, techniques, instruction sequences and computer program products that embody techniques of the present inventive subject matter. However, it is understood that the described embodiments may be practiced without these specific details. For instance, although examples refer to mobile devices receiving application packages from an application store, other types of devices such as desktop or server computers may also receive application packages from an application store. 
In other instances, well-known instruction instances, protocols, structures and techniques have not been shown in detail in order not to obfuscate the description.[0052] The embodiments include a distribution mechanism for router applications. An application package can be downloaded to a mobile device from a traditional application store. The application package may include two applications. A first application can be executed on the mobile device (e.g., a tablet computer, smartphone, etc. running an Android™ or iOS operating system) and a second application can be a router application for execution on the router, where the router may run an operating system that is different from that of the mobile device. When a user downloads and runs the first application on the mobile device, the first application determines if the router is present (e.g., determines if the mobile device is connected to the router) and, if so, downloads the second application to the router. The second application may be the router application itself, or it may be a small application that, when executed on the router, downloads the router application to the router. Alternatively, the first application may issue a command that causes the router to download the router application.[0053] Figure 1 is a block diagram illustrating a system 100 for providing an application to a router according to some embodiments. System 100 includes an application store 102, a router 112, and can include one or more of a mobile device 110, a remote device 118, or a router application source 116. Various network technologies may be used to communicably couple components of system 100. For example, router 112 may include either or both wired and wireless networking capabilities for communication over network 106 and with device 110. Examples of wireless networks include Wireless Local Area Network (WLAN), BLUETOOTH® (hereinafter "Bluetooth"), Worldwide Interoperability for Microwave Access (WiMAX), ZigBee®, etc.
Examples of wired networks include Ethernet and powerline communication networks. The embodiments described herein are not limited to any particular wired or wireless network technology.[0054] Mobile device 110 may be any type of mobile computing device. In some embodiments, device 110 may be a mobile phone such as a smartphone. In alternative embodiments, mobile device 110 may be a tablet or laptop computer. The embodiments are not limited to any particular type of mobile device.[0055] Mobile device 110 may be configured to communicate with an application store 102. Application store 102 provides applications to mobile devices. In some embodiments, application store 102 may be a computer system configured to provide applications specific to a particular type of device or particular operating system that may be different from an operating system executing on router 112. For example, the Google® Play Store provides applications specific to devices (smartphones, tablet computers, etc.) running the Android operating system. Similarly, the App Store® from Apple Computer, Inc. provides applications specific to the Apple® iPhone® series of smartphones and iPad® series of tablet computers that run the iOS operating system.[0056] Mobile device 110 may communicate with application store 102 using any of the networking technologies available to the mobile device 110. For example, mobile device 110 may communicate with the application store via a wireless connection 120 established with router 112 (e.g., an IEEE 802.11 wireless connection). Alternatively, mobile device 110 may communicate with application store 102 via a 3G (Third Generation) or 4G LTE (Fourth Generation Long Term Evolution) connection 122 provided by a cellular communications service provider.[0057] In some embodiments, mobile device 110 may provide a user interface for interacting with application store 102.
The user interface can provide the ability for a user to select and download an application package 104 from application store 102. The application package can include a device application 124 for execution on mobile device 110. Device application 124, when executed, causes a router application to be downloaded to router 112. The router application may be a firewall application, a parental control application, or any other application executable on router 112. The router application may be included as part of application package 104. Alternatively, the router application may be downloaded from application store 102 separately from the download of application package 104 to mobile device 110. For example, the device application 124 may issue a command to router 112 instructing router 112 to obtain the router application from application store 102. Further, the router application may be downloaded to router 112 from a router application source 116. For example, router application source 116 may be a download site available through the Internet that is maintained by a manufacturer of router 112.[0058] In some embodiments, installer 114 may receive a downloaded router application and install it on router 112. In alternative embodiments, installer 114 may receive a command from mobile device 110 that specifies a router application and instructs installer 114 to download and install the router application.[0059] Router 112 may not have a user interface for configuring router applications. Device application 124 may include a configuration interface 126. Configuration interface 126 provides a user interface that can provide configuration parameters to a router application. For example, in the case of a firewall router application, configuration interface 126 may be used to provide rules and settings for the firewall router application.
Similarly, in the case of a parental control router application, configuration interface 126 may be used to provide configuration parameters that are used by the parental control router application to filter content or determine whether access is provided to a network site through router 112.[0060] In addition to mobile device 110, a remote device 118 may be used to provide applications to router 112 in a similar manner as described above with respect to mobile device 110. Remote device 118 may be any type of computer system, including a desktop computer, laptop computer, tablet computer, smartphone, etc. Remote device 118 may also interface with application store 102 to download application packages that include a device application 124. The device application can execute on remote device 118 to cause a router application to be downloaded to router 112. Remote device 118 can be used to access router 112 remotely, i.e., over a network such as the Internet. In some embodiments, remote device 118 may create a secure network tunnel to router 112 when downloading router applications to router 112 or when configuring router 112.[0061] Further details on the operation of the above-described system will be provided below.[0062] Figures 2A-2C are block diagrams illustrating application packages according to some embodiments. Figure 2A illustrates an embodiment in which application package 104 includes both device application 124 and router application 204. In such embodiments, application package 104 may be downloaded to mobile device 110. When mobile device 110 executes device application 124, router application 204 may be extracted from application package 104 and downloaded to router 112.
As discussed above, device application 124 may be designed to execute on an operating system for mobile device 110, while router application 204 may be designed to execute on an operating system for router 112 that may be different from the operating system executing on mobile device 110.[0063] Figure 2B illustrates an embodiment in which application package 104 includes device application 124 and a router application downloader 206. In such embodiments, application package 104 may be downloaded to mobile device 110. When mobile device 110 executes device application 124, router application downloader 206 may be extracted from application package 104 and downloaded to router 112. Router application downloader 206 may be an application that runs on router 112 to determine various router characteristics and uses the router characteristics to select and download an appropriate router application for the router. For example, router application downloader 206 may determine combinations of one or more of manufacturer, type and version information for router 112 so that a router application may be downloaded that is appropriate for the manufacturer, type and version of router 112. Other information from router 112 may be used to determine an appropriate router application. For example, operating system or hardware information for router 112, such as processor type, may be used to determine an appropriate router application for router 112.[0064] Figure 2C illustrates an embodiment in which application package 104 includes device application 124. Device application 124 includes router application location data 208 that identifies a source for a router application. For example, router application location data 208 may include a Uniform Resource Locator (URL) that identifies a web site, server site, or other location where a router application is available.
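For illustration only, the selection step described for router application downloader 206 — matching router characteristics such as manufacturer, type, and version against available builds — might be sketched as follows. All names, the catalog format, and the URLs are hypothetical assumptions, not part of the disclosed system.

```python
# Hypothetical sketch of router application selection by router
# characteristics (manufacturer, model, minimum firmware version),
# as described for router application downloader 206.

# Illustrative catalog of available router application builds.
CATALOG = [
    {"manufacturer": "AcmeNet", "model": "R100", "min_fw": (2, 0),
     "url": "https://example.com/firewall-r100.pkg"},
    {"manufacturer": "AcmeNet", "model": "R200", "min_fw": (1, 5),
     "url": "https://example.com/firewall-r200.pkg"},
]

def select_router_app(manufacturer, model, firmware):
    """Return the download URL of a compatible build, or None."""
    for entry in CATALOG:
        if (entry["manufacturer"] == manufacturer
                and entry["model"] == model
                and firmware >= entry["min_fw"]):
            return entry["url"]
    return None
```

A downloader running on an "AcmeNet R100" at firmware 2.1 would thus select the R100 build, while a router with too-old firmware or an unknown model would get no match and could fall back to reporting an error.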
When mobile device 110 executes device application 124, router application location data 208 is communicated from device application 124 to router 112, informing router 112 to obtain the router application from the source indicated in router application location data 208. For example, device application 124 may communicate router application location data 208 to installer 114, which uses the location information to download and install the router application.[0065] Figure 3 is a flow diagram illustrating a method 300 for providing an application to a router according to some embodiments. Method 300 begins at block 302 with receiving an application package from an application store. As discussed above, the application package includes a device application for execution on a device such as mobile device 110. In some embodiments, the application package can be selected by a user based on a desired router application and a router type. In response to the selection, the application package is downloaded to the user's device.[0066] At block 304, the device application is executed on the device.[0067] At block 306, in some embodiments, the device application determines if the desired router application is already present on the router. The determination may include determining that the router application does not exist at all on the router. Alternatively, the determination may include determining that the router application is installed on the router, but the version of the currently installed router application is not the same as the version of the desired router application.
For example, the version of a router application currently available for execution on a router may be an outdated version, and the desired version may be a most recent release of the router application.[0068] At block 308, if the desired router application is not present on the router (or the desired version of the router application is not present), the device application causes the router application to be downloaded to the router. In some embodiments, the router application may be included in the same application package as the device application that was downloaded from the application store. In such embodiments, the device application may cause the router application to be extracted from the application package and downloaded to the router. In alternative embodiments, the device application may issue a command to the router directing the router to download the router application. The device application may supply a source for the router to use to obtain the router application.[0069] In some embodiments, before a router application is downloaded or installed, the authenticity and authorization of the device executing the device application may be checked to determine if the device application is authorized to cause the router application to be downloaded and installed on the router. The authentication and authorization may be determined using wireless network security parameters. In such embodiments, the fact that the device is successfully connected to a secured wireless network is considered sufficient to determine the authenticity and authorization for the device to cause a router application to be downloaded to the router. 
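The decision logic of blocks 306 and 308, gated by the connectivity-based authorization of paragraph [0069], might be sketched as follows. This is a minimal illustration; the function names and the tuple version encoding are assumptions, not part of the disclosed method.

```python
# Hypothetical sketch of blocks 306/308: download the router
# application only if it is absent or outdated, and only if the
# device is authorized (here, per paragraph [0069], connection to a
# secured wireless network is treated as sufficient authorization).

def needs_download(installed_version, desired_version):
    """Block 306: true if the app is absent or an older version."""
    return installed_version is None or installed_version < desired_version

def is_authorized(connected_to_secured_network):
    """Paragraph [0069]: secured-network membership as authorization."""
    return connected_to_secured_network

def should_push(installed_version, desired_version, secured):
    """Block 308 gate: push the router application to the router?"""
    return is_authorized(secured) and needs_download(
        installed_version, desired_version)
```

For example, a device on a secured network with no router application installed would trigger a download, while the same device with the current version already installed would not.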
In alternative embodiments, other mechanisms such as a user name and password combination or security certificates may be used to determine that a device application is authentic and authorized to cause router applications to be downloaded to a router.[0070] At block 310, in some embodiments, the device application presents a configuration interface that may be used to supply configuration parameters for the newly installed router application. For example, a virus scanning router application may utilize configuration parameters that specify a level of scanning to be performed, or configuration parameters that specify file types, packet types, etc. that are to be scanned. The configuration interface presented by the device application may be used to provide such configuration parameters for the router application.[0071] In embodiments where the device application is executed on a remote device and may communicate with the router via public networks such as the Internet, a secure network tunnel may be established between the remote device and the router. The secure network tunnel provides a secure mechanism for downloading a router application to the router. Additionally, the secure network tunnel provides for secure transmission of configuration parameters and passwords so that a malicious user may be prevented from intercepting such information. Examples of secure network tunneling protocols include Virtual Private Network (VPN) and Secure Shell (SSH) tunneling protocols.[0072] In addition to the security parameters that may be used to determine if the device application is authorized to download the router application to the router, the router application itself may include security mechanisms to restrict access to the router application. In some embodiments, the router application may restrict access to the application by requiring entry of a valid user name and password.
In alternative embodiments, the router application may restrict access to the router application to the mobile device that caused the router application to be downloaded to the router. For example, the router application may store identification information for the device that caused the router application to be downloaded to the router. In some embodiments, the identification information may be a Media Access Control (MAC) address of the mobile device. The router may use the identification information to limit access to the router application to the mobile device. For example, the router may deny access to devices where the MAC address of the device seeking to access the router application does not match that of the mobile device that originally caused the router application to be downloaded to the router.[0073] Figures 4-6 are sequence diagrams illustrating example sequences of operations for providing an application to a router according to embodiments. As will be appreciated by one of ordinary skill in the art having the benefit of the disclosure, the examples that follow are merely possible sequences of operations, and variations and other sequences are possible and within the scope of the present disclosure. [0074] Figure 4 illustrates a sequence of operations 400 where an application package 104 received by a mobile device 110 from an application store 102 includes both a device application 124 and a router application 204. The example sequence of operations begins with operation 402, where mobile device 110 issues a query to application store 102 for a router application. In some embodiments, the query may include user-provided parameters such as parameters describing the type of router application that the user desires. In some embodiments, a router type may also be included in the query to limit the query results to router applications that are compatible with the router type.
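The MAC-address-based access restriction described in paragraph [0072] might be sketched as follows. The class and method names are hypothetical; the only behavior taken from the description is storing the installing device's identification information and denying access on a mismatch.

```python
# Hypothetical sketch of the access restriction of paragraph [0072]:
# the router application remembers the MAC address of the device that
# caused it to be installed and denies access to any other device.

class RouterApp:
    def __init__(self):
        self.owner_mac = None

    def install(self, installer_mac):
        # Store identification information for the installing device.
        self.owner_mac = installer_mac

    def allow_access(self, requester_mac):
        # Deny access when the requester's MAC does not match the
        # MAC of the device that originally caused the download.
        return requester_mac == self.owner_mac
```

A production implementation would of course need to handle MAC spoofing, which is one reason the description also mentions user name/password and certificate mechanisms as alternatives.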
The router type parameter may be provided by the user in some embodiments. In alternative embodiments, the mobile device 110 may query the router 112 for its router type, or may otherwise determine the router type from connection information available to mobile device 110.[0075] At operation 404, the application store 102 returns results of the query from operation 402. The query result may comprise a list of applications that satisfy the query parameters received as part of the query.[0076] At operation 406, mobile device 110 receives a selection of a router application from the applications in the query results provided at operation 404. For example, a user may provide a selection of an application via a user interface provided on mobile device 110 that displays the query results and provides a selection mechanism for selecting one or more of the query results.[0077] At operation 408, in response to receiving the selection, application store 102 downloads to mobile device 110 an application package that corresponds to the selected router application. For the example illustrated in Figure 4, the application package may be one such as described with reference to Figure 2A that contains both a device application 124 and the selected router application 204.[0078] Subsequent to completion of the download, a user may select device application 124 for execution on mobile device 110. In response to the user's selection, mobile device 110 can begin executing the device application. Device application 124 may detect that it is communicably coupled to router 112 (e.g., it is within range of router 112 and has established a network connection with router 112). At operation 410, device application 124 can download router application 204 to router 112.
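The device-side sequence of Figure 4 just described (operations 402-410: query the store, download the package, and push the bundled router application when the router is reachable) might be sketched as follows. The store and router objects here are simple stand-ins, not a real application-store API.

```python
# Hypothetical sketch of the Figure 4 device-side flow. FakeStore and
# FakeRouter are illustrative stand-ins for application store 102 and
# router 112.

class FakeStore:
    def __init__(self, apps):
        self._apps = apps

    def search(self, query):                 # operations 402/404
        return [a for a in self._apps if query in a]

    def download(self, name):                # operations 406/408
        return {"router_app": name + ".pkg"}

class FakeRouter:
    def __init__(self):
        self.installed = []

    def is_connected(self):
        return True

    def receive(self, pkg):                  # operation 410
        self.installed.append(pkg)

def provision(store, router, query):
    """Query the store, download a package, push the router app."""
    results = store.search(query)
    if not results:
        return False
    package = store.download(results[0])
    if router.is_connected():
        router.receive(package["router_app"])
        return True
    return False
```

Running `provision(FakeStore(["firewall"]), FakeRouter(), "firewall")` would push `firewall.pkg` to the stand-in router; a query with no matches returns without pushing anything.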
After router application 204 has been downloaded to router 112, router application 204 may be installed on router 112 and begin execution.[0079] In some embodiments, at operation 412, device application 124 may present a user interface allowing a user to provide configuration parameters for router application 204. In alternative embodiments, configuration parameters may be retrieved and provided to router application 204 without user intervention. For example, device application 124 may receive configuration parameters from a configuration file included as part of application package 104. Alternatively, the configuration file may be read from mobile device 110 or from router 112.[0080] Figure 5 illustrates a sequence of operations 500 where an application package 104 received by a mobile device 110 from an application store 102 includes a device application 124, but does not include a router application. Operations 502, 504, 506, and 508 are the same as operations 402, 404, 406, and 408 described above with reference to Figure 4.[0081] At operation 510, device application 124 issues a command to router 112 instructing router 112 to retrieve router application 204. The command may include parameters identifying the router application to be retrieved.[0082] At operation 512, router 112 requests router application 204 from application store 102.[0083] At operation 514, the requested router application 204 is downloaded from the application store 102 to router 112. After router application 204 has been downloaded to router 112, router application 204 may be installed on router 112 and begin execution.[0084] In some embodiments, at operation 516, device application 124 may present a user interface allowing a user to provide configuration parameters for router application 204. In alternative embodiments, configuration parameters may be retrieved and provided to the router application without user intervention.
For example, device application 124 may receive configuration parameters from a configuration file included as part of application package 104. Alternatively, the configuration file may be read from mobile device 110 or from router 112. [0085] Figure 6 illustrates a sequence of operations 600 where an application package 104 received by a mobile device 110 from an application store 102 includes a device application 124, and the device application 124 includes or has access to data identifying a source for router application 204. For purposes of the example illustrated in Figure 6, the application package may be one such as described with reference to Figure 2C. Operations 602, 604, 606, and 608 are the same as operations 402, 404, 406, and 408 described above with reference to Figure 4.[0086] At operation 610, device application 124 sends information identifying a source for router application 204 to router 112. As noted above, the source location may be a URL identifying a web site or server that can provide router application 204 to router 112.[0087] At operation 612, router 112 uses the source location information to identify the router application source 116 and issues a request for router application 204 to router application source 116.[0088] At operation 614, router application source 116 downloads router application 204 to router 112. Router 112 may then install and begin execution of router application 204.[0089] In some embodiments, at operation 616, device application 124 may present a user interface allowing a user to provide configuration parameters for router application 204. In alternative embodiments, configuration parameters may be retrieved and provided to the router application without user intervention. For example, device application 124 may receive configuration parameters from a configuration file included as part of application package 104.
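The three distribution strategies of Figures 4-6 differ only in what the device application hands to the router: the router application itself (Figure 4), a retrieval command (Figure 5), or location data naming a source (Figure 6). A compact, hedged sketch of that contrast — with illustrative strategy names and a stand-in `fetch` callable — might look like:

```python
# Hypothetical sketch contrasting the distribution strategies of
# Figures 4-6. `fetch` stands in for a download from an application
# store or a source URL; all names are illustrative.

def deliver(strategy, payload, fetch):
    """Return the router application package the router ends up with."""
    if strategy == "bundled":    # Figure 4: app pushed directly
        return payload
    if strategy == "command":    # Figure 5: router fetches from store
        return fetch("app-store/" + payload)
    if strategy == "location":   # Figure 6: router fetches from URL
        return fetch(payload)
    raise ValueError("unknown strategy")
```

In each case the router ends up with the same router application; the strategies trade off package size on the device against the router's need for network access to a store or source.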
Alternatively, the configuration file may be read from mobile device 110 or from router 112.[0090] Embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, embodiments of the inventive subject matter may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium. The described embodiments may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic device(s)) to perform a process according to embodiments, whether presently described or not, since every conceivable variation is not enumerated herein. A machine readable medium includes any mechanism for storing or transmitting information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions.
In addition, embodiments may be embodied in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.), or wireline, wireless, or other communications medium.[0091] Computer program code for carrying out operations of the embodiments may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN), a personal area network (PAN), or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).[0092] Figure 7 is an example block diagram of one embodiment of an electronic device 700 including a router distribution mechanism in accordance with various embodiments of this disclosure. In some implementations, the electronic device 700 may be one of a laptop computer, a netbook, a mobile phone, a powerline communication device, a personal digital assistant (PDA), or other electronic systems. The electronic device 700 includes a processor unit 702 (possibly including multiple processors, multiple cores, multiple nodes, and/or implementing multi-threading, etc.). The electronic device 700 includes a memory unit 706. The memory unit 706 may be system memory (e.g., one or more of cache, SRAM, DRAM, zero capacitor RAM, Twin Transistor RAM, eDRAM, EDO RAM, DDR RAM, EEPROM, NRAM, RRAM, SONOS, PRAM, etc.) 
or any one or more of the above already described possible realizations of machine-readable media. The electronic device 700 also includes a bus 710 (e.g., PCI, ISA, PCI-Express, HyperTransport®, InfiniBand®, NuBus, AHB, AXI, etc.), and network interface 704, that may be at least one of a wireless network interface (e.g., a WLAN interface, a Bluetooth® interface, a WiMAX interface, a ZigBee® interface, a Wireless USB interface, etc.) or a wired network interface. In one example embodiment the first network interface 704 may comprise a 2.4 GHz or 5 GHz wireless interface capable of utilizing the IEEE 802.11a, 802.11b, 802.11g, 802.11n, or 802.11ac protocol. The electronic device 700 also includes application 712. Application 712 can be downloaded onto device 700 via network interface 704.[0093] In implementations where device 700 is a mobile device or other computing device, application 712 may be a device application 124 that can be downloaded via network interface 704 and can be executed by processor unit 702. As described above in Figures 1-6, a device application may include functionality described above to download a router application to a router device. Additionally, electronic device 700 may include a display unit (not shown). The display unit may be used to provide a user interface for selecting application packages and for configuring a router application 204 through device application 124.[0094] In implementations where device 700 is a router 112, application 712 may be a router application 204 that can be downloaded via network interface 704 for execution by processor unit 702. Further, device 700 may include installer 114 that can be executed by processor unit 702.[0095] Further, realizations may include fewer or additional components not illustrated in Figure 7 (e.g., video cards, audio cards, additional network interfaces, peripheral devices, etc.).
The processor unit 702, the memory unit 706, and the network interface 704 are coupled to the bus 710. Although illustrated as being coupled to the bus 710, the memory unit 706 may be coupled to the processor unit 702.[0096] While the embodiments are described with reference to various implementations and exploitations, it will be understood that these embodiments are illustrative and that the scope of the inventive subject matter is not limited to them. In general, techniques for providing a router application to a router as described herein may be implemented with facilities consistent with any hardware system or hardware systems. Many variations, modifications, additions, and improvements are possible.[0097] Plural instances may be provided for components, operations or structures described herein as a single instance. Finally, boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the inventive subject matter. In general, structures and functionality presented as separate components in the exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the inventive subject matter.
An integrated device package is disclosed. The integrated device package can include a package substrate having a plurality of contact pads on a first side of the package substrate, the plurality of contact pads configured to electrically connect to a sensor assembly. The package can include a radiation shield attached to a second side of the package substrate by way of a first adhesive, the first side opposite the second side. The package can include an integrated device die attached to the radiation shield by way of a second adhesive. The integrated device die can comprise sensitive active electronic circuitry in a sensitive active region of the integrated device die. A molding compound can be disposed over the integrated device die and the radiation shield.
1. An integrated device package, comprising:
a package substrate having a plurality of contact pads on a first side of the package substrate, the plurality of contact pads configured to electrically connect to a sensor assembly;
a radiation shield attached to a second side of the package substrate by way of a first adhesive, the first side opposite the second side;
an integrated device die attached to the radiation shield by way of a second adhesive, the integrated device die comprising sensitive active electronic circuitry in a sensitive active region of the integrated device die, wherein the integrated device die is configured to process signals transmitted from the sensor assembly to the integrated device die through the contact pads, and wherein the integrated device die is configured to transmit the processed signals to an external device through the contact pads; and
a molding compound disposed over the integrated device die and the radiation shield.
2. The package of claim 1, wherein no electrical connectors extend from the package substrate through the molding compound to an outer surface of the molding compound.
3. The package of claim 1, wherein a lateral footprint of the radiation shield is larger than a lateral footprint of the sensitive active region of the integrated device die.
4. The package of claim 1, wherein a lateral footprint of the radiation shield is larger than a lateral footprint of the integrated device die.
5. The package of claim 1, wherein a lateral footprint of the radiation shield is smaller than a lateral footprint of the integrated device die.
6. The package of any one of claims 1 to 5, wherein the radiation shield comprises tungsten.
7. The package of any one of claims 1 to 6, wherein the radiation shield comprises a metal with a density in a range of 9 g/cm3 to 22 g/cm3.
8. The package of any one of claims 1 to 7, wherein a thickness of the radiation shield is in a range of 0.4 mm to 1.2 mm.
9. The package of any one of claims 1 to 8, wherein the integrated device die is connected to the package substrate through one or more bonding wires.
10. The package of any one of claims 1 to 9, further comprising a spacer mounted to the integrated device die by a third adhesive.
11. The package of claim 10, further comprising an element mounted to the spacer by a fourth adhesive.
12. The package of claim 11, wherein the element is narrower than the spacer.
13. The package of any one of claims 11 to 12, wherein the element comprises a second integrated device die.
14. The package of any one of claims 11 to 12, wherein the element comprises a second radiation shield mounted to the spacer opposite the integrated device die, the radiation shield positioned to shield electromagnetic radiation on a first side of the integrated device die, the second radiation shield positioned to shield electromagnetic radiation on a second side of the integrated device die, the first side opposite the second side.
15. The package of any one of claims 1 to 14, further comprising a mounting structure on the integrated device die, the mounting structure comprising a film.
16. The package of claim 15, further comprising an element mounted to the mounting structure.
17. The package of any one of claims 15 to 16, wherein the film has a flowable state and a cured state.
18. The package of any one of claims 15 to 17, wherein the film comprises a polymer.
19. The package of any one of claims 1 to 18, wherein the package substrate comprises an insulating substrate with conductive routing traces that electrically connect the contact pads with corresponding bond pads on the second side of the package substrate.
20. An integrated device package, comprising:
a package substrate comprising an insulating substrate with conductive routing traces, the package substrate having a plurality of contact pads on a first side of the package substrate, the plurality of contact pads configured to electrically connect to a sensor assembly;
a radiation shield attached to a second side of the package substrate by way of a first adhesive, the first side opposite the second side; and
an integrated device die attached to the radiation shield by way of a second adhesive, the integrated device die comprising sensitive active electronic circuitry in a sensitive active region of the integrated device die, wherein the integrated device die is configured to process signals transmitted from the sensor assembly to the integrated device die through the contact pads, and wherein the integrated device die is configured to transmit the processed signals to an external device through the contact pads.
21. The integrated device package of claim 20, further comprising a molding compound disposed over the integrated device die and the radiation shield.
22. The integrated device package of claim 20, further comprising a package cover mounted to the package substrate to define a cavity in which the integrated device die and the radiation shield are disposed.
23. The integrated device package of any one of claims 20 to 22, further comprising an element mounted to the integrated device die.
24. The integrated device package of claim 23, wherein the element comprises a second radiation shield.
25. The integrated device package of claim 23, wherein the element comprises a second integrated device die.
26. The integrated device package of claim 23, wherein the element comprises a spacer, and the integrated device package further comprises a second integrated device die mounted to the spacer.
27. A sensor module, comprising:
an integrated device package including an integrated device die and a radiation shield;
a sensor assembly including a sensor substrate and a sensor chip mounted to a front surface of the sensor substrate; and
an electrical connector on a back surface of the sensor substrate, the electrical connector configured to electrically connect to an external device, wherein the integrated device package is electrically connected to the electrical connector through the sensor substrate.
28. The sensor module of claim 27, wherein the integrated device package includes a package substrate having a first side and a second side opposite the first side, the integrated device die and the radiation shield disposed on the second side of the package substrate, wherein contact pads on the first side of the package substrate are physically and electrically connected to contact pads on the back surface of the sensor substrate.
29. The sensor module of claim 27 or 28, wherein the radiation shield is attached to the second side of the package substrate by a first adhesive, and wherein the integrated device die is attached to the radiation shield by a second adhesive.
30. The sensor module of claim 28, wherein the integrated device die is connected to the package substrate by wire bonding.
31. The sensor module of claim 27 or 28, wherein the integrated device die is mounted to the first side of the package substrate by a flip-chip connection, and wherein the radiation shield is attached to the integrated device die by an adhesive.
32. The sensor module of any one of claims 27 to 31, further comprising a molding compound disposed over the integrated device die and the radiation shield.
33. The sensor module of any one of claims 27 to 31, further comprising a package cover mounted to the package substrate to define a cavity in which the integrated device die and the radiation shield are disposed.
34. An integrated device package, comprising:
a package substrate;
an integrated device die mounted to the package substrate by a flip-chip connection comprising a plurality of solder balls between the package substrate and the integrated device die; and
a radiation shield attached to the integrated device die by an adhesive.
35. The package of claim 34, further comprising a molding compound disposed over the integrated device die and the radiation shield.
36. The package of claim 34, further comprising a package cover mounted to the package substrate to define a cavity in which the integrated device die and the radiation shield are disposed.
Shielded integrated device package

Cross-reference to related applications

This application claims priority to U.S. Provisional Patent Application No. 62/776,340, filed December 6, 2018, the entire contents of which are incorporated by reference herein in their entirety for all purposes.

Technical field

The field relates to shielded integrated device packages.

Background

In various types of integrated device packages, electromagnetic radiation (such as X-rays) impinging on an integrated device die can damage the circuitry of the die. For example, in some medical imaging applications, such as X-ray imaging or computed tomography (CT) imaging, radiation can irradiate the die and can damage or otherwise degrade its performance. Accordingly, there is a continuing need to reduce or prevent damage to integrated device dies caused by electromagnetic radiation.

Summary of the invention

In one embodiment, an integrated device package is disclosed. The integrated device package may include a package substrate having a plurality of contact pads on a first side of the package substrate, the plurality of contact pads configured to electrically connect to a sensor assembly. The integrated device package may include a radiation shield attached to a second side of the package substrate by way of a first adhesive, the first side opposite the second side. The integrated device package may include an integrated device die attached to the radiation shield by way of a second adhesive. The integrated device die may include sensitive active electronic circuitry in a sensitive active region of the integrated device die, wherein the integrated device die is configured to process signals transmitted from the sensor assembly to the integrated device die through the contact pads, and wherein the integrated device die is configured to transmit the processed signals to an external device through the contact pads. A molding compound can be disposed over the integrated device die and the radiation shield.

In another embodiment, an integrated device package is disclosed. The integrated device package may include a package substrate comprising an insulating substrate with conductive routing traces, the package substrate having a plurality of contact pads on a first side of the package substrate, the plurality of contact pads configured to electrically connect to a sensor assembly. The integrated device package may include a radiation shield attached to a second side of the package substrate by way of a first adhesive, the first side opposite the second side. The integrated device package may include an integrated device die attached to the radiation shield by way of a second adhesive, the integrated device die including sensitive active electronic circuitry in a sensitive active region of the integrated device die, wherein the integrated device die is configured to process signals transmitted from the sensor assembly to the integrated device die through the contact pads, and wherein the integrated device die is configured to transmit the processed signals to an external device through the contact pads.

In another embodiment, a sensor module is disclosed. The sensor module may include an integrated device package including an integrated device die and a radiation shield. The sensor module may include a sensor assembly including a sensor substrate and a sensor chip mounted to a front surface of the sensor substrate.
The sensor module may include an electrical connector on a back surface of the sensor substrate, the electrical connector configured to electrically connect to an external device, wherein the integrated device package is electrically connected to the electrical connector through the sensor substrate.

In another embodiment, an integrated device package is disclosed. The integrated device package may include a package substrate and an integrated device die mounted to the package substrate by a flip-chip connection including a plurality of solder balls between the package substrate and the integrated device die. The integrated device package may include a radiation shield attached to the integrated device die by an adhesive.

Description of the drawings

Embodiments of the present disclosure will now be described, by way of non-limiting examples, with reference to the accompanying drawings.

FIG. 1A is a schematic side cross-sectional view of a sensor module according to an embodiment.

FIG. 1B is a schematic perspective view of a sensor module according to various embodiments.

FIG. 2 is a schematic side cross-sectional view of an integrated device package according to various embodiments.

FIG. 3 is a schematic side cross-sectional view of a sensor module according to various embodiments.

FIG. 4 is a schematic side cross-sectional view of an integrated device package according to various embodiments.

FIG. 5 is a photomicrograph showing an example integrated device package, similar to the package shown in FIG. 1A.

FIG. 6 is a schematic side cross-sectional view of an integrated device package according to another embodiment.

FIG. 7 is a schematic side cross-sectional view of an integrated device package including a package cover mounted to a package substrate to define a cavity package.

FIG.
8 is a schematic side cross-sectional view of an integrated device package including a cavity package according to another embodiment.

FIG. 9 is a schematic side cross-sectional view of an integrated device package including a cavity package with a radiation shield on top of the integrated device die.

FIG. 10 is a schematic side cross-sectional view of an integrated device package including a cavity package in which a radiation shield is disposed between two integrated device dies.

Detailed description

The various embodiments disclosed herein relate to sensor modules configured for use in imaging systems, such as digital X-ray imaging systems, computed tomography (CT) imaging systems, ultrasound imaging systems, or any other suitable imaging systems. For example, the shielding devices and techniques disclosed herein may be arranged to block or prevent harmful electromagnetic radiation from reaching and damaging integrated device dies, such as integrated circuit dies with radiation-sensitive active processing circuitry. The various shielding embodiments disclosed herein can also be used in other applications susceptible to radiation damage, such as aerospace applications calling for atmospheric radiation protection and bidirectional shielding.

FIG. 1A is a schematic side cross-sectional view of a sensor module 1 according to an embodiment. The sensor module 1 may include a sensor assembly 3 and an integrated device package 2 mounted to the sensor assembly 3. An illumination source 6 can be provided, such as an X-ray source or any other suitable source of electromagnetic radiation, and the electromagnetic radiation can be directed to the front side 15 of the sensor assembly 3. In various embodiments, although not shown here, an object (such as a human patient, or any other suitable target object) may be positioned between the illumination source 6 and the sensor assembly 3. More details about the sensor assembly and the components provided for it can be found in U.S. Patent Nos.
8,829,454, 9,116,022, and 10,340,302, the entire contents of each of which are hereby incorporated by reference in their entirety for all purposes.

The sensor assembly 3 may include a sensor substrate 4 and one or more sensor chips 5 mounted to the front surface of the sensor substrate 4. The sensor substrate 4 may include any suitable type of substrate having a non-conductive or insulating base substrate with conductive routing traces (for example, at least partially embedded traces), such as a laminate substrate, a printed circuit board (PCB) substrate, a semiconductor interposer, a flexible substrate including a polymer with embedded traces, or any other suitable substrate. In various embodiments, the conductive routing traces may transmit signals laterally and vertically through the substrate 4. The sensor chip 5 may include a photodiode array (PDA), which has a plurality of photosensitive elements that convert electromagnetic radiation into electric current. Although not shown, a radiation regulator, such as a filter or a scintillator, may be provided on the front side 15 of the sensor assembly 3. The sensor chip 5 can accordingly convert the light impinging on the PDA into an electrical signal, which can be transmitted to the conductive traces in the sensor substrate 4. In some embodiments, the sensor chip 5 may be electrically connected to the sensor substrate 4 through conductive adhesives such as solder bumps, anisotropic conductive film (ACF), conductive epoxy, or the like.

The integrated device package 2 can be mounted on the back side 16 of the sensor assembly 3, for example, on the back surface of the sensor substrate 4. In the illustrated embodiment, the package 2 may include a package substrate 7 having a first side or lower surface that is electrically and mechanically connected to the sensor substrate 4 by a conductive adhesive, for example by a plurality of solder balls 14.
Electrical communication is provided between the contact pads (not shown) of the sensor substrate 4 and the contact pads 31 of the package substrate 7. Furthermore, the package 2 may have a smaller lateral footprint than the corresponding lateral footprint of the sensor chip 5. Advantageously, the relatively small lateral footprint of the package 2 enables multiple die packages to be provided on the back of the sensor module 1.

At least a part of the electromagnetic radiation (for example, X-rays) can pass through the sensor assembly 3 and the package substrate 7, and the electronic components of the package 2 may be damaged if the radiation strikes active or sensitive circuitry of those components. Accordingly, various embodiments disclosed herein provide an electromagnetic shield 8 that is selected to prevent electromagnetic radiation (for example, X-rays) from striking various electronic components of the package 2. The package substrate 7 may include a substrate with a non-conductive or insulating base substrate with conductive routing traces 36 (for example, at least partially embedded traces), such as a laminate substrate, a printed circuit board (PCB) substrate, a semiconductor interposer, a flexible substrate including a polymer with embedded traces, or any other suitable substrate. In various embodiments, the conductive routing traces 36 can transmit signals laterally and vertically through the package substrate 7. For example, the traces 36 of the package substrate 7 may electrically connect the contact pads 31 on the first side of the package substrate 7 with the corresponding bonding pads 35 on the second side of the package substrate 7.

The shield 8 may be mounted to the second side or upper surface of the package substrate 7 by an adhesive 9. The adhesive 9 may include any suitable type of adhesive, such as a non-conductive or conductive adhesive or epoxy.
The integrated device die 10 may be mounted to the upper surface of the shield 8 through an adhesive 11, which may be the same as or different from the adhesive 9. The pads of the integrated device die 10 may be electrically connected to the corresponding contact pads on the upper surface of the package substrate 7 through one or more bonding wires 12. A molding compound 13 may be provided over the exposed portions of the integrated device die 10, the bonding wires 12, the shield 8, and the package substrate 7 to encapsulate these components in the package 2.

The integrated device die 10 may include active processing circuitry configured to process electrical signals (for example, analog signals) converted by the sensor chip 5 and transferred to the die 10 through the sensor substrate 4, the solder balls 14, the conductive traces in the package substrate 7, and the bond wires 12. The integrated device die 10 can process these signals in any suitable manner, including, for example, signal filtering, analog-to-digital conversion, and the like. The signals processed by the integrated device die 10 can be transferred from the package 2 to a larger electronic system for presentation on a display or further processed in other ways to analyze the imaged object.

The electromagnetic shield 8 may be sized, and may include materials selected, to effectively prevent destructive radiation (for example, X-rays) from striking the active circuitry of the integrated device die 10. In some embodiments, the shield 8 may be wider than the integrated device die 10. For example, the lateral footprint of the shield 8 may be wider than the corresponding lateral footprint of the die 10, so that the die 10 is located in the shadow of the shield 8 relative to the illumination source 6, as shown in the embodiment of FIG. 1A. In other embodiments, the width of the die 10 may be equal to or greater than the width of the shield 8 (see FIG. 5).
In such an embodiment, the active circuits in the die 10 may be arranged in the shadow of the shield 8, so that even if part of the die 10 lies outside the coverage area of the shield 8, the sensitive or active circuits are still disposed within the lateral footprint of the shield 8.

The shield 8 may include any suitable type of shield that can effectively block or sufficiently limit destructive radiation (for example, X-rays) from striking the active circuitry of the die 10. The shield 8 may include a material and a shape configured to block at least 75%, at least 85%, at least 90%, or at least 95% of X-ray radiation impinging on the sensor module 1. The shield 8 may include a material and a shape configured to block 75% to 100% or 90% to 100% of X-ray radiation impinging on the sensor module 1. For example, the shield 8 may include a metal with a density greater than 9 g/cm3, greater than 10 g/cm3, or greater than 15 g/cm3. In some embodiments, the shield may include a metal with a density in the range of 9 g/cm3 to 22 g/cm3. In various embodiments, the shield 8 may include tungsten, lead, or molybdenum. The thickness of the shield 8 can be appropriately selected based on the material composition of the shield 8. For example, a shield with a higher percentage of high-density metal (e.g., tungsten) can be made thinner than another shield formed with a lower percentage of high-density metal. In various embodiments, the thickness of the shield 8 may be at least 0.4 mm, or at least 0.5 mm, to provide sufficient shielding for the die 10. For example, the thickness of the shield 8 can be 0.4 mm to 3 mm, 0.4 mm to 2 mm, 0.4 mm to 1.2 mm, 0.4 mm to 1 mm, 0.45 mm to 1 mm, 0.5 mm to 1 mm, 0.45 mm to 0.8 mm, 0.5 mm to 0.8 mm, 0.45 mm to 0.65 mm, or 0.7 mm to 0.9 mm.

The height or thickness of the other components of the package 2 can be any suitable value.
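The blocking percentages and thickness ranges discussed above can be related through a simple Beer-Lambert attenuation estimate. The sketch below is illustrative only: the mass attenuation coefficient (about 3.7 cm²/g for tungsten near 60 keV) and the tungsten density are assumed textbook-style values, not figures taken from this disclosure.

```python
import math

# Illustrative assumed values (not from this disclosure):
MU_OVER_RHO_W = 3.7    # cm^2/g, mass attenuation coefficient of tungsten, ~60 keV
RHO_W = 19.3           # g/cm^3, density of tungsten

def blocked_fraction(thickness_mm, mu_over_rho=MU_OVER_RHO_W, rho=RHO_W):
    """Fraction of incident X-ray intensity stopped by a shield (Beer-Lambert)."""
    mu = mu_over_rho * rho          # linear attenuation coefficient, 1/cm
    t_cm = thickness_mm / 10.0
    return 1.0 - math.exp(-mu * t_cm)

def min_thickness_mm(target_blocked, mu_over_rho=MU_OVER_RHO_W, rho=RHO_W):
    """Smallest shield thickness that blocks the target fraction of radiation."""
    mu = mu_over_rho * rho
    return 10.0 * -math.log(1.0 - target_blocked) / mu

print(f"0.5 mm tungsten shield blocks {blocked_fraction(0.5):.1%}")
print(f"thickness needed to block 95%: {min_thickness_mm(0.95):.2f} mm")
```

Under these assumed values, a 0.5 mm tungsten shield stops well over 95% of the incident intensity, and the minimum thickness to block 95% comes out slightly above 0.4 mm, which is consistent with the "at least 0.4 mm" guidance above.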
In various embodiments, the height of the molding compound may range from 1 mm to 1.25 mm (for example, about 1.140 mm in one embodiment). The thickness of the bonding wire 12 may be in the range of 15 μm to 40 μm (for example, about 25.4 μm, or about 20 μm, in various embodiments). The thickness of the integrated device die 10 may be in the range of 80 μm to 120 μm, or in the range of 90 μm to 110 μm (for example, approximately 101.6 μm in various embodiments). The respective thicknesses of the adhesives 9, 11 may be in the range of 20 μm to 30 μm (for example, about 25.4 μm in some embodiments). The thickness of the substrate 7 may be in the range of 300 μm to 400 μm (for example, about 360 μm in some embodiments). The height of the solder balls 14 may be in the range of 200 μm to 300 μm (for example, about 240 μm in some embodiments). It should be understood that the heights and thicknesses provided above are only examples and that any suitable height or thickness may be adapted to a particular packaging arrangement.

FIG. 1B is a schematic perspective view of the sensor module 1 according to various embodiments. Unless otherwise stated, the components of FIG. 1B may be the same as or substantially similar to the same-numbered components of FIG. 1A. For example, the sensor module 1 may include the sensor substrate 4. In FIG. 1B, a plurality of (for example, two) sensor chips 5 may be mounted to the sensor substrate 4. Any suitable number of sensor chips 5 can be mounted on the sensor substrate 4 (for example, only one sensor chip 5, or more than two sensor chips 5, may be mounted on the substrate 4). In the embodiment of FIG. 1B, one or more integrated device packages 2 may be mounted on the back side of the sensor substrate 4, for example, through solder balls or other conductive adhesives. In FIG.
1B, two packages 2 are shown in the form of BGA packages, but it should be understood that any suitable number of packages 2 can be mounted on the sensor substrate 4. The package 2 may include any of the packages 2 disclosed herein. As explained herein, the package 2 may include a radiation shield (for example, the radiation shield 8 of FIG. 1A) to prevent incident radiation from damaging the integrated device die in the package 2.

A heat sink 40 may be provided on the package 2 to transfer heat away from the package 2 and other heat-generating components of the sensor module 1. The heat sink 40 may include a recess 42 along a side 43 of the heat sink 40. In the embodiment of FIG. 1B, an electrical connector 44 may be mounted and electrically connected to the back surface of the sensor substrate 4. The connector 44 extends outward from the sensor substrate 4 through the recess 42 of the heat sink 40. In the illustrated embodiment, the connector 44 is mounted to the back side of the sensor substrate 4, laterally offset from the package 2. The connector 44 may be configured to electrically connect to an external device or system. For example, the connector 44 can be easily connected, such as by hand and without tools, to a cable or other electrical interface to send and receive signals to and from an external device or system. Advantageously, the connector 44 can transmit and/or receive signals to or from the integrated device package 2. In some embodiments, the connector 44 may additionally or alternatively send and/or receive signals to or from the sensor chip 5. Using a separate connector 44 mounted to the sensor substrate 4 can avoid the use of through-mold vias (TMVs) or other connectors on or inside the package 2. The connector 44 may accordingly provide a common electrical input/output (I/O) interface for electrical communication between an external device or system and the integrated device package 2 (and the sensor chip 5).
Advantageously, users can plug in cables of different lengths and/or types to connect to external devices, such as electronic equipment in larger systems, for example computed tomography (CT) systems. The modules can be installed at different angles about the focal point of the illumination (e.g., X-ray) source. The connector 44 can accordingly improve the assembly of the system.

Therefore, in FIG. 1B, the integrated device die 10 may be configured to process signals transmitted from the sensor assembly 3 to the integrated device die 10 through the contact pads 31 on the first side of the package substrate 7. The integrated device die 10 may be configured to transmit the processed signals to an external device through the contact pads 31 on the first side, which is opposite to the second side on which the shield 8 is mounted. In some embodiments, there is no electrical connector (for example, no TMV) extending from the package substrate 7 through the molding compound 13 to the outer surface of the molding compound 13; instead, this connection may be made through the sensor substrate 4.

FIG. 2 is a schematic side cross-sectional view of an integrated device package 2 according to another embodiment. Unless otherwise stated, the components of FIG. 2 may be the same as or substantially similar to the same-numbered components of FIGS. 1A-1B. For example, like the embodiment of FIG. 1A, the package 2 of FIG. 2 may include a shield 8 attached to the package substrate 7 by an adhesive 9, and the integrated device die 10 may be mounted to the shield 8 through an adhesive 11. However, unlike the embodiment of FIG. 1A, in the embodiment of FIG. 2, an adhesive 17 may be used to adhere a component including a spacer 18 to the integrated device die 10. An element 20 can be mounted to the spacer 18 with an adhesive 19. In the illustrated embodiment, the element 20 may include a second integrated device die configured to process additional signals converted by the sensor assembly and transmitted to the element 20 through bonding wires 21.
In the illustrated embodiment, the width or lateral footprint of the element 20 is smaller than the width or lateral footprint of the integrated device die 10. In addition, the element 20 may have a width or lateral footprint that is smaller than the width or lateral footprint of the spacer 18.

However, in other embodiments, the element 20 may include a second electromagnetic radiation shield, the function of which is substantially similar to that of the shield 8. In such an embodiment, the bonding wires 21 may be omitted. As with the shield 8, when the element 20 includes a second shield, the element 20 may be wider than the active circuitry in the integrated device die 10 (or in the spacer 18, if the spacer 18 includes an integrated circuit die), so that the active circuitry (whether in the die 10 or in the spacer 18) is located in the shadow of the shielding element 20. Beneficially, using the element 20 as a radiation shield can provide a bidirectional shielding capability to protect the integrated device die 10 (or spacer 18) from electromagnetic radiation.

The spacer 18 may include any suitable type of component that vertically spaces the element 20 above the die 10 and adheres to the die 10 with an adhesive. The spacer 18 can vertically space the loop of the bonding wire 21 above the loop of the bonding wire 12 to prevent the bonding wire 21 from contacting and short-circuiting the bonding wire 12, and vice versa. In some embodiments, the spacer 18 may include an inorganic substrate, such as a dummy block of semiconductor material (e.g., silicon) in which no active circuits are patterned. In other embodiments, the spacer 18 may include active circuitry for providing additional processing capabilities to the package 2. The spacer 18 may have any suitable thickness. In various embodiments, the thickness of the spacer 18 may be in the range of 100 μm to 200 μm (for example, about 152.4 μm in some embodiments).

FIG.
3 is a schematic side cross-sectional view of the sensor module 1 according to various embodiments. As in FIG. 1A, the sensor module 1 may include a sensor assembly 3 and an integrated device package 2 mounted to the sensor assembly 3. Unless otherwise stated, the components of FIG. 3 may be the same as or substantially similar to the same-numbered components of FIGS. 1A-2. For example, the package 2 of FIG. 3 may be substantially similar to the package 2 of FIG. 2. As in FIG. 2, in FIG. 3, the spacer 18 is attached to the integrated device die 10 with an adhesive 17, and the element 20 is attached to the spacer 18 with an adhesive 19. However, unlike the embodiment of FIG. 2, the element 20 is wider than the spacer 18. As mentioned above, the element 20 may be wider than, the same width as, or narrower than the integrated device die 10. In the illustrated embodiment, the element 20 includes a second integrated device die. As described above, in other embodiments, the element 20 may include a second radiation shield. In addition, as shown in FIG. 3, the package 2 may include passive devices 22a, 22b (for example, capacitors, inductors, resistors, etc.), which are mounted to the package substrate 7 through corresponding adhesives adjacent to the shield 8. In FIG. 3, the passive devices 22a, 22b may be unshielded, for example, in an arrangement where the passive devices 22a, 22b are insensitive to impinging electromagnetic radiation.

FIG. 4 is a schematic side cross-sectional view of the integrated device package 2 according to various embodiments. Unless otherwise specified, the components of FIG. 4 may be the same as or substantially similar to the same-numbered components of FIGS. 1A to 3. For example, as in FIGS. 1A-3, the shield 8 may be attached to the package substrate 7 by an adhesive 9. The integrated device die 10 may be attached to the shield 8 by an adhesive 11. However, unlike FIGS. 1A to 3, in FIG.
4, the element 20 can be mounted on the integrated device die 10 through an intermediate mounting structure 23. As mentioned above, the element 20 may include a second integrated device die, as shown. In other embodiments, the element 20 may include a second radiation shield. The mounting structure 23 may include a film-over-wire (FOW) structure, in which a film (e.g., a die attach film or material) can be deposited, printed (e.g., screen printed), laminated, or otherwise applied to the upper surface of the die 10 (which may include an active surface in some embodiments) and over a portion of the wire bonds 12 and/or over the bond pads of the die 10 connected to the wire bonds 12. In other embodiments, the film can be spread as a paste or epoxy. The film may include a material having a flowable state and a cured state, wherein the film may be cured or hardened after being flowed over the die 10. In some embodiments, the film may include an inorganic dielectric or polymer. The mounting structure 23 can be used to provide a vertically elevated mechanical attachment support for the element 20. For embodiments in which the element 20 is connected to the substrate 7 through the bonding wire 21, the mounting structure 23 can vertically raise the bonding wire 21 above the bonding wire 12 to prevent short circuits. FIG. 5 is a photomicrograph showing an example integrated device package 2, similar to the package 2 shown in FIG. 1A. Unless otherwise specified, the components of FIG. 5 may be the same or substantially similar to the same numbered components of FIGS. 1A-4. In the example shown, the shield 8 includes a tungsten shield that is much thicker than the die 10. However, as described above, the thickness of the shield 8 can be selected based on the material characteristics of the particular shield to be used and/or the expected radiation dose.
As shown in the figure, the height E and angle M of the bonding wire 12 can be selected so that the wire is sufficiently spaced below the upper surface 24 of the molding compound 13, such that the bonding wire 12 is sufficiently embedded in the molding compound 13 and is not exposed through the molding compound 13. In some embodiments, for example, the height E may be at least 40 μm below the upper surface of the molding compound 13, or at least 50 μm below, for example, 40 μm to 80 μm below the upper surface of the molding compound 13, or 50 μm to 70 μm below. In the illustrated embodiment, the lateral footprint of the shield 8 is smaller than the lateral footprint of the die 10. In order to shield the electronic circuitry in the die 10, the shield 8 may have a larger lateral footprint than the active area of the die 10 where the active circuitry is located. In some embodiments, some non-sensitive circuits (for example, passive electronic components or active circuits that are not altered, damaged, or otherwise negatively affected by incident radiation) can be placed outside the lateral footprint of the shield 8, but sensitive circuits (for example, circuits that are physically or electrically sensitive to incident electromagnetic radiation or at risk of being damaged by incident electromagnetic radiation) may be provided in a sensitive active area within the lateral footprint of the shield 8. In other embodiments, as described above, the lateral footprint of the shield 8 may be larger than the lateral footprint of the die 10. FIG. 6 is a schematic side cross-sectional view of an integrated device package 2 according to another embodiment. Unless otherwise stated, the components of FIG. 6 may be the same or substantially similar to the same numbered components of FIGS. 1A-5. Unlike the embodiment shown in FIGS. 1A-5, in FIG.
6, the integrated device die 10 may be physically and electrically connected to the front surface of the package substrate 7 through an adhesive. For example, in the embodiment of FIG. 6, the die 10 may be connected to the package substrate 7 with a conductive adhesive in a flip-chip arrangement through a plurality of solder balls 33. Therefore, in FIG. 6, the contact pads of the die 10 may be disposed facing the package substrate 7. In some embodiments, the active circuit of the die 10 may be disposed facing the package substrate 7. In other embodiments, the active circuit may be on the side of the die 10 facing away from the package substrate 7, and vias may be provided to connect to the contact pads of the die 10. In FIG. 6, the shield 8 may be mounted to the die 10 by an adhesive 32, and the adhesive 32 may include a conductive or non-conductive adhesive. In the illustrated embodiment, there may be no molding compound above the die 10 and the shield 8. However, in other embodiments, a molding compound similar to the molding compound 13 of FIGS. 1A-5 may be provided over the die 10 and the shield 8. In other embodiments, the package 2 may include a package cover (which may include metal such as stainless steel) mounted to the substrate 7 to define a cavity package, as explained below in conjunction with FIGS. 7-10. In some embodiments, the package 2 may be arranged above the sensor assembly in a manner similar to that shown in FIG. 1A. In such an embodiment, where the incident radiation strikes upward as shown in FIG. 1A, the die 10 of FIG. 6 may be unshielded. However, in other embodiments, the package 2 may be arranged relative to a mounting structure (such as a system board or other structure) so that the incident radiation strikes downward, as shown in FIG. 6. In these embodiments, the shield 8 may be interposed between the flip-chip mounted die 10 and the radiation source to protect the flip-chip mounted die 10 from damaging radiation. FIG.
7 is a schematic side cross-sectional view of the integrated device package 2 including a package cover 50 that is mounted to the package substrate 7 to define a cavity package. Unless otherwise stated, the components of FIG. 7 may be the same or substantially similar to the same numbered components of FIGS. 1A-6. Unlike the embodiments shown in FIGS. 1A-5, in FIG. 7, the package cover 50 can be mounted (for example, adhered) to the upper surface of the package substrate 7 to define a cavity 52 in which the radiation shield 8 and the integrated device die 10 are disposed. In the embodiment of FIG. 7, the radiation shield 8 is mounted to the upper surface of the package substrate 7 using an adhesive 9. The integrated device die 10 may be mounted to the upper surface of the shield 8 using an adhesive 11. The die 10 may be wire bonded to the substrate 7 using bonding wires 12. In some embodiments, the package cover 50 may include a metal material or a metal-coated plastic material to further shield the integrated device die 10 from incident radiation. For example, the thickness of the package cover 50 may be selected to effectively shield the die 10 from various types of incident radiation. In some embodiments, the cover 50 may be electrically grounded. FIG. 8 is a schematic side cross-sectional view of an integrated device package 2 including a cavity package according to another embodiment. Unless otherwise stated, the components of FIG. 8 may be the same or substantially similar to the same numbered components of FIG. 7. Unlike FIG. 7, in the embodiment of FIG. 8, the element 20 is mounted on the integrated device die 10 by an adhesive 19 so that the die 10 is disposed between the element 20 and the radiation shield 8. As explained above in connection with FIG. 2, in some embodiments, the element 20 may include additional integrated device dies.
However, in the illustrated embodiment, the element 20 includes an additional radiation shield that may be similar to the radiation shield 8. The element 20 may therefore be positioned to shield the upper side of the integrated device die 10. Thus, in FIG. 8, both the lower side and the upper side of the die 10 can be shielded from incident radiation. In the illustrated embodiment, the element 20 may have a smaller lateral footprint than the corresponding lateral footprint of the die 10 so that the element 20 may be disposed between the bonding wires 12. However, the element 20 may have a lateral footprint large enough to cover and shield the sensitive circuits of the underlying die 10. Although the package 2 of FIG. 8 is shown as a cavity package, in other embodiments, the shield and die may be overmolded with a molding compound. Therefore, the die 10 can be protected from radiation damage on both of its major surfaces. FIG. 9 is a schematic side cross-sectional view of the integrated device package 2 including a cavity package, with the radiation shield 8 located on top of the integrated device die 10. Unless otherwise specified, the components of FIG. 9 may be the same as or substantially similar to the same numbered components in FIGS. 1A-8. However, in FIG. 9, the die 10 is mounted to the package substrate 7 through the adhesive 11, and the radiation shield 8 is mounted on the upper surface of the die 10 through the adhesive 9. In some embodiments, the die 10 may have active circuits on both the upper surface and the lower surface of the die 10. In such an arrangement, the die 10 may be flip-chip mounted to the package substrate 7 through solder balls, and in this case, the adhesive 11 may include an underfill material. Therefore, the radiation shield 8 can be positioned to shield the die 10 from incident radiation impinging on the upper surface of the die 10.
In the illustrated embodiment, the shield 8 may have a lateral footprint that is smaller than the lateral footprint of the die 10. As shown in the figure, the shield 8 may be located between the bonding wires 12. A substrate via 54 may also be provided to provide electrical communication between the lower surface and the upper surface of the die 10. Although the package 2 of FIG. 9 is shown as a cavity package, in other embodiments, the shield and die may be overmolded with a molding compound. FIG. 10 is a schematic side cross-sectional view of an integrated device package including a cavity package, in which a radiation shield is disposed between two integrated device dies. Unless otherwise stated, the components of FIG. 10 may be the same or substantially similar to the same numbered components of FIG. 9. Unlike the embodiment of FIG. 9, in FIG. 10, a second integrated device die 56 (which may be functionally similar to the die 10) may be mounted on the radiation shield 8 by an adhesive 19. The shield 8 may serve as a spacer to position the second die 56 above the die 10 so that the bonding wire 21 does not contact the bonding wire 12. Although not shown, in some embodiments, an additional radiation shield can also be provided above the upper surface of the second die 56 so that both surfaces of the second die 56 can be shielded. Although the package 2 of FIG. 10 is shown as a cavity package, in other embodiments, the shield and die may be overmolded with a molding compound. Although the present invention has been described based on certain embodiments, other embodiments that are obvious to a person of ordinary skill in the art, including embodiments that do not provide all the features and advantages set forth herein, are also within the scope of the present invention. In addition, the various embodiments described above can be combined to provide further embodiments.
Furthermore, certain features shown in the context of one embodiment can also be incorporated into other embodiments. Accordingly, the scope of the present invention is defined only by reference to the appended claims.
A method for providing proactive synchronization in a computer system (100) includes a processor (18A, 18B) requesting exclusive access to a given memory resource (314A-314D). The request may include one or more addresses associated with the given memory resource. The method also includes comparing each of the addresses in the request to each address in a plurality of sets of addresses. Each address in the sets of addresses may correspond to a respective memory resource to which a requestor has exclusive access. In addition, in response to any address of the one or more addresses matching any address in the plurality of sets of addresses, the method includes returning a count value (233) associated with the set including the matching address. The count value may be indicative of the number of requestors contending for the matching address.
WHAT IS CLAIMED IS:

1. A method comprising: a processor (18A, 18B) requesting exclusive access to a given memory resource, wherein the request includes one or more addresses associated with the given memory resource; comparing each of the one or more addresses to each address of a plurality of sets of addresses, wherein each address of the plurality of sets of addresses corresponds to a respective memory resource to which a requestor has been granted exclusive access; and in response to any address of the one or more addresses matching any address in the plurality of sets of addresses, returning a count value associated with the matching address, wherein the count value is indicative of a number of requestors contending for the matching address.

2. The method as recited in claim 1, further comprising returning a pass count value of zero in response to no address of the one or more addresses matching any address in the plurality of sets of addresses.

3. The method as recited in claim 1, further comprising using the count value to determine if a different processor has exclusive access to a different memory resource.

4. The method as recited in claim 1, wherein requesting exclusive access comprises executing one or more locked memory reference instructions having a LOCK prefix, wherein the LOCK prefix causes addresses associated with the locked memory reference instructions to be marked with one or more indication bits during instruction decode.

5. The method as recited in claim 4, wherein requesting exclusive access further comprises executing an ACQUIRE instruction that causes each of the one or more addresses of the given memory resource to be compared to each address of the plurality of sets of addresses.

6.
The method as recited in claim 4, further comprising storing the addresses associated with the locked memory reference instructions in a processor buffer, and in response to execution of the ACQUIRE instruction sending all the addresses in the processor buffer to be compared.

7. A computer system (100) comprising: one or more processors (18A, 18B) coupled together and to one or more memories (314A-314D), wherein each of the processors is configured to execute instructions to request exclusive access to a given memory resource, wherein the request includes one or more addresses associated with the given memory resource; and an arbitration unit (230) coupled to compare each of the one or more addresses to each address of a plurality of sets of addresses, wherein each address of the plurality of sets of addresses corresponds to a respective memory resource to which a requestor has been granted exclusive access; wherein the arbitration unit is configured to return a count value (233) associated with the set including the matching address in response to any address of the one or more addresses matching any address in the plurality of sets of addresses, wherein the count value is indicative of a number of requestors contending for the matching address.

8. The computer system as recited in claim 7, wherein the arbitration unit is further configured to return a pass count value of zero in response to no address of the one or more addresses matching any address in the plurality of sets of addresses.

9. The computer system as recited in claim 7, wherein each of the one or more processors is further configured to use the count value to determine if a different processor has exclusive access to a different memory resource.

10.
The computer system as recited in claim 7, wherein each of the one or more processors is further configured to: execute one or more memory reference instructions having a LOCK prefix, wherein the LOCK prefix causes addresses associated with the locked memory reference instructions to be marked with one or more indication bits during instruction decode; and execute an ACQUIRE instruction that causes each of the one or more addresses of the given memory resource to be compared to each address of the plurality of sets of addresses.
TITLE: METHOD FOR PROACTIVE SYNCHRONIZATION WITHIN A COMPUTER SYSTEM

BACKGROUND OF THE INVENTION

Technical Field

[0001] This invention relates to microprocessors and, more particularly, to process synchronization between processors in a multiprocessor system.

Background Art

[0002] Modern microprocessor performance has increased steadily and somewhat dramatically over the past 10 years or so. To a large degree, the performance gains may be attributed to increased operating frequency and, moreover, to a technique known as deep pipelining. Generally speaking, deep pipelining refers to using instruction pipelines with many stages, with each stage doing less, thereby enabling the overall pipeline to execute at a faster rate. This technique has served the industry well. However, there are drawbacks to increased frequency and deep pipelining. For example, clock skew and power consumption can be significant during high-frequency operation. As such, the physical constraints imposed by system-level thermal budgets, and the increased difficulty in managing clock skew, may indicate that the practical limits of the technique are just around the corner. Thus, industry has sought to increase performance using other techniques. One such technique is the use of multiple-core processors and, more generally, multiprocessing.

[0003] As computing systems employ multiprocessing schemes with more and more processors (e.g., processing cores), the number of requestors that may interfere or contend for the same memory datum may increase to such an extent that conventional methods of process synchronization may be inadequate. For example, when a low number of processors are contending for a resource, simple locking structures may provide adequate performance for critical sections of code. For example, locked arithmetic operations on memory locations may be sufficient. As the scale of multiprocessing grows, these primitives become less and less efficient.
To that end, more advanced processors include additions to the instruction set that include hardware synchronization primitives (e.g., CMPXCHG, CMPXCHG8B, and CMPXCHG16B) that are based on atomically updating a single memory location. However, we are now entering the realm in which even these hardware primitives may not provide the kind of performance that may be demanded in high-performance, high processor count multiprocessors.

[0004] Many conventional processors use synchronization techniques based on an optimistic model. That is, when operating in a multiprocessor environment, these conventional processors are designed to operate under the assumption that they can achieve synchronization by repeatedly rerunning the synchronization code until no interference is detected, and then declare that synchronization has been achieved. This type of synchronization may incur an undesirable waste of time, particularly when many processors are attempting the same synchronizing event, since no more than one processor can make forward progress at any instant in time. As such, different synchronization techniques may be desirable.

DISCLOSURE OF INVENTION

[0005] Various embodiments of a method for providing proactive synchronization in a computer system are disclosed. In one embodiment, the method includes a processor requesting exclusive access to a given memory resource. The request may include one or more addresses associated with the given memory resource. The method also includes comparing each of the addresses in the request to each address in a plurality of sets of addresses. Each address in the sets of addresses may correspond to a respective memory resource to which a requestor is to be given exclusive access.
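The optimistic retry model described in the background above can be illustrated with a small software sketch. The following Python model is illustrative only and not part of the specification: the `Memory` class and its internal lock merely stand in for a hardware compare-and-swap primitive such as CMPXCHG, and all names are invented.

```python
import threading

class Memory:
    """A toy shared-memory word supporting an atomic compare-and-swap,
    standing in for a hardware primitive such as CMPXCHG."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # models the atomicity of the hardware primitive

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: if the word still holds `expected`, store `new`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def optimistic_increment(word, retries_out):
    """Optimistic synchronization: reread and rerun until no interference."""
    while True:
        old = word.load()
        if word.compare_and_swap(old, old + 1):
            return
        retries_out[0] += 1  # another requestor interfered; rerun the sequence

word = Memory()
retries = [0]
threads = [threading.Thread(target=optimistic_increment, args=(word, retries))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(word.load())  # 8: every increment eventually succeeds despite contention
```

Note how forward progress is made by only one requestor at a time; every other contender burns a retry, which is the waste of time the proactive approach is meant to avoid.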
In addition, in response to any address of the one or more addresses matching any address in the plurality of sets of addresses already given to another processor or processors, the method includes returning a count value associated with the set including the matching address. The count value may be indicative of the number of requestors contending for the matching address(es).

[0006] In one specific implementation, the method includes returning a pass count value of zero in response to no address of the one or more addresses matching any address in the plurality of sets of addresses.

[0007] In another embodiment, a computer system includes one or more processors that may be coupled together and to one or more memories. Each of the processors may execute instructions to request exclusive access to a given memory resource. The request may include one or more addresses associated with the given memory resource. The computer system also includes a synchronization arbiter unit that may compare each of the addresses in the request to each address in a plurality of sets of addresses. Each address in the plurality of sets of addresses corresponds to a respective memory resource to which a requestor has exclusive access. The synchronization arbiter unit may return a count value associated with the set including the matching address in response to any address of the one or more addresses matching any address in the plurality of sets of addresses. The count value may be indicative of a number of requestors contending for the matching address.

BRIEF DESCRIPTION OF DRAWINGS

[0008] FIG. 1 is a block diagram of one embodiment of a computer system.

[0009] FIG. 2 is a block diagram depicting further details of an embodiment of a processing node of FIG. 1.

[0010] FIG. 3 is a flow diagram that describes operation of one embodiment of the computer system shown in FIG. 1 and FIG. 2.

[0011] FIG. 4 is a flow diagram that describes operation of one embodiment of the computer system shown in FIG.
1 and FIG. 2 in response to receiving a coherency invalidation probe.

[0012] While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. It is noted that the word "may" is used throughout this application in a permissive sense (i.e., having the potential to, being able to), not a mandatory sense (i.e., must).

MODE(S) FOR CARRYING OUT THE INVENTION

[0013] To enable the construction of high-performance synchronization methods in software, a set of instructions, which may be referred to as an advanced synchronization facility, may be used. The facility may support the construction of non-blocking synchronization, wait-free synchronization, and transactional memory, along with the construction of various forms of compare-and-swap primitives typically used in the construction of these methods. The facility allows construction (in software) of a large variety of synchronization primitives.

[0014] Moreover, the advanced synchronization facility may enable software to program a large variety of synchronization kinds.
Each synchronization kind may directly specify: the cache lines needed for successful completion, a sequence point where failures can redirect control flow, a data modification section where the result of the successful critical section is performed, and a sequence point where success is made visible to the rest of the system, making the whole sequence of instructions appear to be atomic.

[0015] Accordingly, the functionality of the advanced synchronization facility may enable the acquisition and release of multiple cache lines with write permission associated with a critical section substantially simultaneously as seen by other processors/cores. This process may be referred to as linearizing. After acquisition, several modifications can be performed before any other interested party may observe any of the modifications to any of the specified multiple cache lines. Between the acquisition and the release, no other processors are allowed to manipulate these same lines (e.g., have write permission). A similar result could have been achieved by not sending Source Done messages for the associated lines and thereby preventing concurrent accesses. However, such solutions lead to deadlock and/or livelock, or timeouts. Thus, a computer system including processors and processor cores that may implement the advanced synchronization facility is described below.

[0016] Turning now to FIG. 1, an embodiment of a computer system 100 is shown. Computer system 100 includes several processing nodes 312A, 312B, 312C, and 312D. Each of processing nodes 312A-312D is coupled to a respective memory 314A-314D via a memory controller 316A-316D included within each respective processing node 312A-312D. Additionally, processing nodes 312A-312D include interface logic (IF) used to communicate between the processing nodes 312A-312D.
For example, processing node 312A includes interface logic 318A for communicating with processing node 312B, interface logic 318B for communicating with processing node 312C, and a third interface logic 318C for communicating with yet another processing node (not shown). Similarly, processing node 312B includes interface logic 318D, 318E, and 318F; processing node 312C includes interface logic 318G, 318H, and 318I; and processing node 312D includes interface logic 318J, 318K, and 318L. Processing node 312D is coupled to communicate with a plurality of input/output devices (e.g., devices 320A-320B in a daisy-chain configuration) via interface logic 318L. Other processing nodes may communicate with other I/O devices in a similar fashion. Processors may use this interface to access the memories associated with other processors in the system. It is noted that a component that includes a reference numeral followed by a letter may be generally referred to solely by the numeral where appropriate. For example, when referring generally to the processing nodes, processing node(s) 312 may be used.

[0017] Processing nodes 312 implement a packet-based link for inter-processing-node communication. In the illustrated embodiment, the link is implemented as sets of unidirectional lines (e.g., lines 324A are used to transmit packets from processing node 312A to processing node 312B and lines 324B are used to transmit packets from processing node 312B to processing node 312A). Other sets of lines 324C-324H are used to transmit packets between other processing nodes as illustrated in FIG. 1. Generally, each set of lines 324 may include one or more data lines, one or more clock lines corresponding to the data lines, and one or more control lines indicating the type of packet being conveyed.
The link may be operated in a cache-coherent fashion for communication between processing nodes or in a non-coherent fashion for communication between a processing node and an I/O device (or a bus bridge to an I/O bus of conventional construction, such as the PCI bus or ISA bus). Furthermore, the link may be operated in a non-coherent fashion using a daisy-chain structure between I/O devices as shown (e.g., 320A and 320B). It is noted that in an exemplary embodiment, the link may be implemented as a coherent HyperTransport(TM) link or a non-coherent HyperTransport(TM) link, although in other embodiments, other links are possible.

[0018] I/O devices 320A-320B may be any suitable I/O devices. For example, I/O devices 320A-320B may include devices for communicating with another computer system to which the devices may be coupled (e.g., network interface cards or modems). Furthermore, I/O devices 320A-320B may include video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards, sound cards, and a variety of data acquisition cards such as GPIB or field bus interface cards. It is noted that the term "I/O device" and the term "peripheral device" are intended to be synonymous herein.

[0019] Memories 314A-314D may comprise any suitable memory devices. For example, a memory 314A-314D may comprise one or more RAMBUS DRAMs (RDRAMs), synchronous DRAMs (SDRAMs), DDR SDRAM, static RAM, etc. The memory address space of computer system 100 is divided among memories 314A-314D. Each processing node 312A-312D may include a memory map used to determine which addresses are mapped to which memories 314A-314D, and hence to which processing node 312A-312D a memory request for a particular address should be routed. Memory controllers 316A-316D may comprise control circuitry for interfacing to memories 314A-314D.
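The per-node memory map just described can be sketched as a simple range lookup. The following Python model is illustrative only; the address ranges are invented for the example, and the real map is implementation-specific hardware state, not a table in software.

```python
# Illustrative model (not from the specification) of the per-node memory map
# that divides the address space among memories 314A-314D and decides which
# processing node 312A-312D should service a request. Ranges are invented.
MEMORY_MAP = [
    (0x0000_0000, 0x3FFF_FFFF, "312A"),  # addresses served by memory 314A
    (0x4000_0000, 0x7FFF_FFFF, "312B"),  # ... by memory 314B
    (0x8000_0000, 0xBFFF_FFFF, "312C"),  # ... by memory 314C
    (0xC000_0000, 0xFFFF_FFFF, "312D"),  # ... by memory 314D
]

def route(address):
    """Return the processing node whose memory controller owns `address`."""
    for low, high, node in MEMORY_MAP:
        if low <= address <= high:
            return node
    raise ValueError(f"address {address:#x} is unmapped")

print(route(0x4000_1000))  # 312B: the request is routed to that node
```

A packet carrying this request may still traverse intermediate nodes on its way to the owning node, as the routing discussion below notes.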
Additionally, memory controllers 316A-316D may include request queues for queuing memory requests. Memories 314A-314D may store code executable by the processors to implement the functionality described in the preceding sections.

[0020] It is noted that a packet to be transmitted from one processing node to another may pass through one or more intermediate nodes. For example, a packet transmitted by processing node 312A to processing node 312D may pass through either processing node 312B or processing node 312C, as shown in FIG. 1. Any suitable routing algorithm may be used. Other embodiments of computer system 100 may include more or fewer processing nodes than the embodiment shown in FIG. 1. Generally, the packets may be transmitted as one or more bit times on the lines 324 between nodes. A bit time may be the rising or falling edge of the clock signal on the corresponding clock lines. The packets may include command packets for initiating transactions, probe packets for maintaining cache coherency, and response packets for responding to probes and commands.

[0021] In one embodiment, processing nodes 312 may additionally include one or more processor cores (shown in FIG. 2). It is noted that the processor cores within each node may communicate via internal packet-based links operated in the cache-coherent fashion. It is further noted that processor cores and processing nodes 312 may be configured to share any (or all) of the memories 314.

[0022] In one embodiment, one or more of the processor cores may implement the x86 architecture, although other architectures are possible and contemplated. As such, instruction decoder logic within each of the various processor cores may be configured to mark instructions that use a LOCK prefix. In addition, as described further below, processor core logic may include hardware (shown in FIG. 2) that may enable identification of the markers associated with LOCKed instructions.
This hardware may enable the use of the LOCK instruction prefix to identify critical sections of code as part of the advanced synchronization facility.

[0023] To reduce the effects of interference caused by more than one processor attempting to access the same memory reference (e.g., critical sections of code) at the same time, the advanced synchronization facility and associated hardware may be implemented within computer system 100. As will be described in greater detail below, the advanced synchronization facility may employ new instructions and use hardware such as a synchronization arbiter (shown in FIG. 2), which may be interconnected within the cache-coherent fabric. As shown in FIG. 2, synchronization arbiter 230 is coupled to a Northbridge unit 290 of any processing node 312, thus enabling the synchronization arbiter to observe explicit addresses associated with the Advanced Synchronization Facility transactions of each node. The synchronization arbiter may be placed anywhere in the coherent domain of the interconnect network. It is noted that although one synchronization arbiter is shown, it is contemplated that when a system is configured to support multiple virtual machines, and when these virtual machines do not share any actual physical memory, multiple synchronization arbiters can be configured to distribute the synchronization load across several arbiters.

[0024] It is noted that the phrase "critical section" is used throughout this document. A "critical section" refers to a section of code used in the advanced synchronization facility that may include one or more memory reference instructions marked with a LOCK prefix, an ACQUIRE instruction, and a RELEASE instruction which ends the critical section.
In one embodiment, there are four stages of each critical section: 1) specifying the address(es) of the cache line(s) needed during the critical section (e.g., entering the critical section), 2) going through the mechanics to acquire these cache lines, 3) atomically modifying the critical section data, and 4) releasing the cache lines back to the system. In particular, the critical section code will appear to be executed atomically by interested observers. The first phase may be referred to as the specification phase, while the third phase is often referred to as the Atomic phase. [0025] In various implementations, software may be allowed to perform 'simple' arithmetic and logical manipulations on the data between reading and modifying the data of the critical section as long as the simple arithmetic operations do not cause exceptions when executed. If a data manipulation causes an exception inside a critical section, atomicity of that critical section may not be guaranteed. Critical section software should detect failures of atomicity, and deal with them appropriately, as described further below. [0026] Generally, the advanced synchronization facility may utilize a weakened memory model and operate only upon cacheable data. This weakened memory model may prevent the advanced synchronization facility from wasting cycles waiting for various processor and memory buffers to empty before performing a critical section. However, when software requires a standard PC strong memory model, software may insert LFENCE, SFENCE, or MFENCE instructions just prior to the RELEASE instruction to guarantee standard PC memory ordering. For the case of using cacheable synchronization to enable accesses to unCacheable data, an SFENCE instruction between the last LOCKed Store and the RELEASE instruction will guarantee that the unCacheable data is globally visible before the cacheable synchronization data is globally visible in any other processor.
This may enable maximum overlap of unCacheable and Cacheable accesses with minimal performance degradation. [0027] In various embodiments, interface logic 318A-318L may comprise a variety of buffers for receiving packets from the link and for buffering packets to be transmitted upon the link. Computer system 100 may employ any suitable flow control mechanism for transmitting packets. In addition to interface logic 318A-318L, each processing node may include a respective bus interface unit (BIU) 220 (shown in FIG. 2), which may provide functionality to enable proactive synchronization. For example, as described further below, BIU 220 may be configured to detect those special addresses that are associated with an Advanced Synchronization event and to transmit those addresses to synchronization arbiter 230 in response to execution of an ACQUIRE instruction. The BIU 220 may also be configured to determine whether the response received from synchronization arbiter 230 indicates that the addresses are interference free. If the response indicates the addresses may not be interference free, BIU 220 may notify the requesting processor core of the failure by sending a failure count value to a register within the processor core 18 and sending a completion message to synchronization arbiter 230. If the addresses are guaranteed to be interference free, BIU 220 may allow execution of the critical section and wait until the critical section completes before sending the completion message to synchronization arbiter 230. [0028] FIG. 2 is a block diagram that illustrates more detailed aspects of embodiments of processing node 312A and synchronization arbiter 230 of FIG. 1. Referring to FIG. 2, processing node 312A includes processor cores 18A and 18n, where n may represent any number of processor cores. Since the processor cores may be substantially the same in various embodiments, only detailed aspects of processor core 18A are described below.
As shown, processor cores 18A and 18n are coupled to bus interface unit 220, which is coupled to a Northbridge unit 290, which is in turn coupled to memory controller 316A, HyperTransport(TM) interface logic 318A-318C, and to synchronization arbiter 230 via a pair of unidirectional links 324I-324J.[0029] Processor core 18A includes hardware configured to execute instructions. More particularly, as is typical of many processors, processor core 18A includes one or more instruction execution pipelines including a number of pipeline stages, cache storage and control, and an address translation mechanism (only pertinent portions of which are shown for brevity). Accordingly, as shown, processor core 18A includes a level one (L1) instruction cache, prefetch logic, and branch prediction logic. Since these blocks may be closely coupled with the instruction cache, they are shown together as block 250. Processor core 18A also includes an L1 data cache 207. Processor core 18A also includes an instruction decoder 255 and an instruction dispatch and control unit 256, which may be coupled to receive instructions from instruction decoder 255 and to dispatch operations to a scheduler 259. Further, instruction dispatch and control unit 256 may be coupled to a microcode read-only memory (MROM) (not shown). Scheduler 259 is coupled to receive dispatched operations from instruction dispatch and control unit 256 and to issue operations to execution units 260. In various implementations, execution units 260 may include any number of integer execution units and floating-point units. Further, processor core 18A includes a TLB 206 and a load/store unit 270. It is noted that in alternative embodiments, an on-chip L2 cache may be present (although not shown).[0030] Instruction decoder 255 may be configured to decode instructions into operations which may be either directly decoded or indirectly decoded using operations stored within the MROM.
Instruction decoder 255 may decode certain instructions into operations executable within execution units 260. Simple instructions may correspond to a single operation, while in other embodiments, more complex instructions may correspond to multiple operations. In one embodiment, instruction decoder 255 may include multiple decoders (not shown) for simultaneous decoding of instructions. Each instruction may be aligned and decoded into a set of control values in multiple stages depending on whether the instructions are first routed to the MROM. These control values may be routed in an instruction stream to instruction dispatch and control unit 256 along with operand address information and displacement or immediate data which may be included with the instruction. As described further below, when a memory reference instruction includes a LOCK prefix, the instruction decoder may identify the address with a marker. [0031] Load/store unit 270 may be configured to provide an interface between execution units 260 and data cache 207. In one embodiment, load/store unit 270 may include load/store buffers with several storage locations for data and address information for pending loads or stores. As such, the illustrated embodiment includes LS1 205, linear LS2 209, physical LS2 210, and data storage 211. Further, processor core 18A includes marker logic 208, and a marker bit 213. [0032] In one embodiment, a critical section may be processed in one of two ways: deterministically, and optimistically. The choice of execution mode may be based upon the configuration of the advanced synchronization facility and upon the state of a critical section predictor, as described in greater detail below. In various embodiments, either the basic input output system (BIOS), the operating system (OS), or a virtual memory manager (VMM) may configure the operational mode of the advanced synchronization facility.
When operating in the deterministic execution mode, the addresses specified by the LOCKed memory reference instructions may be bundled up and sent en masse to the synchronization arbiter 230 to be examined for interference. The cache line data may be obtained and the critical section executed (as permitted). In contrast, when operating in the optimistic synchronization mode, no interference is assumed, and the critical section may be executed (bypassing the synchronization arbiter 230); if any other processor interferes with this critical section, the interference will be detected, and the processor then backs up to the ACQUIRE instruction and redirects control flow away from the atomic phase.[0033] To implement the deterministic mode, the advanced synchronization facility may use the synchronization arbiter 230. As described above, synchronization arbiter 230 examines all of the physical addresses associated with a synchronization request and either passes (i.e., blesses) the set of addresses or fails (i.e., rejects) the set of addresses, based upon whether any other processor core or requestor is operating on or has requested those addresses while they are being operated on. As such, synchronization arbiter 230 may allow software to be constructed that proactively avoids interference. When interference is detected by synchronization arbiter 230, synchronization arbiter 230 may respond to a request with a failure status including a unique number (e.g., count value 233) to a requesting processor core. In one embodiment, the count may be indicative of the number of requestors contending for the memory resource(s) being requested. Software may use this number to proactively avoid interference in subsequent trips through the critical section by using this number to choose a different resource upon which to attempt a critical section access.[0034] Accordingly, as shown in FIG. 2, synchronization arbiter 230 includes a storage 232 including a number of entries.
Each of the entries may store one or more physical addresses of requests currently being operated on. In one embodiment, each entry may store up to eight physical addresses that are transported as a single 64-byte request. In addition, each synchronization arbiter entry includes the count value 233, which corresponds to all the addresses in the entry. As described above, the count value may be indicative of the number of requestors (i.e., interferers) that are contending for any of the addresses in a critical section. When synchronization arbiter 230 receives a set of addresses, a compare unit 231 within synchronization arbiter 230 checks for a match between each address in the set and all the addresses in storage 232. If there is no match, synchronization arbiter 230 may be configured to issue a pass response by returning a passing count value and to store the addresses within storage 232. In one embodiment, the passing count value is zero, although any suitable count value may be used. However, if there is an address match, synchronization arbiter 230 may increment the count value 233 associated with the set of addresses that includes the matching address, and then return that count value as part of a failure response. It is noted that compare unit 231 may be a compare-only structure implemented in a variety of ways, as desired. In addition, in another embodiment, each address stored within storage 232 may be associated with a respective count. As such, the count value may be indicative of the number of requestors (i.e., interferers) that are contending for one of the respective addresses in a critical section. [0035] In the illustrated embodiment, bus interface unit (BIU) 220 includes a count compare circuit 221, a locked line buffer (LLB) 222, and a predictor 223. BIU 220 may also include various other circuits for transmitting and receiving transactions from the various components to which it is connected; however, these have been omitted for clarity.
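By way of illustration only, the compare-and-count behavior of storage 232 and compare unit 231 described above can be sketched in software. The following Python model is an expository assumption, not part of any embodiment; the class and method names are invented for clarity, and it models an entry's count, the passing count of zero, the failure path (set discarded, incremented count returned), and the negative count returned when storage is full.

```python
class SyncArbiter:
    """Illustrative software model of synchronization arbiter 230."""

    PASS = 0  # passing count value: no interference observed

    def __init__(self, capacity=16):
        self.capacity = capacity
        # Each entry models one storage-232 entry: a set of physical
        # addresses plus its associated count value 233.
        self.entries = []

    def request(self, addrs):
        """Check a set of critical-section addresses for interference.

        Returns 0 (pass), a positive interferer count (fail), or -1
        when storage is full (overload indication to software).
        """
        requested = set(addrs)
        # Compare unit 231: match each requested address against all
        # addresses already held in storage 232.
        for entry in self.entries:
            if entry["addrs"] & requested:
                entry["count"] += 1        # another contender observed
                return entry["count"]      # failure response with count;
                                           # the requested set is discarded
        if len(self.entries) >= self.capacity:
            return -1                      # storage full: caller should back off
        self.entries.append({"addrs": requested, "count": 0})
        return self.PASS                   # pass: addresses now held

    def release(self, addrs):
        """Completion message (RELEASE): flush the matching entry."""
        self.entries = [e for e in self.entries if e["addrs"] != set(addrs)]
```

In this sketch, a second requestor touching any held address receives a count of 1, a third receives 2, and so on, mirroring how software may use the count to choose a different resource on a later attempt.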
As such, BIU 220 may be configured to transmit a set of addresses associated with a critical section from LLB 222 to synchronization arbiter 230 in response to the execution of an ACQUIRE instruction. In addition, compare circuit 221 may be configured to compare the count value returned by synchronization arbiter 230 to check if the count is a passing count value (e.g., zero) or a failing count value. It is noted that LLB 222 may be implemented using any type of storage structure. For example, it may be part of an existing memory address buffer (MAB) or separate, as desired. [0036] As described above, if processor core 18 is operating in the deterministic synchronization mode, addresses associated with a critical section may be marked during instruction decode by using the LOCK prefix. More particularly, memory references that explicitly participate in advanced synchronization code sequences are annotated by using the LOCK prefix with an appropriate MOV instruction. LOCKed Load instructions may have the following form:
LOCK MOVx reg,[B+I*s+DISP]
More particularly, a regular memory read instruction is made special by attaching a LOCK prefix. This causes the BIU 220 to gather the associated marked physical address into the LLB 222 as the address passes through the L1 cache (and TLB 206). In addition, memory access strength is reduced to access the line (in the case of a cache miss) without write permission (ReadS, not ReadM or Read). The Load instruction may not be retired out of LS2 until the ACQUIRE instruction returns from the synchronization arbiter 230.[0037] While the request from BIU 220 (to synchronization arbiter 230) is awaiting a response, the LLB 222 watches for Probes with INValidate semantics, and if one (or more) occurs, the ACQUIRE instruction will be made to fail, even if synchronization arbiter 230 returns a success.
The LOCK prefix does not cause any particular locking of the cache or bus, but simply provides a convenient marker to be added to memory-based MOVe instructions. As such, LOCKed MOV to register instructions (which may be otherwise referred to as LOCKed Loads) may be processed normally down the data cache pipeline. [0038] Accordingly, during address translation each linear address may be stored within the linear address portion of LS2 209. The corresponding physical addresses may be stored in TLB 206 and within physical LS2 210, while the corresponding data may be stored within data cache 207 and data LS2 211. Marker logic 208 may detect the LOCK prefix marker generated during decode and generate an additional marker bit 213, thereby marking each such address as a participant in a critical section. Any LOCKed Load that takes a miss in the data cache may have its cache line data fetched through the memory hierarchy with Read-to-Share access semantics; however, write permission is checked.[0039] As described above, if processor core 18 is operating in a deterministic synchronization mode, addresses associated with a critical section may be marked during instruction decode by using the LOCK prefix. More particularly, memory prefetch references that explicitly participate in advanced synchronization code sequences are annotated by using the LOCK prefix with an appropriate PREFETCHW instruction. These types of LOCKed Load instructions may have the following form:
LOCK PREFETCHW [B+I*s+DISP]
Thus, a regular memory PREFETCHW instruction is made special by attaching a LOCK prefix. This causes the BIU 220 to gather the associated marked physical address into the LLB 222 as the address passes through the L1 cache (and TLB 206). In addition, memory access strength is reduced to avoid an actual access to the line. The PREFETCHW instruction may not be retired out of LS2 until the ACQUIRE instruction returns from synchronization arbiter 230.
These instructions may be used to touch cache lines that participate in the critical section and that hold data (e.g., a pointer) needed in order to touch other data also needed in the critical section. At the conclusion of the specification phase, an ACQUIRE instruction is used to notify BIU 220 that all memory reference addresses for the critical section are stored in LLB 222. [0040] The ACQUIRE instruction may have the form:
ACQUIRE reg, imm8
The ACQUIRE instruction checks that the number of LOCKed memory reference instructions is equal to the immediate value in the ACQUIRE instruction. If this check fails, the ACQUIRE instruction terminates with a failure code; otherwise, the ACQUIRE instruction causes BIU 220 to send all addresses stored within LLB 222 to the synchronization arbiter 230. This instruction 'looks' like a memory reference instruction on the data path so that the count value returned from the synchronization arbiter 230 can be used to confirm (or deny) that all the lines can be accessed without interference. No address is necessary for this 'load' instruction because there can be only one synchronization arbiter 230 per virtual machine or per system. The register specified in the ACQUIRE instruction is the destination register of processor core 18.[0041] In one embodiment, the semantics of a LOCKed Load operation may include monitoring the location for a Probe with INValidation (e.g., Probe Inv or Probe ReadM). If a Probe with INValidation is detected for a location, the LS1 or LS2 queue may return a failure status without waiting for the read to complete. A general-purpose fault (#GP) may be generated if the number of LOCKed Loads exceeds a micro-architectural limit. If an ACQUIRE instruction fails, the count of LOCKed Loads will be reset to zero. If the address is not to a Write-Back memory type, the instruction may generate a page fault (#PF) or #GP fault, or the ACQUIRE can be made to fail (when subsequently encountered).
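For exposition only, the ACQUIRE semantics described above can be sketched as a small function. The Python below is an illustrative assumption (the function and parameter names are invented); it models only the imm8 consistency check against the gathered LLB addresses and the interpretation of the arbiter's returned count value.

```python
def acquire(llb_addresses, imm8, arbiter_request):
    """Illustrative sketch of ACQUIRE reg, imm8 semantics.

    llb_addresses   -- addresses gathered in LLB 222 by LOCKed references
    imm8            -- immediate operand: expected number of LOCKed references
    arbiter_request -- callable standing in for the send to arbiter 230
    """
    # The ACQUIRE instruction checks that the number of LOCKed memory
    # reference instructions equals the immediate value.
    if len(llb_addresses) != imm8:
        return ("fail", None)          # count mismatch: terminate with failure code
    # Otherwise the BIU sends all LLB addresses to the synchronization
    # arbiter; the returned count value lands in the destination register.
    count = arbiter_request(llb_addresses)
    if count == 0:
        return ("pass", count)         # interference free: enter the atomic phase
    return ("fail", count)             # failing count: conditional jump exits
```

A caller would follow this with the equivalent of the conditional jump: on "fail", redirect control away from the atomic phase.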
[0042] It is expected that some critical sections may include a number of arithmetic and control flow decisions to compute what data modifications may be appropriate (if any). However, software should arrange that these types of instructions never cause an actual exception. In one embodiment, arithmetic and memory reference instructions may be processed in either the SSE registers (XMM), or in the general-purpose registers (e.g., EAX, etc.), or in the MMX or x87 registers. [0043] As described above, synchronization arbiter 230 may either pass the request en masse or fail the request en masse. If synchronization arbiter 230 fails the request, the response back to BIU 220 may be referred to as a "synchronization arbiter Fail-to-ACQUIRE" with the zero bit set (e.g., RFLAGS.ZF). As described above, the response returned by synchronization arbiter 230 may include the count value 233, which may be indicative of the number of interferers. Software may use this count to reduce future interference as described above. The count value 233 from the synchronization arbiter 230 may be delivered to a general-purpose register (not shown) within processor core 18 and may also be used to set condition codes. If the synchronization arbiter 230 passes the request, the response back to BIU 220 may include a pass count value (e.g., zero). [0044] In one embodiment, if the synchronization arbiter address storage 232 is full, the request may be returned with a negative count value such as minus one (-1), for example. This may provide software running on the processor core a means to see an overload in the system and to enable that software to stop making requests to synchronization arbiter 230 for a while.
For example, the software may schedule something else or it may simply waste some time before retrying the synchronization attempt.[0045] If the count is zero (meaning there are no interferers observed by synchronization arbiter 230), processor core 18 may execute the instructions in the critical section and manipulate the data in the cache lines as desired. When the data manipulation is complete, a RELEASE instruction is executed signifying the end of the critical section. In one embodiment, the RELEASE instruction enables all of the modified data to become visible substantially simultaneously by sending the RELEASE message to synchronization arbiter 230, thereby releasing the associated cache lines back to the system. [0046] As described above, a critical code section may include one or more memory reference instructions with the LOCK prefix, followed by the ACQUIRE instruction. In addition, a conditional jump instruction follows the ACQUIRE instruction to allow the code to exit the critical section should synchronization arbiter 230 provide a Fail-to-Acquire code or if a Probe with INValidate is detected prior to acquiring the cache lines. In some implementations, the conditional jump may be followed by a release instruction. Two assembly language critical code sections are shown below to exemplify two types of critical sections. It is noted that the following code segments are merely examples used for discussion purposes. 
Other embodiments are possible and contemplated.[0047] The following first example code segment illustrates the removal of an element from a doubly linked list using the RELEASE instruction.

// Concurrency Queue Version
// p is in RAX
LOCK MOVD A,[RAX+next]    // a = p->next
LOCK MOVD B,[RAX+prev]    // b = p->prev
LOCK MOVD C,[A+next]      // c = a->next
LOCK MOVD D,[B+next]      // d = b->next
ACQUIRE reg
JNZ fails
MOVD [A+next],D           // a->next = d
MOVD [B+prev],C           // b->prev = c
MOVD [RAX+next],0         // p->next = NULL
MOVD [RAX+prev],0         // p->prev = NULL
RELEASE

[0048] The exemplary code segment below illustrates the insertion of an element into a doubly linked list, also using the RELEASE instruction.

// Concurrency Queue Version
// q is in RAX
// p is in RSI
LOCK MOVD S,[RAX+next]    // s = q->next
LOCK PREFETCHW [RSI+prev] // touch p->prev
LOCK PREFETCHW [RSI+next] // touch p->next
LOCK PREFETCHW [S+next]   // touch s->next
ACQUIRE reg
JNZ fails
MOVD [RAX+next],RSI       // q->next = p
MOVD [S+prev],RSI         // s->prev = p
MOVD [RSI+next],S         // p->next = s
MOVD [RSI+prev],RAX       // p->prev = q
RELEASE

[0049] In one embodiment, the advanced synchronization facility supports two kinds of failures, a "Fail-to-ACQUIRE" and a "Fail-to-REQUESTOR". The Fail-to-ACQUIRE failure causes the ACQUIRE instruction to complete with the zero bit set (e.g., RFLAGS.ZF) so that the subsequent conditional jump instruction can redirect control flow away from damage-inducing instructions in the atomic phase. The synchronization arbiter Fail-to-ACQUIRE with the zero bit set (e.g., RFLAGS.ZF) is one type of Fail-to-ACQUIRE failure. A processor Fail-to-ACQUIRE is another type. In one embodiment, during execution of critical sections, processor cores may communicate by observing memory transactions. These observations may be made visible at the ACQUIRE instruction of an executing processor core.
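For illustration only, the removal example above can be given a simplified software analogue. The Python sketch below is not a line-for-line translation of the assembly; the Node class, the try_acquire callable, and the retry loop (standing in for the "JNZ fails" path) are invented for exposition, and the atomic phase performs the conventional unlink of p from its neighbors.

```python
class Node:
    """Illustrative list node; next/prev mirror the [RAX+next]/[RAX+prev]
    offsets in the assembly example."""
    def __init__(self, name):
        self.name, self.next, self.prev = name, None, None

def remove(p, try_acquire):
    """Sketch of the removal critical section: gather the neighbors
    (specification phase), attempt to acquire them, retry on failure,
    then unlink p in the atomic phase before release."""
    while True:
        a, b = p.next, p.prev            # LOCKed loads: a = p->next, b = p->prev
        if not try_acquire([p, a, b]):   # ACQUIRE failed (JNZ fails): retry
            continue
        a.prev = b                       # atomic phase: unlink p
        b.next = a
        p.next = p.prev = None           # p->next = NULL, p->prev = NULL
        return                           # RELEASE
```

A failed acquisition simply re-enters the specification phase, which is one way software might structure the exit path that the conditional jump provides.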
More particularly, during the time between the start of collecting the addresses necessary for a critical section and the response of synchronization arbiter 230, processor core 18 monitors all of those addresses for coherent invalidation probes (e.g., Probe with INValidate). If any of the lines are invalidated, the response from synchronization arbiter 230 may be ignored and the ACQUIRE instruction may be made to fail with the zero bit set (e.g., RFLAGS.ZF).[0050] The Fail-to-REQUESTOR failure may be sent as a PROBE response if there is a cache hit on a line that has been checked for interference and passed by synchronization arbiter 230. A Fail-to-REQUESTOR response causes the requesting processor to Fail-to-ACQUIRE if it is currently processing an advanced synchronization facility critical section, or it will cause the requesting processor's BIU to re-request that memory request if it is not processing a critical section. As such, BIU 220 may be configured to cause a Fail-to-ACQUIRE in response to receiving a Probe with INValidate prior to obtaining a pass notification from synchronization arbiter 230.[0051] Once the addresses of the critical section have been acquired, a processor core 18 that has had its addresses passed by synchronization arbiter 230 may obtain each cache line for exclusive access (e.g., write permission) as memory reference instructions are processed in the atomic phase. After a passed cache line arrives, processor core 18 may hold onto that cache line and prevent other processor cores from stealing the line by responding to coherent invalidation probes with Fail-to-REQUESTOR responses. It is noted that Fail-to-REQUESTOR may also be referred to as a negative-acknowledgement (NAK). [0052] As described above, when a processor receives a Fail-to-REQUESTOR and it is currently participating in an advanced synchronization instruction sequence, that instruction sequence will be caused to fail at the ACQUIRE instruction.
In this case, the subsequent conditional jump is taken and the damage-inducing part of the memory reference instructions in the critical section may be avoided. However, when a processor receives a Fail-to-REQUESTOR and is not participating in an advanced synchronization instruction sequence, the requesting processor's BIU may simply re-request the original memory transaction. Thus, the elapsed time between the sending of the Fail-to-REQUESTOR and the subsequent arrival of the next coherent invalidation probe guarantees forward progress for the processor executing the critical section with the synchronization arbiter's blessing. The guarantee of forward progress enables the advanced synchronization facility to be more efficient under contention than currently existing synchronization mechanisms. Accordingly, sooner or later, both the critical section and the interfering memory reference may be performed (i.e., no live-lock or dead-lock). [0053] As mentioned above, the performance of a processor participating in the Advanced Synchronization Facility may be optimized by using a critical section predictor 223. Initially, predictor 223 may be set up to predict that no interference is expected during execution of a critical section. In this mode, processor core 18 may not actually use the synchronization arbiter 230. Instead, processor core 18 may record the LOCKed memory references and may check these against Coherent Invalidation PROBEs to detect interference. If the end of the critical section is reached before any interference is detected, no interested third party has seen the activity of the critical section and it has been performed as if it were executed atomically.
This property enables the Advanced Synchronization Facility to be processor-cycle competitive with currently existing synchronization mechanisms when no contention is observed.[0054] More particularly, when interference is detected, processor core 18 may create a failure status for the ACQUIRE instruction; the subsequent conditional branch redirects the flow of control out of the critical section, and the predictor is reset to predict deterministic mode. When the next critical section is detected, the decoder will then predict that interference might happen, and will process the critical section using the synchronization arbiter 230 (if enabled).[0055] In one embodiment, the Advanced Synchronization Facility may operate on misaligned data items as long as these items do not span cache lines that are not participating in the actual critical section. Software is free to have synchronization items span cache line boundaries as long as all cache lines so touched are recognized as part of the critical section entry. When a data item spans a cache line into another cache line that was not part of the synchronization communication, the processor neither detects the failure of atomicity nor signals the lack of atomicity.[0056] Further, access to critical section data may be dependent upon the presence of that data in main memory. All of the lines necessary for the critical section are touched before entry into the critical section, and any access rights issues or page-faulting issues may be detected when the LOCKed Load or LOCKed PREFETCHW instructions execute prior to entering the critical section. When any of the lead-in addresses takes a fault, the subsequent ACQUIRE instruction is made to fail. After entry into the critical section, if any instruction causes an exception, the processor will cause a failure at the ACQUIRE instruction, and the subsequent conditional jump redirects control away from the critical section.
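By way of illustration, the predictor behavior described in paragraph [0054] can be sketched as a simple state holder. The Python below is an expository assumption (the class and state names are invented); it captures only the stated transitions: start by predicting no interference (optimistic, bypassing the arbiter), and fall back to deterministic mode once interference is detected.

```python
class CriticalSectionPredictor:
    """Illustrative model of predictor 223."""

    OPTIMISTIC = "optimistic"          # bypass arbiter; watch probes locally
    DETERMINISTIC = "deterministic"    # route the next section via arbiter 230

    def __init__(self):
        # Initially set up to predict that no interference is expected.
        self.mode = self.OPTIMISTIC

    def next_mode(self):
        """Mode to use for the next critical section."""
        return self.mode

    def record(self, interference_detected):
        """Record the outcome of a critical section attempt.

        Detected interference fails the ACQUIRE and resets the
        predictor to predict deterministic execution.
        """
        if interference_detected:
            self.mode = self.DETERMINISTIC
```

The description does not specify when (or whether) the predictor returns to optimistic mode, so this sketch deliberately leaves that policy out.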
[0057] In one embodiment, if the decoder of processor core 18 must take an interrupt, it may arrange that the ACQUIRE instruction will fail with the zero bit set (e.g., RFLAGS.ZF), and take the interrupt at the ACQUIRE instruction.[0058] It is noted that in embodiments in which synchronization arbiter 230 is connected within a North Bridge implementation within the HyperTransport(TM) fabric, synchronization arbiter 230 may be assigned a predetermined and/or reserved node ID that no other component may have. This assignment may be made at boot time by the BIOS, for example. In addition, in the above embodiments, the count value may be returned as a 64-bit value, although other values are contemplated.[0059] FIG. 3 is a flow diagram describing the operation of the embodiments of the computer system shown in FIG. 1 and FIG. 2. Referring collectively to FIG. 1 through FIG. 3, and beginning in block 405, addresses of cache lines that are currently being operated on or accessed as part of a critical section are maintained in a list (e.g., within LLB 222). For example, synchronization arbiter 230 may store the addresses corresponding to a critical section, as a set, within an entry of address storage 232. In one embodiment, each entry of address storage 232 may also store a count value that is associated with the whole set of addresses stored therein (block 410). As described above, the count value may be indicative of the number of contenders (i.e., interferers) for any of the addresses in the set. In another embodiment, synchronization arbiter 230 may store a number of count values within each entry, such that each address in the entry has an associated count value. [0060] When a processor or processor core implementing the advanced synchronization facility requests atomic access to one or more cache lines, the request comes in the form of a critical code section.
For example, as described above, to ensure completion of the instructions in an atomic manner (as viewed by all outside observers) a critical section may include the use of LOCKed MOV instructions, followed by an ACQUIRE instruction and a RELEASE instruction (block 415). Accordingly, the set of addresses that are requested is checked for interference. In one embodiment, the set of addresses is compared to all of the addresses within address storage 232 (block 420). In the embodiments described above, the LOCKed MOV instructions cause the addresses to be marked. The marker causes BIU 220 to store each marked address in LLB 222. The ACQUIRE instruction causes BIU 220 to send the entire set of addresses in LLB 222 to synchronization arbiter 230 in the form of an unCacheable write that carries 64 bytes of physical address data. Synchronization arbiter 230 compares the set of addresses to all the addresses in the storage 232. [0061] If there is a match on any address (block 425), the count value associated with the matching address is incremented (block 455) and the new count value is returned to BIU 220 as part of a failure response to the unCacheable write (block 460) that carries 64 bits of response data. In addition, synchronization arbiter 230 discards the set of addresses upon failure. BIU 220 sends the failure count value to the register of the requesting processor/core, which may also set condition code flags. As a result, the requesting processor/core may use the count value to select another set of memory resources in subsequent operations (block 465) and avoid interference on its subsequent synchronization attempt. Operation proceeds as described above in block 415.[0062] Referring back to block 425, if there is no matching address in storage 232, synchronization arbiter 230 may return a passing count value (e.g., zero) to BIU 220 (block 430). In addition, synchronization arbiter 230 may store the set of addresses in an entry of storage 232 (block 435).
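For illustration only, the way the requesting side interprets the returned count value, as described in paragraphs [0043], [0044], and [0061], can be sketched as a small decision helper. The Python below is an expository assumption (the function name and return labels are invented); it mirrors the role of count compare circuit 221: zero passes, a negative value signals arbiter overload, and a positive value is the failing interferer count.

```python
def classify_arbiter_count(count):
    """Illustrative classification of the count value returned by
    synchronization arbiter 230, as seen by count compare circuit 221."""
    if count == 0:
        return "pass"                  # passing count: enter the atomic phase
    if count < 0:
        return "back-off"              # storage 232 full: pause requests a while
    return "retry-other-resource"      # failure: use the count to choose a
                                       # different resource next attempt
```

Software receiving "retry-other-resource" might, per paragraph [0033], use the count to select a different memory resource on its next synchronization attempt.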
BIU 220 may send the passing count value to the requesting processor/core register specified in the ACQUIRE instruction. As such, the requesting processor/core may manipulate or otherwise operate on the data at the requested addresses (block 440). If the operation is not complete (block 445), BIU 220 defers sending a completion message to synchronization arbiter 230. When the operation in the critical section is complete, such as when the RELEASE instruction is executed, BIU 220 may send a completion message to synchronization arbiter 230. Upon receiving the completion message, synchronization arbiter 230 may flush the corresponding addresses from storage 232, thereby releasing those addresses back to the system (block 450) for use by another processor/core. In addition, load/store unit 270 updates the data cache for all instructions in that critical section that retired.
[0063] As described above, if a coherency invalidation probe hits on an address in the critical section during processing of the critical section, the response to that probe may be dependent upon the state of processing of the critical section (i.e., whether or not the cache lines have been acquired). FIG. 4 is a flow diagram describing the operation of the embodiments of FIG. 1 and FIG. 2 when a coherency invalidation probe is received.
[0064] Referring collectively to FIG. 1 through FIG. 4 and beginning in block 505 of FIG. 4, a Probe with INValidate is received and hits on a critical section address in load/store unit 270. If the requested lines have been successfully acquired (block 510) (e.g., a coherency invalidation probe is received after synchronization arbiter 230 has provided a pass count value and stored the set of addresses within storage 232), BIU 220 may send a Failure-to-Requestor response as a response to the probe (block 515).
At the requesting processor core, this Failure-to-Requestor response should cause a failure of the ACQUIRE instruction if the processor core was operating in a critical section, or a retry of the addresses if not.
[0065] Referring back to block 510, if the requested lines have not been acquired, the processor core may ignore any count value received from synchronization arbiter 230 (block 520). Load/store unit 270 may notify instruction dispatch and control unit 257 that there is a probe hit (e.g., Prb hit signal), and thus there is a Failure-to-Acquire. As such, the ACQUIRE instruction is made to fail, as described above. Thus, to an outside observer the ACQUIRE instruction simply failed.
[0066] It is noted that although the computer system 100 described above includes processing nodes that include one or more processor cores, it is contemplated that in other embodiments, the advanced synchronization facility and associated hardware may be implemented using stand-alone processors or a combination of processing nodes and stand-alone processors, as desired. In such embodiments, each stand-alone processor may include all or part of the above-described hardware and may be capable of executing the instructions that are part of the advanced synchronization facility. As such, the terms processor and processor core may be used somewhat synonymously, except when specifically enumerated to be different.
[0067] Code and/or data that implements the functionality described in the preceding sections may also be provided on a computer accessible/readable medium. Generally speaking, a computer accessible/readable medium may include any media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible medium may include storage media such as magnetic or optical media, e.g., disk (fixed or removable), CD-ROM, or DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, volatile or nonvolatile memory media such as RAM (e.g.
synchronous dynamic RAM (SDRAM), Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, Flash memory, non-volatile memory (e.g., Flash memory) accessible via a peripheral interface such as the Universal Serial Bus (USB) interface, etc., as well as media accessible via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
[0068] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Industrial Applicability: This invention may generally be applicable to microprocessors.
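The arbitration flow of blocks 415-465 described above can be modeled in software. This is an illustrative sketch only: the class and method names (`SyncArbiter`, `acquire`, `release`) are invented for the sketch and do not appear in the disclosure, and a hardware arbiter would operate on physical addresses carried in unCacheable writes rather than on Python sets.

```python
# Illustrative software model of the synchronization arbiter flow of FIG. 3.
# Names and data structures are assumptions, not from the disclosure.

class SyncArbiter:
    def __init__(self):
        # Each entry maps a frozenset of cache-line addresses (a critical
        # section's address set) to its contention count, as in storage 232.
        self.entries = {}

    def acquire(self, addresses):
        """Check a requested address set for interference (blocks 420-435).

        Returns 0 (pass) and records the set, or a nonzero incremented
        count on interference; the failed set is discarded (blocks 455-460).
        """
        requested = frozenset(addresses)
        for held, count in self.entries.items():
            if held & requested:  # any address already in a critical section
                self.entries[held] = count + 1
                return self.entries[held]  # failure: new count returned
        self.entries[requested] = 0
        return 0  # pass

    def release(self, addresses):
        """Completion message: flush the set, releasing the lines (block 450)."""
        self.entries.pop(frozenset(addresses), None)
```

For example, a second requester that overlaps a held address receives a rising count, which it can use to choose different memory resources on its next attempt.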
The invention relates to on-die crossover temperature management for memory devices. Control logic in a memory device receives a request to read data from a memory array of the memory device, the request including an indication of a segment of the memory array in which the data is stored, and determines whether a write temperature associated with the data is stored in a flag byte corresponding to the segment of the memory array. In response to determining that the write temperature associated with the data is stored in the flag byte, the control logic determines a crossover temperature of the data based on the write temperature and a read temperature when the request to read the data is received, determines a program/erase cycle count associated with the segment of the memory array, and determines, based on the crossover temperature and the program/erase cycle count, whether to perform a corrective action to calibrate a read voltage level to be applied to the memory array to read the data from the segment.
1. A memory device comprising:
a memory array; and
control logic operatively coupled to the memory array to perform operations comprising:
receiving a request to read data from the memory array, the request including an indication of a segment of the memory array in which the data is stored;
determining whether a write temperature associated with the data is stored in a flag byte corresponding to the segment of the memory array;
in response to determining that the write temperature associated with the data is stored in the flag byte, determining a crossover temperature of the data based on the write temperature and a read temperature when the request to read the data is received;
determining a program/erase cycle count associated with the segment of the memory array; and
determining, based on the crossover temperature and the program/erase cycle count, whether to perform a corrective action to calibrate a read voltage level to be applied to the memory array to read the data from the segment.
2. The memory device of claim 1, wherein determining the crossover temperature of the data comprises determining a difference between the write temperature and the read temperature.
3. The memory device of claim 1, wherein determining the program/erase cycle count comprises at least one of: reading the program/erase cycle count from the flag byte corresponding to the segment of the memory array, or receiving an indication of the program/erase cycle count with the request to read the data from the memory array.
4.
The memory device of claim 1, wherein determining whether to perform the corrective action to calibrate the read voltage level comprises:
determining whether the crossover temperature satisfies a crossover temperature threshold criterion; and
in response to determining that the crossover temperature satisfies the crossover temperature threshold criterion, determining whether the program/erase cycle count satisfies a cycle threshold criterion.
5. The memory device of claim 4, wherein determining whether to perform the corrective action to calibrate the read voltage level further comprises:
in response to determining that the program/erase cycle count satisfies the cycle threshold criterion, determining to perform the corrective action to calibrate the read voltage level.
6. The memory device of claim 5, wherein the control logic is to perform operations further comprising:
performing the corrective action to calibrate the read voltage level, wherein performing the corrective action includes at least one of: executing a read voltage calibration command to modify the read voltage level, or executing a corrective read command to decouple overlapping threshold voltage distributions in the data.
7. The memory device of claim 1, wherein the control logic is to perform operations further comprising:
in response to determining that the write temperature associated with the data is not stored in the flag byte, determining whether the read temperature satisfies a read temperature threshold criterion; and
in response to determining that the read temperature satisfies the read temperature threshold criterion, determining whether the program/erase cycle count satisfies a cycle threshold criterion.
8.
The memory device of claim 7, wherein the control logic is to perform operations further comprising:
in response to determining that the program/erase cycle count satisfies the cycle threshold criterion or that the program/erase cycle count is not available, determining to perform the corrective action to calibrate the read voltage level.
9. A method comprising:
receiving a request to read data from a memory array of a memory device, the request including an indication of a segment of the memory array in which the data is stored;
determining whether a write temperature associated with the data is stored in a flag byte corresponding to the segment of the memory array;
in response to determining that the write temperature associated with the data is stored in the flag byte, determining a crossover temperature of the data based on the write temperature and a read temperature when the request to read the data is received;
determining a program/erase cycle count associated with the segment of the memory array; and
determining, based on the crossover temperature and the program/erase cycle count, whether to perform a corrective action to calibrate a read voltage level to be applied to the memory array to read the data from the segment.
10. The method of claim 9, wherein determining the crossover temperature of the data comprises determining a difference between the write temperature and the read temperature.
11. The method of claim 9, wherein determining the program/erase cycle count comprises at least one of: reading the program/erase cycle count from the flag byte corresponding to the segment of the memory array, or receiving an indication of the program/erase cycle count with the request to read the data from the memory array.
12.
The method of claim 9, wherein determining whether to perform the corrective action to calibrate the read voltage level comprises:
determining whether the crossover temperature satisfies a crossover temperature threshold criterion; and
in response to determining that the crossover temperature satisfies the crossover temperature threshold criterion, determining whether the program/erase cycle count satisfies a cycle threshold criterion.
13. The method of claim 12, wherein determining whether to perform the corrective action to calibrate the read voltage level further comprises:
in response to determining that the program/erase cycle count satisfies the cycle threshold criterion, determining to perform the corrective action to calibrate the read voltage level.
14. The method of claim 13, further comprising:
performing the corrective action to calibrate the read voltage level, wherein performing the corrective action includes at least one of: executing a read voltage calibration command to modify the read voltage level, or executing a corrective read command to decouple overlapping threshold voltage distributions in the data.
15. The method of claim 9, further comprising:
in response to determining that the write temperature associated with the data is not stored in the flag byte, determining whether the read temperature satisfies a read temperature threshold criterion; and
in response to determining that the read temperature satisfies the read temperature threshold criterion, determining whether the program/erase cycle count satisfies a cycle threshold criterion.
16. The method of claim 15, further comprising:
in response to determining that the program/erase cycle count satisfies the cycle threshold criterion or that the program/erase cycle count is not available, determining to perform the corrective action to calibrate the read voltage level.
17.
A memory device comprising:
a memory array; and
control logic operatively coupled to the memory array to perform operations comprising:
receiving a request to program data into the memory array, the request including an indication of a program/erase cycle count associated with a segment of the memory array in which the data is to be stored;
determining a write temperature when the request to program the data is received;
programming the data to the segment of the memory array; and
programming the write temperature and the program/erase cycle count into a flag byte corresponding to the segment of the memory array.
18. The memory device of claim 17, wherein determining the write temperature comprises reading an indication of the write temperature included in the request to program the data to the memory array.
19. The memory device of claim 17, further comprising:
a temperature sensor operatively coupled to the control logic, wherein determining the write temperature includes receiving a value from the temperature sensor.
20. The memory device of claim 17, wherein the write temperature and the program/erase cycle count are maintained in the flag byte until the data is read from the segment, and wherein the control logic is to determine, based on a crossover temperature and the program/erase cycle count, whether to perform a corrective action to calibrate a read voltage level to be applied to the memory array to read the data from the segment.
On-Die Crossover Temperature Management for Memory Devices
Technical Field
Embodiments of the present disclosure relate generally to memory subsystems and, more particularly, to on-die crossover temperature management for memory devices of a memory subsystem.
Background
A memory subsystem may include one or more memory devices that store data. A memory device may be, for example, a non-volatile memory device or a volatile memory device. In general, a host system can utilize a memory subsystem to store data at and retrieve data from a memory device.
Summary
In one aspect, the present application is directed to a memory device comprising: a memory array; and control logic operatively coupled to the memory array to perform operations comprising: receiving a request to read data from the memory array, the request including an indication of a segment of the memory array in which the data is stored; determining whether a write temperature associated with the data is stored in a flag byte corresponding to the segment of the memory array; in response to determining that the write temperature associated with the data is stored in the flag byte, determining a crossover temperature of the data based on the write temperature and a read temperature when the request to read the data is received; determining a program/erase cycle count associated with the segment of the memory array; and determining, based on the crossover temperature and the program/erase cycle count, whether to perform a corrective action to calibrate a read voltage level to be applied to the memory array to read the data from the segment.
In another aspect, the present application relates to a method comprising: receiving a request to read data from a memory array of a memory device, the request including an indication of a segment of the memory array in which the data is stored; determining whether a write temperature associated with the data is stored in a flag byte corresponding to the
segment of the memory array; in response to determining that the write temperature associated with the data is stored in the flag byte, determining a crossover temperature of the data based on the write temperature and a read temperature when the request to read the data is received; determining a program/erase cycle count associated with the segment of the memory array; and determining, based on the crossover temperature and the program/erase cycle count, whether to perform a corrective action to calibrate a read voltage level to be applied to the memory array to read the data from the segment.
In another aspect, the present application is directed to a memory device comprising: a memory array; and control logic operatively coupled to the memory array to perform operations comprising: receiving a request to program data into the memory array, the request including an indication of a program/erase cycle count associated with a segment of the memory array in which the data is to be stored; determining a write temperature when the request to program the data is received; programming the data to the segment of the memory array; and programming the write temperature and the program/erase cycle count into a flag byte corresponding to the segment of the memory array.
Brief Description of the Drawings
The present disclosure will be more fully understood from the detailed description given below and the accompanying drawings of various embodiments of the disclosure.
FIG. 1A illustrates an example computing system including a memory subsystem, according to some embodiments of the present disclosure.
FIG. 1B is a block diagram of a memory device in communication with a memory subsystem controller of a memory subsystem, according to some embodiments of the present disclosure.
FIG. 2 is a schematic diagram of a portion of an array of memory cells that may be used in a memory of the type described with reference to FIG.
1B, according to some embodiments of the present disclosure.
FIG. 3 is a flowchart of an example method of storing crossover temperature data on a memory device during a programming operation, according to some embodiments of the present disclosure.
FIG. 4 is a flowchart of an example method for on-die crossover temperature management of memory devices of a memory subsystem, according to some embodiments of the present disclosure.
FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
Detailed Description
Aspects of the present disclosure relate to on-die crossover temperature management for memory devices of a memory subsystem. A memory subsystem may be a storage device, a memory module, or a mixture of storage devices and memory modules. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. Typically, a host system may utilize a memory subsystem that includes one or more components, such as memory devices, that store data. The host system can provide data for storage at the memory subsystem, and can request data to be retrieved from the memory subsystem.
The memory subsystem may include high-density non-volatile memory devices where it is desirable to retain data when power is not supplied to the memory device. For example, NAND memory, such as 3D flash NAND memory, provides storage in a compact, high-density configuration. A non-volatile memory device is a package of one or more dies, each die containing one or more planes. For some types of non-volatile memory devices (e.g., NAND memory), each plane contains a set of physical blocks. Each block contains a set of pages. Each page contains a group of memory cells ("cells"). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information and has various logic states related to the number of bits stored.
Logical states can be represented as binary values, such as "0" and "1," or combinations of such values.
A memory device may be composed of bits arranged in a two-dimensional or three-dimensional grid. Memory cells are formed on a silicon wafer in an array of columns (also referred to below as bit lines) and rows (also referred to below as word lines). A word line may refer to one or more rows of memory cells in a memory device that are used with one or more bit lines to generate an address for each memory cell. The intersection of a bit line and a word line constitutes the address of a memory cell. Hereinafter, a block refers to a unit of a memory device used to store data, and may include groups of memory cells, groups of word lines, word lines, or individual memory cells. One or more blocks can be grouped together to form separate partitions (e.g., planes) of a memory device in order to allow parallel operations on each plane.
Bit flip errors can occur in some memory devices when there is insufficient separation between the corresponding threshold voltages (Vt) of two adjacent bit levels (also referred to as "states"). Typically, each binary value stored in a memory cell has a different associated threshold voltage, with the lowest binary value having the highest threshold voltage, the highest binary value having the lowest threshold voltage, and intermediate states having progressively different threshold voltage values. For example, a memory cell configured as triple-level cell (TLC) memory can have eight states, with each state having a corresponding Vt. Similarly, a memory cell configured as quad-level cell (QLC) memory may have 16 states, with each state having a corresponding Vt. In certain memory devices, bit flip errors can be reduced (e.g., minimized) by providing better level separation in the threshold voltage (Vt) distributions.
However, as more bits are stored per memory cell, the separation between two adjacent levels decreases.
In many memory devices, the level separation of threshold voltages becomes further reduced (or shifted) due to changes in environmental conditions such as crossover temperature effects. In cases where the memory cells are operated (e.g., read) at a different temperature range than the temperature at which the memory cells were programmed, the crossover temperature negatively impacts the level separation. For example, crossover temperature effects can arise when data is read from a memory cell at a different temperature than the temperature at which the data was written into the memory cell. Errors caused by crossover temperature can accumulate because shifted levels cross threshold boundaries, causing bit flip errors, and/or because overlapping levels result in an increased number of bit flip errors. Bit flip errors degrade reliability and data retention due to increased error rates. As the difference between the temperature at which data is written and the temperature at which it is read increases, the error rate of the data also increases due to level shift and level overlap.
As the storage capacity of memory cells is increased to store more bits, additional error correction operations may be utilized to meet the reliability requirements of the memory subsystem. For example, error correction code (ECC) can be used to correct crossover temperature related bit errors. SSDs based on QLC NAND can utilize more complex error correction operations than SSDs using SLC, MLC or TLC NAND flash. Therefore, under certain crossover temperature conditions, a large number of error correction operations will be performed to correct crossover temperature related bit flip errors.
These error correction operations consume processing bandwidth in the memory subsystem and increase read command latency.
Certain memory devices and memory subsystems attempt to reduce error rates using various techniques, including adjusting read voltage levels. This can include determining a compensation offset value to account for the threshold voltage shift of a given memory cell. Since the threshold voltage shift can vary depending on the process variation in each memory cell, the location of the memory cell (i.e., die-to-die variation), and the number of program/erase cycles performed on the cell, this calibration process can be complex. For example, some memory devices perform instantaneous read voltage calibration to adjust the read voltage level applied during a read operation depending on the ambient temperature when the read operation is performed. Such devices generally do not take into account the temperature at which the data being read was originally programmed, and thus do not address the specific issues associated with crossover temperatures. Other memory devices do attempt to apply a read voltage offset based on crossover temperature; however, since most memory devices do not track the temperature at which data is written, these memory devices rely on the memory subsystem controller to determine the crossover temperature, which increases the latency and complexity of read operations. Still other memory devices attempt to reduce error rates by calibrating read voltage levels based on the number of program/erase cycles performed on a given segment (e.g., page or block) of the memory device. Since the number of program/erase cycles can vary widely per segment, such tracking can be complex and require the memory subsystem to maintain a large number of expensive additional data structures.
Aspects of the present disclosure address the above and other deficiencies by providing on-die crossover temperature management for memory devices of a memory subsystem.
In one embodiment, when performing a write operation to write host data to a page of the memory device, control logic on the memory device may store an indication of the temperature at which the data was written (i.e., the "write temperature") in a flag byte associated with the segment of the memory device. Additionally, the control logic may store an indication of the segment's program/erase cycle count in the flag byte. Depending on the embodiment, either or both of the write temperature and program/erase cycle count can be tracked directly by control logic on the memory device, or can be received from the memory subsystem controller or host system that issued the write command associated with the write operation. This information can remain stored in the flag byte on the memory device, and can be quickly accessed and used for read voltage calibration when the host data written to the segment is later read.
When a read command is received at the memory device from a memory subsystem controller or a host system, control logic on the memory device may identify the segment of the memory device to be read and determine whether a write temperature for the requested data is stored in the flag byte associated with the segment. If so, the control logic can determine the crossover temperature (i.e., the difference between the write temperature and the ambient temperature when the read command is received) and the number of program/erase cycles associated with the segment. Depending on the embodiment, the number of program/erase cycles may be read from the flag byte or received in conjunction with the read command. In one embodiment, using the crossover temperature and the number of program/erase cycles as inputs, the control logic may determine the read voltage offset (e.g., from a lookup table or other data structure stored on the memory device).
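The per-segment flag-byte bookkeeping described above can be sketched as a small model. All names (`SegmentFlags`, `on_program`, `lookup`) and the dictionary layout are assumptions for illustration; an actual device would store these values in spare (flag) bytes of the physical page.

```python
# Illustrative model of per-segment flag-byte bookkeeping; names are assumed.
class SegmentFlags:
    def __init__(self):
        self._flags = {}  # segment id -> (write_temp_c, pe_count)

    def on_program(self, segment, write_temp_c, pe_count):
        # Recorded at program time so the read path can later compute the
        # crossover temperature entirely on-die, without controller help.
        self._flags[segment] = (write_temp_c, pe_count)

    def lookup(self, segment):
        # Returns (write_temp_c, pe_count), or (None, None) if no write
        # temperature is stored for the segment (the fallback path below).
        return self._flags.get(segment, (None, None))
```

On the read path, the stored write temperature and program/erase cycle count would feed the offset lookup described above.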
In one embodiment, the control logic may further determine whether the crossover temperature satisfies a threshold criterion (e.g., greater than or equal to a threshold level), and if so, determine whether the number of program/erase cycles satisfies a threshold criterion (e.g., greater than or equal to a threshold level). If both the crossover temperature and the number of program/erase cycles satisfy the corresponding threshold criteria, the control logic may take corrective action to calibrate the read voltage offset before applying the read voltage to the memory array of the memory device to read the requested data. Depending on the embodiment, the corrective action may include, for example, calibrating the read voltage offset on the fly or enabling smarter and longer read commands that may reduce the number of bit flips. Alternatively, if the write temperature is not stored in the flag byte, or the crossover temperature does not meet the threshold criterion (e.g., less than the threshold level), the control logic may determine whether the ambient temperature when the read command is received meets a threshold criterion (e.g., above a high threshold or below a low threshold). If so, the control logic can analyze the number of program/erase cycles, as described above, to determine whether to take corrective action. If the ambient temperature does not meet the threshold criterion (e.g., below the high threshold and above the low threshold), or the number of program/erase cycles does not meet the threshold criterion (e.g., below the threshold level), the control logic can perform the read operation with the default read voltage offset.
Advantages of this approach include, but are not limited to, improved performance of memory devices. The techniques described herein provide a simple on-die crossover temperature solution that exploits the write temperature, read temperature, and number of program/erase cycles for a given segment of the memory device.
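A minimal sketch of this decision flow, following the branches described above. The threshold values are invented placeholders (the disclosure does not specify any), and the function name is an assumption.

```python
# Assumed placeholder thresholds; the disclosure does not give values.
XTEMP_THRESHOLD = 40         # crossover temperature threshold, degrees C
PE_THRESHOLD = 1000          # program/erase cycle count threshold
HIGH_TEMP, LOW_TEMP = 70, 0  # ambient read-temperature bounds, degrees C

def needs_corrective_action(write_temp, read_temp, pe_count):
    """Decide whether to calibrate the read voltage before the read."""
    if write_temp is not None:
        # Write temperature found in the flag byte: compute the crossover
        # temperature as the write/read temperature difference.
        xtemp = abs(read_temp - write_temp)
        if xtemp >= XTEMP_THRESHOLD:
            # P/E count is available on this path (flag byte or read command).
            return pe_count >= PE_THRESHOLD
        return False
    # No stored write temperature: fall back to the ambient read temperature.
    if read_temp >= HIGH_TEMP or read_temp <= LOW_TEMP:
        # A P/E count that satisfies the criterion, or that is unavailable,
        # triggers the corrective action.
        return pe_count is None or pe_count >= PE_THRESHOLD
    return False
```

Otherwise the read proceeds with the default read voltage offset, keeping normal-latency reads on the common path.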
This method detects workload conditions and adjusts the read voltage offset and read command to achieve a balance between performance and quality of service under both normal operating conditions and extreme conditions. Longer-latency, lower-error-rate commands can be selectively deployed only under extreme operating conditions to reduce error handling trigger rates in the memory subsystem. Reducing the error handling trigger rate in this way improves average throughput and quality of service despite the higher latency of those commands. Therefore, at normal temperatures, normal-latency read conditions are used without impacting performance or latency. The control logic on the memory device can optionally handle extreme conditions, which means that the memory device has more margin at one extreme temperature (e.g., high temperature, such as in a system without temperature regulation), while read calibration and corrective read operations can be deployed at the other extreme.
FIG. 1A illustrates an example computing system 100 including a memory subsystem 110 according to some embodiments of the disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such devices.
Memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices include solid state drives (SSDs), flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs).
Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs).
Computing system 100 may be a computing device such as a desktop computer, laptop computer, web server, mobile device, vehicle (e.g., airplane, drone, train, automobile, or other means of transportation), an Internet of Things (IoT) enabled device, an embedded computer (e.g., an embedded computer contained in a vehicle, industrial equipment, or networked business device), or such a computing device that includes memory and a processing device.
Computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, host system 120 is coupled to different types of memory subsystems 110. FIG. 1A illustrates an example of a host system 120 coupled to a memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical connections, optical connections, magnetic connections, etc.
Host system 120 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a PCIe controller, SATA controller). Host system 120 uses memory subsystem 110, for example, to write data to and read data from memory subsystem 110.
Host system 120 may be coupled to memory subsystem 110 via a physical host interface.
Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a Double Data Rate (DDR) memory bus, a Small Computer System Interface (SCSI), a Dual In-line Memory Module (DIMM) interface (e.g., a DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between host system 120 and memory subsystem 110. When memory subsystem 110 is coupled with host system 120 by a PCIe interface, host system 120 can further utilize an NVM Express (NVMe) interface to access memory components (e.g., memory device 130). The physical host interface can provide an interface for passing control, address, data, and other signals between memory subsystem 110 and host system 120. FIG. 1A illustrates a memory subsystem 110 as an example. In general, host system 120 can access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

Memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory devices (e.g., memory device 130) include negative-and (NAND) type flash memory and write-in-place memory, such as three-dimensional cross-point ("3D cross-point") memory. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array.
Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Each of memory devices 130 can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), and quad-level cells (QLCs), can store multiple bits per cell. In some embodiments, each of memory devices 130 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, or a QLC portion of memory cells. The memory cells of memory devices 130 can be grouped as pages, which can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks.

Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, and electrically erasable programmable read-only memory (EEPROM).

A memory subsystem controller 115 (or controller 115, for simplicity) can communicate with memory devices 130 to perform operations such as reading data, writing data, or erasing data at memory devices 130, and other such operations. Memory subsystem controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. Memory subsystem controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

Memory subsystem controller 115 can include a processor 117 (e.g., a processing device) configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of memory subsystem controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of memory subsystem 110, including handling communications between memory subsystem 110 and host system 120.

In some embodiments, local memory 119 can include memory registers storing memory pointers, fetched data, etc. Local memory 119 can also include read-only memory (ROM) for storing micro-code. Although the example memory subsystem 110 in FIG.
1A is illustrated as including memory subsystem controller 115, in another embodiment of the present disclosure, memory subsystem 110 does not include a memory subsystem controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, memory subsystem controller 115 can receive commands or operations from host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to memory devices 130. Memory subsystem controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., a physical block address) associated with memory devices 130. Memory subsystem controller 115 can further include host interface circuitry to communicate with host system 120 via the physical host interface. The host interface circuitry can convert commands received from the host system into command instructions to access memory devices 130, and convert responses associated with memory devices 130 into information for host system 120.

Memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from memory subsystem controller 115 and decode the address to access memory devices 130.

In some embodiments, memory devices 130 include a local media controller 135 that operates in conjunction with memory subsystem controller 115 to execute operations on one or more memory cells of memory devices 130.
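The logical-to-physical address translation attributed to memory subsystem controller 115 above can be illustrated with a minimal sketch. A real flash translation layer is far more elaborate (wear leveling, garbage collection, namespaces), and the class and method names here are hypothetical.

```python
# Hedged sketch: a flat table mapping logical block addresses (LBAs)
# to physical block addresses (PBAs). Only the lookup itself is shown.

class AddressTranslator:
    def __init__(self):
        self._l2p = {}  # LBA -> physical block address

    def map_block(self, lba: int, pba: int) -> None:
        """Record (or update) the physical location of a logical block."""
        self._l2p[lba] = pba

    def translate(self, lba: int) -> int:
        """Resolve an LBA to its physical block address."""
        # A miss would normally mean an unwritten LBA; raise for clarity.
        if lba not in self._l2p:
            raise KeyError(f"LBA {lba} is unmapped")
        return self._l2p[lba]
```

Because writes to flash go to new physical locations, the same LBA can be remapped repeatedly over the device's lifetime, which is why the table is mutable.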
An external controller (e.g., memory subsystem controller 115) can externally manage memory device 130 (e.g., perform media management operations on memory device 130). In some embodiments, memory device 130 is a managed memory device, which is a raw memory device 130 having control logic (e.g., local media controller 135) on the die and a controller (e.g., memory subsystem controller 115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. Memory device 130, for example, can represent a single die having some control logic (e.g., local media controller 135) embodied thereon. In some embodiments, one or more components of memory subsystem 110 can be omitted.

In one embodiment, memory subsystem 110 includes a memory interface component 113. Memory interface component 113 is responsible for handling interactions of memory subsystem controller 115 with the memory devices of memory subsystem 110, such as memory device 130. For example, memory interface component 113 can send memory access commands corresponding to requests received from host system 120 to memory device 130, such as program commands, read commands, or other commands. In addition, memory interface component 113 can receive data from memory device 130, such as data retrieved in response to a read command or a confirmation that a program command was successfully performed. In some embodiments, memory subsystem controller 115 includes at least a portion of memory interface 113. For example, memory subsystem controller 115 can include a processor 117 (e.g., a processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. In some embodiments, memory interface component 113 is part of host system 120, an application, or an operating system.

In one embodiment, memory device 130 includes local media controller 135 and a memory array 104.
As described herein, memory array 104 can be logically or physically divided into segments (e.g., dies, blocks, pages, etc.). Each segment can include one or more flag bytes, which are restricted areas in memory array 104 that store system data or other metadata and are generally not accessible or usable by host system 120. In one embodiment, local media controller 135 can utilize the flag bytes in memory array 104 to store certain information associated with host data written to a corresponding segment of memory array 104. For example, in response to receiving a write (i.e., program) request or command from memory interface 113, and while performing the write operation corresponding to the request to write host data to a page of memory array 104, local media controller 135 can store an indication of the temperature at which the data is written (i.e., the "write temperature") in a flag byte associated with the page. In addition, local media controller 135 can store an indication of a program/erase cycle count for the page in the flag byte. Depending on the embodiment, one or both of the write temperature and the program/erase cycle count can be tracked directly by local media controller 135, or can be received by memory interface 113 along with the write request. This information can remain stored in the flag byte on memory device 130 and can be used for read voltage calibration when the host data written to the page is later read. Since the write temperature and the program/erase cycle count are stored in the flag byte on memory device 130, when later performing a read operation, local media controller 135 can quickly and easily access the information in the flag byte, perform the associated calculations (e.g., determine the cross temperature, compare the cross temperature and/or the number of program/erase cycles to corresponding thresholds, etc.), and determine whether calibration of the read voltage is appropriate.
In this manner, local media controller 135 can selectively take corrective action to adjust the read voltage level (e.g., apply a read voltage offset to the default read voltage level) only when necessary, and can avoid the added latency associated with taking unwarranted corrective action and with having to access the cross temperature data and/or program/erase cycle counts from memory subsystem controller 115 in order to complete the read operation. Additional details with regards to the operations of local media controller 135 are described below.

FIG. 1B is a simplified block diagram of a first apparatus, in the form of a memory device 130, in communication with a second apparatus, in the form of a memory subsystem controller 115 of a memory subsystem (e.g., memory subsystem 110 of FIG. 1A), according to an embodiment. Some examples of electronic systems include personal computers, personal digital assistants (PDAs), digital cameras, digital media players, digital recorders, games, appliances, vehicles, wireless devices, mobile telephones, and the like. Memory subsystem controller 115 (e.g., a controller external to memory device 130) can be a memory controller or other external host device.

Memory device 130 includes an array of memory cells 104 logically arranged in rows and columns. Memory cells of a logical row are typically connected to the same access line (e.g., a word line), while memory cells of a logical column are typically selectively connected to the same data line (e.g., a bit line). A single access line can be associated with more than one logical row of memory cells, and a single data line can be associated with more than one logical column. At least a portion of the memory cells (not shown in FIG. 1B) of memory cell array 104 are capable of being programmed to one of at least two target data states.

Row decode circuitry 108 and column decode circuitry 109 are provided to decode address signals. Address signals are received and decoded to access memory cell array 104.
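The row/column decoding just described can be illustrated as splitting a flat cell address into a row (word line) index and a column (bit line) index. The row-major linear layout assumed here is a simplification; actual decode circuitry operates on address signals in hardware.

```python
# Hedged sketch only: show the arithmetic relationship between a flat
# address and its (row, column) position in a row-major array.

def decode_address(address: int, num_columns: int) -> tuple:
    """Return (row, column) for an array with num_columns columns."""
    row, column = divmod(address, num_columns)
    return row, column
```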
Memory device 130 also includes input/output (I/O) control circuitry 160 to manage input of commands, addresses, and data to memory device 130 as well as output of data and status information from memory device 130. An address register 114 is in communication with I/O control circuitry 160 and with row decode circuitry 108 and column decode circuitry 109 to latch the address signals prior to decoding. A command register 124 is in communication with I/O control circuitry 160 and local media controller 135 to latch incoming commands.

A controller (e.g., local media controller 135 internal to memory device 130) controls access to memory cell array 104 in response to the commands and generates status information for external memory subsystem controller 115; that is, local media controller 135 is configured to perform access operations (e.g., read operations, program operations, and/or erase operations) on memory cell array 104. Local media controller 135 is in communication with row decode circuitry 108 and column decode circuitry 109 to control row decode circuitry 108 and column decode circuitry 109 in response to the addresses. As described herein, local media controller 135 can utilize the information stored in flag bytes 150 of memory array 104 to perform on-die cross temperature management on memory device 130. In one embodiment, local media controller 135 is in communication with a temperature sensor 170 disposed within or adjacent to memory device 130. Temperature sensor 170 can be used to measure the ambient temperature at certain points in time, which can represent, for example, a write temperature or a read temperature.

Local media controller 135 is also in communication with a cache register 172. Cache register 172 latches data, either incoming or outgoing, as directed by local media controller 135 to temporarily store data while memory cell array 104 is busy writing or reading, respectively, other data.
During a program operation (e.g., a write operation), data can be passed from cache register 172 to data register 170 for transfer to memory cell array 104; new data can then be latched in cache register 172 from I/O control circuitry 160. During a read operation, data can be passed from cache register 172 to I/O control circuitry 160 for output to memory subsystem controller 115; new data can then be passed from data register 170 to cache register 172. Cache register 172 and/or data register 170 can form (e.g., can form a portion of) a page buffer of memory device 130. The page buffer can further include sensing devices (not shown in FIG. 1B) to sense a data state of a memory cell of memory cell array 104, e.g., by sensing a state of a data line connected to that memory cell. A status register 122 can be in communication with I/O control circuitry 160 and local media controller 135 to latch the status information for output to memory subsystem controller 115.

Memory device 130 receives control signals at local media controller 135 from memory subsystem controller 115 over a control link 132. For example, the control signals can include a chip enable signal CE#, a command latch enable signal CLE, an address latch enable signal ALE, a write enable signal WE#, a read enable signal RE#, and a write protect signal WP#. Additional or alternative control signals (not shown) can further be received over control link 132 depending upon the nature of memory device 130. In one embodiment, memory device 130 receives command signals (which represent commands), address signals (which represent addresses), and data signals (which represent data) from memory subsystem controller 115 over a multiplexed input/output (I/O) bus 134 and outputs data to memory subsystem controller 115 over I/O bus 134.

For example, the commands can be received over input/output (I/O) pins [7:0] of I/O bus 134 at I/O control circuitry 160 and can then be written into command register 124.
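The cache-register/data-register pipelining described above can be sketched behaviorally: during programming, data moves from the cache register to the data register and then into the array, freeing the cache register to latch the next page. This is an illustrative model with assumed names, not a timing-accurate description of any device.

```python
# Hedged sketch of double-buffered page programming, loosely mirroring
# cache register 172 and data register 170 described in the text.

class PageBuffer:
    def __init__(self):
        self.cache_register = None
        self.data_register = None
        self.array_pages = []  # stand-in for the memory cell array

    def latch_from_io(self, page_data: bytes) -> None:
        """Latch incoming page data from the I/O circuitry."""
        self.cache_register = page_data

    def commit_to_array(self) -> None:
        """Move cache -> data register, 'program' the array from it,
        and free the cache register for the next incoming page."""
        self.data_register = self.cache_register
        self.cache_register = None
        self.array_pages.append(self.data_register)
```

The point of the two registers is overlap: a new page can be latched while the previous one is still being programmed from the data register.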
The addresses can be received over input/output (I/O) pins [7:0] of I/O bus 134 at I/O control circuitry 160 and can then be written into address register 114. The data can be received over input/output (I/O) pins [7:0] for an 8-bit device, or input/output (I/O) pins [15:0] for a 16-bit device, at I/O control circuitry 160 and can then be written into cache register 172. The data can subsequently be written into data register 170 for programming memory cell array 104.

In an embodiment, cache register 172 can be omitted, and the data can be written directly into data register 170. Data can also be output over input/output (I/O) pins [7:0] for an 8-bit device, or input/output (I/O) pins [15:0] for a 16-bit device. Although reference can be made to I/O pins, they can include any conductive nodes providing for electrical connection to memory device 130 by an external device (e.g., memory subsystem controller 115), such as conductive pads or conductive bumps as are commonly used.

It will be appreciated by those skilled in the art that additional circuitry and signals can be provided, and that the memory device 130 of FIG. 1B has been simplified. It should be recognized that the functionality of the various block components described with reference to FIG. 1B may not necessarily be segregated to distinct components or component portions of an integrated circuit device. For example, a single component or component portion of an integrated circuit device could be adapted to perform the functionality of more than one block component of FIG. 1B. Alternatively, one or more components or component portions of an integrated circuit device could be combined to perform the functionality of a single block component of FIG. 1B. Additionally, while specific I/O pins are described in accordance with popular conventions for receipt and output of the various signals, it is noted that other combinations, or other numbers, of I/O pins (or other I/O node structures) can be used in the various embodiments.
FIG. 2 is a schematic of a portion of a memory cell array 104 (e.g., a NAND memory array) as could be used in a memory of the type described with reference to FIG. 1B, according to an embodiment. Memory array 104 includes access lines, such as word lines 2020 to 202N, and data lines, such as bit lines 2040 to 204M. Word lines 202 can be connected to global access lines (e.g., global word lines), not shown in FIG. 2, in a many-to-one relationship. For some embodiments, memory array 104 can be formed over a semiconductor that, for example, can be conductively doped to have a conductivity type, such as a p-type conductivity, e.g., to form a p-well, or an n-type conductivity, e.g., to form an n-well.

Memory array 104 can be arranged in rows (each corresponding to a word line 202) and columns (each corresponding to a bit line 204). Each column can include a string of series-connected memory cells (e.g., non-volatile memory cells), such as one of NAND strings 2060 to 206M. Each NAND string 206 can be connected (e.g., selectively connected) to a common source (SRC) 216 and can include memory cells 2080 to 208N. Memory cells 208 can represent non-volatile memory cells for storage of data. The memory cells 208 of each NAND string 206 can be connected in series between a select gate 210 (e.g., a field-effect transistor), such as one of the select gates 2100 to 210M (e.g., that can be source select transistors, commonly referred to as select gate source), and a select gate 212 (e.g., a field-effect transistor), such as one of the select gates 2120 to 212M (e.g., that can be drain select transistors, commonly referred to as select gate drain). Select gates 2100 to 210M can be commonly connected to a select line 214, such as a source select line (SGS), and select gates 2120 to 212M can be commonly connected to a select line 215, such as a drain select line (SGD).
Although depicted as traditional field-effect transistors, select gates 210 and 212 can utilize a structure similar to (e.g., the same as) memory cells 208. Select gates 210 and 212 can represent a number of select gates connected in series, with each select gate in series configured to receive a same or independent control signal.

A source of each select gate 210 can be connected to common source 216. The drain of each select gate 210 can be connected to a memory cell 2080 of the corresponding NAND string 206. For example, the drain of select gate 2100 can be connected to memory cell 2080 of the corresponding NAND string 2060. Therefore, each select gate 210 can be configured to selectively connect a corresponding NAND string 206 to common source 216. A control gate of each select gate 210 can be connected to select line 214.

The drain of each select gate 212 can be connected to the bit line 204 for the corresponding NAND string 206. For example, the drain of select gate 2120 can be connected to the bit line 2040 for the corresponding NAND string 2060. The source of each select gate 212 can be connected to a memory cell 208N of the corresponding NAND string 206. For example, the source of select gate 2120 can be connected to memory cell 208N of the corresponding NAND string 2060. Therefore, each select gate 212 can be configured to selectively connect a corresponding NAND string 206 to the corresponding bit line 204. A control gate of each select gate 212 can be connected to select line 215.

The memory array 104 in FIG. 2 can be a quasi-two-dimensional memory array and can have a generally planar structure, e.g., where the common source 216, NAND strings 206, and bit lines 204 extend in substantially parallel planes. Alternatively, the memory array 104 in FIG.
2 can be a three-dimensional memory array, e.g., where NAND strings 206 can extend substantially perpendicular to a plane containing the common source 216 and to a plane containing the bit lines 204 (which can be substantially parallel to the plane containing the common source 216).

Typical construction of memory cells 208 includes a data-storage structure 234 (e.g., a floating gate, charge trap, etc.) that can determine a data state of the memory cell (e.g., through changes in threshold voltage), and a control gate 236, as shown in FIG. 2. Data-storage structure 234 can include both conductive and dielectric structures, while control gate 236 is generally formed of one or more conductive materials. In some cases, memory cells 208 can further have a defined source/drain (e.g., source) 230 and a defined source/drain (e.g., drain) 232. Memory cells 208 have their control gates 236 connected to (and in some cases forming) a word line 202.

A column of the memory cells 208 can be a NAND string 206 or a number of NAND strings 206 selectively connected to a given bit line 204. A row of the memory cells 208 can be memory cells 208 commonly connected to a given word line 202. A row of memory cells 208 can, but need not, include all the memory cells 208 commonly connected to a given word line 202. Rows of the memory cells 208 can often be divided into one or more groups of physical pages of memory cells 208, and physical pages of the memory cells 208 often include every other memory cell 208 commonly connected to a given word line 202. For example, the memory cells 208 commonly connected to word line 202N and selectively connected to even bit lines 204 (e.g., bit lines 2040, 2042, 2044, etc.) can be one physical page of the memory cells 208 (e.g., even memory cells), while the memory cells 208 commonly connected to word line 202N and selectively connected to odd bit lines 204 (e.g., bit lines 2041, 2043, 2045, etc.)
can be another physical page of the memory cells 208 (e.g., odd memory cells). Although bit lines 2043 to 2045 are not explicitly depicted in FIG. 2, it is apparent from the figure that the bit lines 204 of memory cell array 104 can be numbered consecutively from bit line 2040 to bit line 204M. Other groupings of the memory cells 208 commonly connected to a given word line 202 can also define a physical page of the memory cells 208. For certain memory devices, all memory cells commonly connected to a given word line can be deemed a physical page of memory cells. The portion of a physical page of memory cells (which, in some embodiments, could still be the entire row) that is read during a single read operation or programmed during a single program operation (e.g., an upper or lower page of memory cells) can be deemed a logical page of memory cells. A block of memory cells can include those memory cells that are configured to be erased together, such as all memory cells connected to word lines 2020-202N (e.g., all NAND strings 206 sharing common word lines 202). Unless expressly distinguished, a reference to a page of memory cells herein refers to the memory cells of a logical page of memory cells. Although the example of FIG. 2 is discussed in conjunction with NAND flash, the embodiments and concepts described herein are not limited to a particular array architecture or structure, and can include other structures (e.g., SONOS, phase change, ferroelectric, etc.) and other architectures (e.g., AND arrays, NOR arrays, etc.).

FIG. 3 is a flow diagram of an example method of storing cross temperature data on a memory device during a program operation, in accordance with some embodiments of the present disclosure.
Method 300 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 300 is performed by local media controller 135 of FIG. 1A and FIG. 1B. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

At operation 305, a request is received. For example, control logic (e.g., local media controller 135) can receive a request to program data to a memory array (e.g., memory array 104) of a memory device (e.g., memory device 130). In one embodiment, the request is received from a requestor, such as memory interface 113 of memory subsystem controller 115, or host system 120. In one embodiment, the request includes the data, such as host data or user data, to be programmed to a segment (e.g., a page, block, etc.) of memory device 130, along with an indication of a program/erase (P/E) cycle count associated with the segment where the data is to be stored. In one embodiment, memory subsystem controller 115 tracks the number of program/erase cycles that have been performed on the segment over the lifetime of memory device 130 (e.g., can increment a corresponding counter).
Depending on the embodiment, the program/erase cycle count included with the request indicates either the number of previously performed program/erase cycles or an updated number of program/erase cycles (e.g., one that includes the program operation to be performed in response to the current request).

At operation 310, a write temperature is determined. For example, the control logic can determine the write temperature upon receiving the request to program the data. In one embodiment, the request received at operation 305 includes an indication of the write temperature provided by memory subsystem controller 115, and thus the control logic can read the indication of the write temperature from the request. In another embodiment, the control logic can receive a value from a temperature sensor on memory device 130 (e.g., temperature sensor 170). Depending on the embodiment, the control logic can query temperature sensor 170 for a new write temperature measurement in response to receiving the write request at operation 305, or can use the most recently measured temperature value as the write temperature (e.g., when temperature measurements are routinely made on memory device 130 at periodic intervals).

At operation 315, the data is programmed. For example, the control logic can program the host data received with the request at operation 305 to the identified segment of memory array 104. In one embodiment, the control logic can cause one or more program voltage signals to be applied to the word lines 202 of memory array 104 corresponding to the identified segment.

At operation 320, cross temperature data is programmed. For example, the control logic can program the write temperature determined at operation 310 and the program/erase cycle count received with the request at operation 305 to a designated area corresponding to the segment of memory array 104, such as one of flag bytes 150.
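Operations 305 through 320 can be sketched as a single program path: determine the write temperature, program the host data, and record the metadata alongside it. The field names, dictionary-based "flag byte" stand-in, and sensor interface below are assumptions for illustration only.

```python
# Hedged sketch of the program path (operations 305-320): program host
# data to a page and record the write temperature and P/E cycle count
# in that page's flag-byte area.

def program_with_metadata(array, flag_bytes, page, host_data,
                          pe_cycle_count, temp_sensor):
    # Operation 310: determine the write temperature (here by querying
    # an on-die sensor; it could instead arrive with the request).
    write_temp = temp_sensor()
    # Operation 315: program the host data into the identified segment.
    array[page] = host_data
    # Operation 320: program the cross temperature metadata alongside it.
    flag_bytes[page] = {"write_temp": write_temp,
                        "pe_cycles": pe_cycle_count}
    return write_temp
```

Keeping the metadata next to the data on the die is what later lets the read path avoid a round trip to the memory subsystem controller.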
In one embodiment, each segment (e.g., page) of memory array 104 has one or more corresponding flag bytes 150 used to store metadata associated with the programmed host data. Flag bytes 150 can be restricted areas in memory array 104 that store system data or other metadata, and are generally not accessible or usable by host system 120. In other embodiments, the control logic can store the write temperature and the program/erase cycle count in some other designated area on memory device 130. In one embodiment, the write temperature and the program/erase cycle count are maintained in flag byte 150 until the host data is read from the segment of memory array 104. At that point, the control logic can determine whether to perform a corrective action, based on the cross temperature and the program/erase cycle count, to calibrate the read voltage level to be applied to memory array 104 to read the host data from the segment, as described in more detail below with respect to FIG. 4.

FIG. 4 is a flow diagram of an example method of on-die cross temperature management for a memory device of a memory subsystem, in accordance with some embodiments of the present disclosure. Method 400 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, method 400 is performed by local media controller 135 of FIG. 1A and FIG. 1B. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments.
Therefore, not all processes are required in every embodiment. Other process flows are also possible.

At operation 405, a request is received. For example, control logic (e.g., local media controller 135) may receive a request to read data from a memory array (e.g., memory array 104) of a memory device (e.g., memory device 130). In one embodiment, the request is received from a requestor, such as the memory interface 113 of the memory subsystem controller 115 or the host system 120. In one embodiment, the request includes an indication of the segment (e.g., page) in memory array 104 where the data is stored.

At operation 410, a determination is made. For example, the control logic may determine whether a write temperature associated with the data is stored in the flag byte 150 corresponding to the segment of the memory array 104. As described above with respect to FIG. 3, in some embodiments, control logic may program write temperature data to flag byte 150 when host data is programmed to a segment of memory array 104. In one embodiment, when a read request is received at operation 405, the control logic may identify the flag byte 150 corresponding to the segment indicated in the read request and determine whether the write temperature (i.e., an indication of the ambient temperature when the data was programmed to the memory array 104) is stored in the flag byte 150. The write temperature in the flag byte 150 may be identified by a unique identifier, or there may be a designated field in the flag byte 150 in which the write temperature is stored.

In response to determining that the write temperature associated with the data is stored in the flag byte 150, at operation 415, a crossover temperature is determined. For example, the control logic may determine a crossover temperature for the data based on the write temperature and the read temperature when the request to read the data is received at operation 405.
In one embodiment, to determine the crossover temperature, the control logic may determine the difference between the write temperature and the read temperature. In one embodiment, the request received at operation 405 includes an indication of the read temperature provided by the memory subsystem controller 115, and thus control logic may read the indication of the read temperature from the request. In another embodiment, the control logic may receive a value from a temperature sensor on memory device 130 (e.g., temperature sensor 170). Depending on the embodiment, the control logic may query the temperature sensor 170 for a new read temperature measurement in response to receiving the read request at operation 405, or may use the most recently measured temperature value as the read temperature (e.g., when temperature measurements are routinely taken at periodic intervals on the memory device 130).

At operation 420, a cycle count is determined. For example, control logic may determine the program/erase (P/E) cycle count associated with the segment of memory array 104. As described above with respect to FIG. 3, in some embodiments, control logic may program the program/erase cycle count to the flag bytes 150 when host data is programmed to the segment of memory array 104. Therefore, in one embodiment, the control logic can read the program/erase cycle count from the flag byte 150. In another embodiment, the request received at operation 405 includes an indication of the program/erase cycle count for the segment of memory array 104.

At operation 425, a read voltage offset is determined. For example, the control logic may determine a read voltage offset by which a default read voltage level is adjusted when applied to the memory array 104 to read data from the segment.
In one embodiment, using the crossover temperature determined at operation 415 and the program/erase cycle count determined at operation 420 as inputs, the control logic may identify a corresponding entry in a data structure (e.g., a look-up table) stored on the memory device 130, where the entry contains an indication of the appropriate read voltage offset. In one embodiment, different combinations of crossover temperatures and program/erase cycle counts can have different read voltage offsets (i.e., the amount by which the default read voltage can be increased or decreased when performing a read operation). In one embodiment, the control logic further determines, based on the crossover temperature and the program/erase cycle count, whether to perform corrective action to calibrate the read voltage level to be applied to the memory array 104 to read data from the segment.

At operation 430, a determination is made. For example, the control logic may determine whether the crossover temperature determined at operation 415 satisfies crossover temperature threshold criteria. In one embodiment, the control logic may determine that the crossover temperature threshold criterion is met if the crossover temperature is greater than or equal to a crossover temperature threshold level.

In response to determining that the crossover temperature satisfies the crossover temperature threshold criterion, at operation 435, another determination is made. For example, the control logic may determine whether the program/erase cycle count determined at operation 420 satisfies cycle threshold criteria. In one embodiment, the control logic may determine that the cycle threshold criterion is met if the program/erase cycle count is greater than or equal to the cycle threshold level.
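As a rough sketch, the offset lookup of operation 425 and the threshold checks of operations 430 and 435 might be modeled as follows. The bin boundaries, threshold values, and millivolt offsets here are placeholders invented for the example; an actual device would use a calibrated on-die table.

```python
# Hypothetical lookup table: (crossover-temp bin, P/E-cycle bin) -> offset (mV).
OFFSET_TABLE = {
    ("low", "low"): 0,    ("low", "high"): -10,
    ("high", "low"): -15, ("high", "high"): -30,
}

def plan_read(write_temp, read_temp, pe_cycles,
              xover_threshold=30, cycle_threshold=1000):
    """Return (read voltage offset, whether to perform corrective action)."""
    # Operation 415: crossover temperature from the write/read difference.
    crossover = abs(write_temp - read_temp)
    t_bin = "high" if crossover >= xover_threshold else "low"
    c_bin = "high" if pe_cycles >= cycle_threshold else "low"
    # Operation 425: look up the read voltage offset for this combination.
    offset = OFFSET_TABLE[(t_bin, c_bin)]
    # Operations 430/435: corrective action only when both the crossover
    # temperature and the cycle count satisfy their threshold criteria.
    calibrate = crossover >= xover_threshold and pe_cycles >= cycle_threshold
    return offset, calibrate
```

For instance, a large write/read temperature difference on a heavily cycled segment selects the largest offset and triggers calibration, while a lightly cycled segment read near its write temperature keeps the default read voltage.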
In embodiments where program/erase cycle count information is not available, control logic may assume by default that the program/erase cycle count satisfies the threshold criteria, so that processing continues to operation 440.

In response to determining that the program/erase cycle count satisfies the cycle threshold criterion, the control logic may determine to perform corrective action to calibrate the read voltage level. At operation 440, corrective action is performed. For example, the control logic can perform corrective action to calibrate the read voltage level. In one embodiment, performing the corrective action includes executing a read voltage calibration command to modify the read voltage level. In another embodiment, performing the corrective action includes executing a corrective read command to decouple threshold voltage distribution overlap in the data. When the read voltage calibration command is executed, a read voltage offset is applied according to the write-read temperature difference. These offsets are discrete and have some associated noise. At extreme write-read temperature differences, the noise can be large. In one embodiment, instead of relying on the read voltage offsets in a lookup table, the read voltage offsets are scanned to identify the optimal read voltage that achieves the greatest separation between Vt levels (or states). Regarding the corrective read command, a NAND page may contain tens of thousands to hundreds of thousands of cells. Due to the influence of neighboring cells, each cell experiences a different degree of Vt shift. This effect is a function of the Vt level programmed to the adjacent aggressor cell and the Vt level programmed to the victim cell. Therefore, the Vt offset is not uniform across every cell in the victim page. Consequently, this may lead to widening of the Vt states, reduced separation between levels, and an increased number of bit flips.
In one embodiment, instead of reading all cells in a page in one iteration, control logic may read cells in a page in multiple iterations depending on the aggressor state. For example, all cells whose neighboring aggressor is in state X are read in one iteration, while all cells whose neighboring aggressor is in state Y are read in a separate iteration. In this way, the control logic can filter out Vt broadening noise due to different aggressor Vt states and decouple the distributions. At the end of the read command, data read from the different iterations can be combined to form the entire page. After performing the corrective action, the control logic can perform a read operation using the calibrated read voltage level. For example, the control logic may cause one or more read voltage signals to be applied to word lines 202 in memory array 104 corresponding to the identified segment.

If the control logic determines at operation 410 that the write temperature associated with the data is not stored in the flag byte 150, or at operation 430 that the crossover temperature does not satisfy the crossover temperature threshold criteria, another determination is made at operation 445. For example, control logic may determine whether the read temperature satisfies read temperature threshold criteria. In one embodiment, the control logic may determine that the read temperature threshold criteria are met if the read temperature is greater than or equal to a high read temperature threshold level, or less than or equal to a low read temperature threshold level. Therefore, the read temperature threshold criterion is met if the read temperature is either extremely high or extremely low.

In response to determining that the read temperature satisfies the read temperature threshold criteria, control logic may proceed to operation 435 to determine whether the program/erase cycle count satisfies the cycle threshold criteria, as described above.
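The iteration-per-aggressor-state corrective read described above can be sketched as follows. The `read_iteration` callable stands in for the actual NAND sensing operation (with a per-state compensated read level); it and the data layout are assumptions made for illustration, not the disclosure's command interface.

```python
def corrective_read(num_cells, aggressor_states, read_iteration):
    """Read a page in multiple iterations, one per distinct aggressor
    state, then combine the per-iteration results into the full page."""
    page = [None] * num_cells
    for state in sorted(set(aggressor_states)):
        # One iteration: sense only the cells whose neighboring aggressor
        # was programmed to `state`, so the Vt shift that aggressor state
        # induces can be compensated uniformly within the iteration.
        idxs = [i for i, s in enumerate(aggressor_states) if s == state]
        for i, bit in zip(idxs, read_iteration(idxs, state)):
            page[i] = bit
    return page
```

At the end of the read, the combined `page` holds the data gathered across all iterations, mirroring how the command reassembles the entire page.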
If the control logic determines at operation 445 that the read temperature does not meet the read temperature threshold criteria, or at operation 435 that the program/erase cycle count does not meet the cycle threshold criteria, then at operation 450 a read operation is performed. For example, the control logic may perform the read operation at operation 450 without calibrating the read voltage level, and may use a default read voltage level or the default read voltage level modified by the read voltage offset determined at operation 425.

FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions is executable for causing the machine to perform any one or more of the methods discussed herein. In some embodiments, computer system 500 may correspond to a host system (e.g., host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1), or may be used to perform operations of a controller (e.g., to execute an operating system to perform operations corresponding to local media controller 135 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a network device, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequentially or otherwise) that specify actions to be taken by that machine.
Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or collectively execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 518, which communicate with each other via a bus 530.

Processing device 502 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More specifically, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 502 may also be one or more special-purpose processing devices, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 may also include a network interface device 508 to communicate over a network 520.

The data storage system 518 may include a machine-readable storage medium 524 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 526 or software embodying any one or more of the methodologies or functions described herein.
Instructions 526 may also reside, completely or at least partially, within main memory 504 and/or within processing device 502 during execution thereof by computer system 500, the main memory 504 and the processing device 502 also constituting machine-readable storage media. Machine-readable storage medium 524, data storage system 518, and/or main memory 504 may correspond to memory subsystem 110 of FIG. 1.

In one embodiment, instructions 526 include instructions to implement functionality corresponding to local media controller 135 of FIG. 1. While machine-readable storage medium 524 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" shall be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated.
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, compact disks, CD-ROMs, and magneto-optical disks), read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the described methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language.
It should be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that can be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium, such as read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It should be apparent that various modifications can be made to the present disclosure without departing from the broader spirit and scope of embodiments of the present disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
One aspect of the present invention relates to a method of forming an advanced low k material between metal lines on a semiconductor substrate, involving the steps of providing the semiconductor substrate having a plurality of metal lines thereon; depositing a spin-on material over the semiconductor substrate having the plurality of metal lines thereon; and at least one of heating or etching the semiconductor substrate whereby at least a portion of the spin-on material is removed, thereby forming the advanced low k material comprising at least one air void between the metal lines, the advanced low k material having a dielectric constant of about 2 or less.

Another aspect of the present invention relates to a method of forming a semiconductor structure, involving the steps of forming a first plurality of metal lines on the semiconductor structure; depositing a spin-on material over the semiconductor substrate having the plurality of metal lines thereon; forming a plurality of openings in the spin-on material exposing a portion of the metal lines and depositing metal to form a plurality of metal vias in the openings; forming a second plurality of metal lines over at least a portion of the metal vias; and at least one of heating or etching the semiconductor structure whereby at least a portion of the spin-on material is removed, thereby forming an advanced low k material comprising at least one air void, the advanced low k material having a dielectric constant of about 2 or less.
What is claimed is:

1. A method of forming an advanced low k material between metal lines on a semiconductor substrate, comprising: providing the semiconductor substrate having a plurality of metal lines thereon; depositing one layer of a spin-on material over the semiconductor substrate having the plurality of metal lines thereon, wherein the spin-on material is a low k polymer material; and pyrolizing the semiconductor substrate at a temperature from about 600° C. to about 2,000° C. for a time from about 2 seconds to about 10 minutes whereby at least a portion of the spin-on material is removed, thereby forming the advanced low k material comprising at least one air void between the metal lines, the advanced low k material having a porous structure and a dielectric constant of about 2 or less.

2. The method of claim 1, wherein the semiconductor substrate is pyrolized at a temperature from about 600° C. to about 1,500° C. for a time from about 10 seconds to about 5 minutes.

3. The method of claim 1, wherein the air void comprises an inert gas.

4. A method of forming a semiconductor structure, comprising: forming a first plurality of metal lines on the semiconductor structure; depositing one layer of a spin-on material over the semiconductor substrate having the plurality of metal lines thereon, wherein the spin-on material is a low k polymer material; forming a plurality of openings in the spin-on material exposing a portion of the metal lines and depositing metal to form a plurality of metal vias in the openings; forming a second plurality of metal lines over at least a portion of the metal vias; and pyrolizing the semiconductor structure at a temperature from about 600° C. to about 1,500° C.
whereby a portion of the spin-on material is removed, thereby forming a porous advanced low k material comprising a plurality of voids between the metal lines, the porous advanced low k material having a porous structure and a dielectric constant of about 1.75 or less.

5. The method of claim 4, wherein the low k polymer material comprises a polyimide, a fluorinated polyimide, a polysilsequioxane, a benzocyclobutene, a poly(arylene ester), parylene F, parylene N, or an amorphous polytetrafluoroethylene.

6. The method of claim 4, wherein the semiconductor structure is pyrolized at a temperature from about 700° C. to about 1,300° C. for a time from about 20 seconds to about 2 minutes.

7. The method of claim 4, wherein the first plurality of metal lines and the second plurality of metal lines comprise copper, tungsten, gold, silver, aluminum, and alloys thereof and a barrier layer comprising titanium nitride, tungsten, tantalum, titanium tungsten, tantalum silicon nitride, tungsten nitride, niobium, molybdenum, or combinations thereof.

8. The method of claim 4, wherein the advanced low k material has a dielectric constant of about 1.25 or less.
TECHNICAL FIELD

The present invention generally relates to processing a semiconductor substrate by employing low k dielectric materials. In particular, the present invention relates to forming an advanced low k material by partially or entirely removing another dielectric material.

BACKGROUND ART

An integrated circuit consists of electronic devices electrically coupled by conductive trace elements called interconnect lines (interconnects). Interconnects are patterned from layers of electrically conductive materials (e.g., metals such as aluminum and/or copper, doped polysilicon, etc.) formed on the surface of a silicon wafer. Multiple layers (or levels) of closely-spaced interconnects allow an increase in the density of devices formed on semiconductor wafers. Electrical separation of stacked interconnect layers is achieved by placing an electrically insulating material (i.e., an interlevel dielectric layer) between the vertically spaced interconnect layers. Multiple lines of closely-spaced interconnects on a single level also allow an increase in the density of devices formed on semiconductor wafers. Electrical separation of adjacent interconnect lines is achieved by placing an electrically insulating material (i.e., an innerlayer dielectric) between the conductive interconnect lines.

Many types of materials are employed as insulating materials. Examples include oxides, silicates, nitrides, low k materials, and air. These insulating materials have different properties and characteristics; thus, different insulating materials are used depending upon the requirements of a given environment. Although air lacks the structural integrity of oxides, silicates, nitrides, and low k materials, air is the cheapest and has the lowest dielectric constant (about 1). Therefore, in many instances it is desirable to employ air as an insulating material.
The requirement of structural integrity, however, limits the extent to which air is employed in semiconductor manufacturing.

In very large scale integrated (VLSI) circuit devices, several wiring layers each containing numerous interconnect lines are often required to connect together the active and/or passive elements in a VLSI semiconductor chip. The interconnection structure typically consists of thin conductive lines separated by insulation in one layer or level and connected through vias or studs from contacts of the elements of the semiconductor chip or to a similar layer in another level of interconnections. With the trend to higher and higher levels of integration in semiconductor devices to ultra large scale integrated (ULSI) circuits, the space or gap between the wires or conductive lines to be filled with insulation is becoming extremely narrow, such as about 0.18 microns and smaller. In addition, when the height of the conductive lines is increased, it is more difficult to fill gaps between the lines, especially when the aspect ratio is 2 to 1 or greater with a gap distance of 0.25 microns or smaller.

In order to satisfy increasingly higher density requirements, the dimensions of integrated circuits are continuously reduced and, hence, the line widths of the conductors decreased into the submicron range. While there is a trend for conductors to become narrower and narrower, there is also a trend for the spaces between conductors to become narrower and narrower. As a result, there is an increasing and unmet need for high performance insulation materials.

SUMMARY OF THE INVENTION

The present invention provides an advanced low k material by partially or entirely removing another dielectric material, thereby providing improved insulation in semiconductor devices.
The advanced low k material of the present invention provides excellent insulation between metal lines (as an innerlayer dielectric) and between metal layers (as an interlevel dielectric).

One aspect of the present invention relates to a method of forming an advanced low k material between metal lines on a semiconductor substrate, involving the steps of providing the semiconductor substrate having a plurality of metal lines thereon; depositing a spin-on material over the semiconductor substrate having the plurality of metal lines thereon; and at least one of heating or etching the semiconductor substrate whereby at least a portion of the spin-on material is removed, thereby forming the advanced low k material comprising at least one air void between the metal lines, the advanced low k material having a dielectric constant of about 2 or less.

Another aspect of the present invention relates to a method of forming a semiconductor structure, involving the steps of forming a first plurality of metal lines on the semiconductor structure; depositing a spin-on material over the semiconductor substrate having the plurality of metal lines thereon; forming a plurality of openings in the spin-on material exposing a portion of the metal lines and depositing metal to form a plurality of metal vias in the openings; forming a second plurality of metal lines over at least a portion of the metal vias; and at least one of heating or etching the semiconductor structure whereby at least a portion of the spin-on material is removed, thereby forming an advanced low k material comprising at least one air void, the advanced low k material having a dielectric constant of about 2 or less.

Yet another aspect of the present invention relates to a method of forming a semiconductor structure, involving the steps of forming a first plurality of metal lines on the semiconductor structure; depositing a spin-on material over the semiconductor substrate having the plurality of metal lines thereon,
wherein the spin-on material is a silicate or a low k polymer material; forming a plurality of openings in the spin-on material exposing a portion of the metal lines and depositing metal to form a plurality of metal vias in the openings; forming a second plurality of metal lines over at least a portion of the metal vias; and heating the semiconductor structure whereby a portion of the spin-on material is removed, thereby forming a porous advanced low k material comprising a plurality of voids, the porous advanced low k material having a dielectric constant of about 1.75 or less.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 illustrates a cross-sectional view of a semiconductor substrate having a plurality of metal lines thereon according to one aspect of the present invention.

FIG. 2 illustrates a cross-sectional view of a semiconductor substrate according to one aspect of the present invention.

FIG. 3 illustrates a cross-sectional view of a semiconductor substrate according to one aspect of the present invention.

FIG. 4 illustrates a cross-sectional view of a semiconductor substrate having a plurality of metal lines and vias thereon according to one aspect of the present invention.

FIG. 5 illustrates a cross-sectional view of a semiconductor substrate having a first and second plurality of metal lines thereon according to one aspect of the present invention.

FIG. 6A illustrates a cross-sectional view of a semiconductor substrate having an advanced low k material thereon according to one aspect of the present invention.

FIG. 6B illustrates a cross-sectional view of a semiconductor substrate having an advanced low k material thereon according to another aspect of the present invention.

FIG. 7 illustrates a cross-sectional view of an advanced low k material according to one aspect of the present invention.

DISCLOSURE OF INVENTION

The present invention involves providing an advanced low k material between metal lines and/or between metal layers by partially or entirely removing another dielectric material. The dielectric constant of the advanced low k material of the present invention approaches 1; thus, the advanced low k material provides excellent insulation enabling increased integration on a semiconductor wafer.

The advanced low k material is made by initially depositing a spin-on material onto the semiconductor structure. The spin-on material is partially removable, or substantially or entirely removable, as will be discussed below. Spin-on materials include silicates and low k polymer materials. Silicates include fluorine doped silicon glass (FSG), tetraethylorthosilicate (TEOS), phosphosilicate glass (PSG), borophosphosilicate glass (BPSG), or any other suitable spin-on glass.

Low k polymer materials include one or more of polyimides, fluorinated polyimides, polysilsequioxane, benzocyclobutene (BCB), poly(arylene ester), parylene F, parylene N and amorphous polytetrafluoroethylene. Specific examples of commercially available low k materials include those under the trade designations Flare(TM) from AlliedSignal, believed to be derived from perfluorobiphenyl and aromatic bisphenols; Black Diamond(TM) from Applied Materials; ALCAP-S from Asahi Chemical; SiLK(R) and Cyclotene(R) BCB from Dow Chemical; Teflon(R) polytetrafluoroethylene from DuPont; XLK and 3MS from Dow Corning; HSG RZ25 from Hitachi Chemical; HOSP(TM) and Nanoglass(TM) from Honeywell Electronic Materials; LKD from JSR Microelectronics; CORAL(TM) and AF4 from Novellus; mesoporous silica from Battelle PNNL; and VeloX(TM) PAE-2 from Schumacher.

The thickness of the spin-on material varies, and is not critical to the present invention.
In one embodiment, the thickness of the spin-on material is from about 1,000 Å to about 30,000 Å (the thickness determined by the distance from the substrate surface (and not from the top of any features on the substrate) to the top of the spin-on material). In another embodiment, the thickness of the spin-on material is from about 2,500 Å to about 20,000 Å. In yet another embodiment, the thickness of the spin-on material is from about 5,000 Å to about 15,000 Å.Once the spin-on material is deposited onto the semiconductor structure, the structure may be optionally baked to drive off excess casting solvent, if appropriate. The spin-on material formed on the semiconductor structure is subjected to a removal treatment; that is, it is at least partially removed. It is noted that further processing may be conducted on the semiconductor structure after the spin-on material is initially formed on the semiconductor structure and before the removal treatment (such as the formation of metal structures or semiconductor devices).The removal treatment involves removing at least a portion of the spin-on material and leaving air or a vacuum in the space occupied by the removed material using any suitable means. The removal treatment typically involves at least one of etching and heating the spin-on material clad semiconductor structure. In embodiments where the spin-on material is substantially or entirely removed from the semiconductor structure, an air void or vacuum is formed in the space. In embodiments where the spin-on material is partially removed, air voids or vacuum pockets are interdispersed throughout the remaining portion of the spin-on material (basically forming a porous structure or "spongy" type material). This is illustrated in FIG. 7 and discussed later. 
Since the dielectric constant of air/vacuum is 1, an excellent advanced low k material is provided by the present invention.Etching involves contacting the spin-on material clad semiconductor structure with a material that selectively dissolves the spin-on material while not substantially damaging or deleteriously affecting other features such as metal structures. The specific etchant depends upon the specific identity of the spin-on material. The etching conditions employed can be determined by one skilled in the art.Dry or wet etching techniques may be employed. Wet etch techniques involve using an acid or organic solvent. Acids include hydrofluoric acid, phosphoric acid, hydrobromic acid, and boric acid. In one embodiment, a dilute acid solution is employed to (at least partially) remove spin-on material. Organic solvents include alcohols, ketones, esters, ethers, aromatic compounds, alkanes, and the like.Dry etch techniques typically involve using a plasma containing fluorine or chlorine compounds, such as one or more of BCl3, CCl4, SiCl4, O2, Cl2, HBr, NF3, SF6, CH3F, CF4 and CHF3. One or more inert gases may be included in the plasma. In one embodiment, the spin-on material is partially etched using an isotropic etching process.Heating involves exposing the spin-on material clad semiconductor structure to elevated temperatures so that the spin-on material is at least partially removed (degraded, pyrolized, denatured, etc.) wherein other structures on the semiconductor structure are not substantially damaged or affected by the heat treatment. Heating in one embodiment involves controlled pyrolysis whereby the spin-on material is pyrolized, but other structures on the semiconductor structure are not substantially damaged.The temperature the spin-on material clad semiconductor structure is exposed to primarily depends upon the length of time of the heat treatment. 
In one embodiment, the spin-on material clad semiconductor structure is exposed to temperatures from about 500[deg.] C. to about 2,000[deg.] C. In another embodiment, the spin-on material clad semiconductor structure is exposed to temperatures from about 600[deg.] C. to about 1,500[deg.] C. In yet another embodiment, the spin-on material clad semiconductor structure is exposed to temperatures from about 700[deg.] C. to about 1,300[deg.] C.The length of time the spin-on material clad semiconductor structure is exposed to elevated temperatures primarily depends upon the temperature employed. In one embodiment, the spin-on material clad semiconductor structure is exposed to elevated temperatures from about 2 seconds to about 10 minutes. In another embodiment, the spin-on material clad semiconductor structure is exposed to elevated temperatures from about 10 seconds to about 5 minutes. In yet another embodiment, the spin-on material clad semiconductor structure is exposed to elevated temperatures from about 20 seconds to about 2 minutes.Any suitable atmosphere may be employed during exposure to elevated temperatures. For example, the heat treatment may take place in air, in a vacuum, in an inert atmosphere (comprising one or more inert gases such as one or more of nitrogen, helium, neon, argon, krypton and xenon), and the like. In instances where a vacuum is employed, an exhaust connected to the heating chamber may facilitate removal of particles or remnants of the spin-on material.The resultant advanced low k materials of the present invention have a dielectric constant approaching 1. In one embodiment, the advanced low k materials of the present invention have a dielectric constant of about 2 or less. In another embodiment, the advanced low k materials of the present invention have a dielectric constant of about 1.75 or less. In yet another embodiment, the advanced low k materials of the present invention have a dielectric constant of about 1.5 or less. 
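The progressively lower dielectric constants recited above track the fraction of the spin-on material that is removed. As a hedged illustration only (the disclosure itself specifies no mixing model), a simple linear volume-average over the air/vacuum fraction f and the residual spin-on material suggests:

```latex
k_{\mathrm{eff}} \approx f \cdot k_{\mathrm{air}} + (1 - f)\,k_{\mathrm{spin\text{-}on}}, \qquad k_{\mathrm{air}} = 1
```

For example, assuming a TEOS-like residual with k ≈ 4 and 75% of the material removed, this gives k_eff ≈ 0.75(1) + 0.25(4) = 1.75, consistent with the "about 1.75 or less" embodiment; as f approaches 1 (substantially complete removal), k_eff approaches the dielectric constant of air/vacuum, 1.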
In still yet another embodiment, the advanced low k materials of the present invention have a dielectric constant of about 1.25 or less.FIGS. 1-6 illustrate two embodiments of the methods of the present invention. The method of FIGS. 1-6 may be used with any suitable semiconductor technology including but not limited to NMOS, PMOS, CMOS, BiCMOS, bipolar, multi-chip modules (MCM) and III-V semiconductors.Referring to FIG. 1, a structure 10 is provided containing a semiconductor substrate 12 having a plurality of metal lines 14 thereon. An optional conformal barrier layer 16 covers the structure 10. With regard to the description in connection with the embodiments of FIGS. 1-6, the term substrate includes not only a semiconductor substrate, such as semiconductor substrate 12, but also any and all layers and structures fabricated over the semiconductor substrate up to the point of processing under discussion. For example, semiconductor substrate 12 may include one or more structures such as active elements and passive elements including polysilicon gates, wordlines, source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive plugs, diffusion regions, etc.Metal lines 14 are formed over the substrate 12 by initially forming a conductive material layer, and then using lithography techniques to pattern the layer. The metal lines 14 may be made of any suitable conductive material or materials. Examples of suitable conductive materials include copper, tungsten, gold, silver, aluminum, any alloys and/or combinations thereof. In this embodiment, the conductive material is copper or a copper alloy. The metal lines 14 may be formed to any suitable thickness using any suitable technique. Any two given metal lines 14 may be spaced apart with as little as about 0.25 [mu]m, about 0.18 [mu]m, about 0.15 [mu]m, about 0.12 [mu]m, and even about 0.1 [mu]m, space therebetween. 
Likewise, the width of metal lines 14 may be as little as about 0.25 [mu]m, about 0.18 [mu]m, about 0.15 [mu]m, about 0.12 [mu]m, and even about 0.1 [mu]m, or less.Use of the optional barrier layer 16 depends upon the identity of the conductive material of the metal lines 14. The barrier layer 16 may serve as a diffusion barrier layer preventing conductive material of the metal lines 14 from diffusing into other regions of the structure 10, especially dielectric material regions. The barrier layer 16 is formed over the substrate so that it covers the metal lines 14. The barrier layer 16 may be made of any suitable conductive material or materials. Examples of suitable conductive materials for the barrier layer include titanium nitride, tungsten, tantalum, titanium tungsten, tantalum silicon nitride, tungsten nitride, niobium and molybdenum and combinations thereof. The barrier layer 16 may be formed using any suitable technique to a thickness sufficient to serve as a diffusion barrier for metal lines 14. For example, the thickness of the barrier layer 16 may be in the range from about 100 Å to about 1,500 Å.In a preferred embodiment, the barrier layer 16 is made of titanium nitride and the metal lines 14 are made of copper or a copper alloy (such as a copper-aluminum alloy). Titanium nitride serves as a diffusion barrier for copper, preventing copper from diffusing into other regions of the structure 10 such as a dielectric layer. In embodiments where the metal lines 14 contain copper, use of a barrier layer is preferred, especially where silicon dioxide is present somewhere on the structure 10. The barrier layer 16 and the metal lines 14 may be initially deposited using CVD techniques or physical vapor deposition (PVD) techniques.Referring to FIG. 2, a spin-on material layer 18, containing a spin-on material such as a silicate or a low k polymer material, is deposited on the structure 10, and particularly over the barrier layer 16, to any suitable thickness. 
The spin-on material layer 18 is formed using spin coating techniques. Optionally the structure is then baked to at least one of drive off excess casting solvent and improve adhesion to the underlying surface. After the spin-on material layer 18 is formed, a photoresist layer 20 is formed thereover. Any suitable photoresist material may be employed for the photoresist layer 20.Referring to FIG. 3, the photoresist layer 20 is patterned to form openings 22 for subsequently forming contacts. A portion of the spin-on material layer 18 is exposed in openings 22 as a result of the patterning. Patterning involves irradiating a portion of the photoresist layer 20 using a mask, and removing or developing the irradiated or non-irradiated portions of the photoresist layer 20. Openings 22 may have a larger width, the same width, or have a smaller width than the metal lines 14. The width of openings 22 may be as little as about 0.25 [mu]m, about 0.18 [mu]m, about 0.15 [mu]m, about 0.12 [mu]m, and even about 0.1 [mu]m, or less.Referring to FIG. 4, the photoresist layer 20 is used as a mask to etch or selectively remove the exposed portions of the spin-on material layer 18 thereby exposing portions of the barrier layer 16 in openings 22. An anisotropic etch process is preferred. The exposed portions of the barrier layer 16 are then etched or selectively removed thereby exposing portions of the metal lines 14 in openings 22. Again, an anisotropic etch process is preferred. A via 24 made of a conductive material is formed in opening 22. The conductive material may be deposited over the entire structure 10, followed by chemical mechanical polishing (CMP) to planarize the structure 10. Examples of suitable conductive materials for the vias 24 include copper, tungsten, gold, silver, aluminum, any alloys and/or combinations thereof. 
The photoresist layer 20 is removed or stripped during etching of the spin-on material layer 18, during etching of the barrier layer 16, or as a separate step.Referring to FIG. 5, a second plurality of metal lines 26 are formed on the structure 10, preferably over the vias 24. An optional conformal barrier layer 28 covers the plurality of metal lines 26. The materials that form the metal lines 26 and barrier layer 28, the manner in which they are made, the spacing between metal lines, and the widths and size of features are the same as for metal lines 14 and barrier layer 16 discussed above.Referring to FIGS. 6A and 6B, two alternative embodiments of forming the advanced low k material of the present invention are explained in more detail. Specifically referring to FIG. 6A, the spin-on material layer 18 is substantially or completely removed from the structure 10. As a result, the advanced low k material 30 is formed around the metal lines 14, 26 and vias 24. The advanced low k material 30 primarily contains air, thus it has a dielectric constant approaching 1. The spin-on material layer 18 is removed from the structure 10 without substantially damaging the metal lines 14, 26 and vias 24 (that is, without corroding, cracking, or breaking the metal lines 14, 26 and vias 24). As examples wherein the spin-on material layer 18 contains a silicate, removal is achieved using an acid solution or a halocarbon plasma. As examples wherein the spin-on material layer 18 contains a low k polymer material, removal is achieved using pyrolysis, an oxygen containing plasma, an acid solution or an organic solvent.Specifically referring to FIG. 6B, the spin-on material layer 18 is partially removed from the structure 10. As a result, the advanced low k material 32 is formed around the metal lines 14, 26 and vias 24. The advanced low k material 32 primarily contains air or vacuum and the remaining portion of the spin-on material, thus it has a dielectric constant approaching 1. 
The spin-on material layer 18 is partially removed from the structure 10 without substantially damaging the metal lines 14, 26 and vias 24 (that is, without corroding, cracking, or breaking the metal lines 14, 26 and vias 24). Partial removal of the spin-on material layer 18 means that air voids or vacuum pockets are formed and interdispersed throughout the remaining portion of the spin-on material thereby forming a porous structure of the advanced low k material 32. The porous structure of the advanced low k material 32 is illustrated in FIG. 7. FIG. 7 shows a plurality of voids and the remaining portion of the spin-on material.As examples wherein the spin-on material layer 18 contains a silicate, partial removal is achieved using pyrolysis, a dilute acid solution or a hydrocarbon plasma.As examples wherein the spin-on material layer 18 contains a low k polymer material, partial removal is achieved using a halocarbon containing plasma, a dilute acid solution or a semi-miscible organic solvent.In both FIGS. 6A and 6B, the advanced low k material 30 and the advanced low k material 32 simultaneously serve as an interlevel dielectric layer electrically insulating vertically spaced layers and as an innerlayer dielectric electrically insulating horizontally adjacent interconnect lines. However, the advanced low k material 30 and/or the advanced low k material 32 may serve as one of an interlevel dielectric layer and an innerlayer dielectric.Although the invention has been shown and described with respect to a certain preferred embodiment or embodiments, it is obvious that equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. 
In particular regard to the various functions performed by the above described components (assemblies, devices, circuits, etc.), the terms (including any reference to a "means") used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary embodiments of the invention. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several embodiments, such feature may be combined with one or more other features of the other embodiments as may be desired and advantageous for any given or particular application.
Aspects of the disclosure are related to a method for verifying whether a message was digitally signed by a user. The example method comprises: receiving a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with the user; applying a hash scheme to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result; determining whether the hash result satisfies one or more criteria; determining whether the public key is associated with the user based on the determination of whether the hash result satisfies the one or more criteria; and verifying a digital signature of the message with the public key.
ClaimsWhat is claimed is:1. A method for verifying whether a message was digitally signed by a user, comprising:receiving a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with a user;applying a hash scheme to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result;determining whether the hash result satisfies one or more criteria;determining whether the public key is associated with the user based on the determination of whether the hash result satisfies one or more criteria; andverifying a digital signature of the message with the public key.2. The method of claim 1, further comprising:determining that the public key is associated with the user in response to determining that the hash result satisfies the one or more criteria.3. The method of claim 1, further comprising:verifying that the message was digitally signed by the user in response to determining that the public key is associated with the user and successfully verifying the digital signature of the message with the public key.4. The method of claim 1, wherein the plaintext identification information is personally- identifiable identification information comprising at least one of: a name, a government identification number, a passport number, a social security number, a bank account number, an insurance policy number, an employee identification number, a user ID, a password, or an e-mail address.5. The method of claim 1, wherein the hash scheme specifies: 1) a number of hash operations, 2) a hash algorithm used in each hash operation, and 3) when or how the public key and the plaintext identification information are combined.6. The method of claim 1, wherein a level of trust accorded to the public key is determined based on a level of strictness of the criteria.7. 
The method of claim 1, wherein the criteria comprise a requirement that the hash result, when represented in decimal form, end with a personally-identifiable identification number associated with the user.8. The method of claim 7, wherein the criteria comprise a requirement that the hash result, when represented in decimal form, end with the personally-identifiable identification number preceded by one or more leading "0"s, wherein a level of trust accorded to the public key is determined based on a number of the leading "0"s.9. An apparatus for verifying whether a message was digitally signed by a user, comprising: a memory;a processor coupled to the memory, the processor configured to:receive a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with a user;apply a hash scheme to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result;determine whether the hash result satisfies one or more criteria;determine whether the public key is associated with the user based on the determination of whether the hash result satisfies one or more criteria; andverify a digital signature of the message with the public key.10. The apparatus of claim 9, wherein the processor is further configured to:determine that the public key is associated with the user in response to determining that the hash result satisfies the one or more criteria.11. The apparatus of claim 9, wherein the processor is further configured to:verify that the message was digitally signed by the user in response to determining that the public key is associated with the user and successfully verifying the digital signature of the message with the public key.12. 
The apparatus of claim 9, wherein the plaintext identification information is personally- identifiable identification information comprising at least one of: a name, a government identification number, a passport number, a social security number, a bank account number, an insurance policy number, an employee identification number, a user ID, a password, or an e-mail address.13. The apparatus of claim 9, wherein the hash scheme specifies: 1) a number of hash operations, 2) a hash algorithm used in each hash operation, and 3) when or how the public key and the plaintext identification information are combined.14. The apparatus of claim 9, wherein a level of trust accorded to the public key is determined based on a level of strictness of the criteria.15. The apparatus of claim 9, wherein the criteria comprise a requirement that the hash result, when represented in decimal form, end with a personally-identifiable identification number associated with the user.16. The apparatus of claim 15, wherein the criteria comprise a requirement that the hash result, when represented in decimal form, end with the personally-identifiable identification number preceded by one or more leading "0"s, wherein a level of trust accorded to the public key is determined based on a number of the leading "0"s.17. 
An apparatus for verifying whether a message was digitally signed by a user, comprising:means for receiving a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with a user;means for applying a hash scheme to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result;means for determining whether the hash result satisfies one or more criteria; means for determining whether the public key is associated with the user based on the determination of whether the hash result satisfies one or more criteria; andmeans for verifying a digital signature of the message with the public key.18. The apparatus of claim 17, further comprising:means for determining that the public key is associated with the user in response to determining that the hash result satisfies the one or more criteria.19. The apparatus of claim 17, further comprising:means for verifying that the message was digitally signed by the user in response to determining that the public key is associated with the user and successfully verifying the digital signature of the message with the public key.20. The apparatus of claim 17, wherein the hash scheme specifies: 1) a number of hash operations, 2) a hash algorithm used in each hash operation, and 3) when or how the public key and the plaintext identification information are combined.21. The apparatus of claim 17, wherein a level of trust accorded to the public key is determined based on a level of strictness of the criteria.22. 
A non-transitory computer-readable medium comprising code which, when executed by a processor, causes the processor to perform a method for verifying whether a message was digitally signed by a user comprising:receiving a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with a user;applying a hash scheme to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result;determining whether the hash result satisfies one or more criteria;determining whether the public key is associated with the user based on the determination of whether the hash result satisfies one or more criteria; andverifying a digital signature of the message with the public key.23. The non-transitory computer-readable medium of claim 22, further comprising:code for determining that the public key is associated with the user in response to determining that the hash result satisfies the one or more criteria.24. The non-transitory computer-readable medium of claim 22, further comprising:code for verifying that the message was digitally signed by the user in response to determining that the public key is associated with the user and successfully verifying the digital signature of the message with the public key.25. The non-transitory computer-readable medium of claim 22, wherein the hash scheme specifies: 1) a number of hash operations, 2) a hash algorithm used in each hash operation, and 3) when or how the public key and the plaintext identification information are combined.26. The non-transitory computer-readable medium of claim 22, wherein a level of trust accorded to the public key is determined based on a level of strictness of the criteria.
PROOF OF POSSESSION BASED USER IDENTIFICATION SYSTEMCross-Reference to Related Application[0001] This application claims the benefit of priority from U.S. Patent Application Serial No. 14/683,006, filed April 9, 2015, entitled, "PROOF OF WORK BASED USER INDENTIFICATION SYSTEM," which is herein incorporated by reference.Field[0002] The subject matter disclosed herein relates to public/private key pairs, and more particularly to methods, apparatuses, and systems for discovering public/private key pairs linked to personally-identifiable plaintext identification information without relying on a certificate authority.Background[0003] A public-key signature scheme employs two mathematically linked keys - a public key and a private key. A user may generate a public/private key pair with relative ease, publish the public key, and keep the private key secret. The user may digitally sign messages by generating digital signatures of the messages using the private key. Anyone with the public key may verify the authenticity of a message by verifying the message and the associated digital signature using the public key.[0004] People may receive identification numbers, such as a passport number, an identification card number, a social security number, a bank account number, or an insurance policy number, etc., from government agencies and other authorities. Similarly, employees may receive identification information, such as an employee number, or an employee user ID, etc., from employers. However, it is uncommon for government agencies, various authorities, or employers to issue public/private key pairs usable in a public-key signature scheme.Summary[0005] Aspects of the disclosure are related to a device for determining whether a message was digitally signed by a user. 
The device performs operations comprising: receiving a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with the user; applying a hash scheme to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result; determining whether the hash result satisfies one or more criteria; determining whether the public key is associated with the user based on the determination of whether the hash result satisfies the one or more criteria; and verifying a digital signature of the message with the public key.Brief Description of the Drawings[0006] FIG. 1 is a diagram illustrating an embodiment of a device with which embodiments of the disclosure may be practiced.[0007] FIG. 2 is a diagram illustrating example elements involved in the generation of a hash result.[0008] FIG. 3 is a flowchart illustrating an example method for testing a public/private key pair to determine whether the key pair is a conforming key pair.[0009] FIG. 4 is a flowchart illustrating an example method for determining whether a message was digitally signed by a user.Detailed Description[0010] Embodiments of the disclosure are related to a system and method for allowing a user to search for and discover a public/private key pair usable in a public-key signature scheme based on plaintext identification information. The owner of the public key may be verified based on the public key and the plaintext identification information that is associated with the owner without relying on a certificate authority. The user may use the private key to sign messages. Messages thus signed, which can be verified with the public key, carry increased reliability with respect to the identity of the author. For example, a user may discover a public/private key pair based on her passport number. The discovered key pair is therefore linked to her passport number. 
The user may then publish the public key and sign messages with the private key. The link between the passport number and the public key may be independently verified with relative ease with a predetermined method without reliance upon a certificate authority. Thus, compared to an unsigned message, a signed message that can be verified with the public key is more likely to have come from the user to whom the passport number belongs. Embodiments of the disclosure are based on the assumption that an attacker is less likely than an honest user to expend time and computational resources on the creation of a public/private key pair based on the honest user's plaintext identification information, especially when the honest user is an ordinary person. Therefore, the link between a public key and the plaintext identification information may be verified by verifying the proof of work performed to discover the public key. [0011] A user may search for and discover a public/private key pair based on plaintext identification information by searching for a public/private key pair that, when combined with predetermined plaintext identification information and processed with a predetermined hash scheme, yields a hash result that satisfies one or more predetermined criteria, through a process of trial and error. The stricter the predetermined criteria required of the hash result are, the more work (e.g., time and/or computational resources) is required to discover the public/private key pair through the process of trial and error, and the higher the level of trust that may be accorded to the public key.[0012] An example device 100 is illustrated in FIG. 1. The device 100 is shown comprising hardware elements that can be electrically coupled via a bus 105 (or may otherwise be in communication, as appropriate). 
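The trial-and-error search described above can be sketched as follows. The concrete choices here are illustrative assumptions, not the disclosure's prescribed scheme: a single SHA-256 pass over the concatenation of the candidate public key and the plaintext ID, with the criterion (cf. claim 7) that the decimal form of the hash ends with the ID digits. Random byte strings stand in for the public keys of real candidate key pairs.

```python
import hashlib
import secrets

def hash_result(public_key: bytes, plaintext_id: str) -> int:
    # Assumed hash scheme: one SHA-256 pass over the concatenation.
    # The disclosure leaves the number of passes, the algorithm, and
    # the combination rule to the predetermined scheme.
    return int(hashlib.sha256(public_key + plaintext_id.encode()).hexdigest(), 16)

def satisfies_criteria(result: int, plaintext_id: str) -> bool:
    # Assumed criterion: the decimal form of the hash result ends
    # with the user's identification number.
    return str(result).endswith(plaintext_id)

def discover_key(plaintext_id: str, max_tries: int = 1_000_000):
    # Trial and error: in practice each candidate would be a freshly
    # generated public/private key pair of the signature scheme;
    # random bytes stand in for the public key here.
    for _ in range(max_tries):
        candidate = secrets.token_bytes(32)
        if satisfies_criteria(hash_result(candidate, plaintext_id), plaintext_id):
            return candidate
    return None
```

A short toy ID such as "42" succeeds after roughly a hundred tries on average, while each additional required digit (or each additional required leading zero) multiplies the expected work by about ten; that tunable cost is exactly the proof-of-work knob the disclosure relies on.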
The hardware elements may include one or more processors 110, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration processors, and/or the like); one or more input/output devices 115, and can further include without limitation a mouse, a keyboard, a speaker, a printer, and/or the like.[0013] The device 100 may further include (and/or be in communication with) one or more non-transitory storage devices 125, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including without limitation, various file systems, database structures, and/or the like.[0014] The device 100 might also include a communication subsystem 130, which can include without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device and/or chipset (such as a Bluetooth device, an 802.11 device, a Wi-Fi device, a WiMAX device, cellular communication facilities, etc.), and/or the like. The communications subsystem 130 may permit data to be exchanged with a network, other computer systems/devices, and/or any other devices described herein. 
In some embodiments, the device 100 may further comprise a working memory 135, which can include a RAM or ROM device, as described above.[0015] The device 100 also can comprise software elements, shown as being currently located within the working memory 135, including an operating system 140, device drivers, executable libraries, and/or other code, such as one or more application programs 145, which may comprise or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed below might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.[0016] A set of these instructions and/or code might be stored on a non-transitory computer- readable storage medium, such as the storage device(s) 125 described above. In some cases, the storage medium might be incorporated within a computer device, such as the device 100. In other embodiments, the storage medium might be separate from a computer device (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure, and/or adapt a general purpose computer with the instructions/code stored thereon. 
These instructions might take the form of executable code, which is executable by the computerized device 100 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the device 100 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.), then takes the form of executable code.[0017] It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.[0018] Referring to FIG. 2, an illustration 200 of elements involved in the generation of a hash result is shown. A public key 210 under test and predetermined plaintext identification information 220 are processed with a hash scheme 230 to yield a hash result 240. In other words, a hash scheme 230 is applied to a combination of the public key 210 under test and the predetermined plaintext identification information 220 to yield the hash result 240. The public key 210 is mathematically linked to a private key (not shown). For each public key 210 tested, it is determined whether the hash result 240 satisfies one or more predetermined criteria. If the hash result 240 satisfies the one or more predetermined criteria, the public key 210 and its associated private key are a successfully discovered public/private key pair. A successfully discovered public/private key pair may be referred to hereinafter as a conforming key pair, and the public key a conforming public key. [0019] The public/private key pair comprising the public key 210 under test may be a key pair of any suitable public-key signature scheme.
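The test performed on each public key 210, as illustrated in FIG. 2, can be sketched as follows. This is a minimal illustration and not the patented method itself: SHA-256 stands in for the predetermined hash scheme 230, and a required decimal suffix stands in for the one or more predetermined criteria.

```python
import hashlib

def hash_result(public_key: bytes, id_info: bytes) -> str:
    # Apply the (predetermined) hash scheme to a combination of the public
    # key 210 under test and the plaintext identification information 220.
    # SHA-256 is a stand-in; any scheme defined per paragraph [0021] works.
    digest = hashlib.sha256(public_key + id_info).digest()
    # Represent the hash result 240 in decimal form.
    return str(int.from_bytes(digest, "big"))

def is_conforming(public_key: bytes, id_info: bytes, suffix: str) -> bool:
    # One possible predetermined criterion: the decimal hash result must
    # end with a required string of digits.
    return hash_result(public_key, id_info).endswith(suffix)
```

The same function is used both by the searcher (over many candidate keys) and by any verifier (once, on the published key), which is what makes the proof of work independently checkable.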
Non-limiting examples of public-key signature schemes include RSA, the Digital Signature Algorithm (DSA), the Elliptic Curve Digital Signature Algorithm (ECDSA), or the ElGamal signature scheme, etc. Other suitable public-key signature schemes may also be utilized.[0020] The plaintext identification information 220 may include plaintext information that is personally identifiable and clearly associated with a user and may include one or more of a name, a government identification number, a passport number, a social security number, a bank account number, an insurance policy number, an employee identification number, a user ID, a password, or an e-mail address, etc., or any combination thereof. The list is illustrative and does not limit the disclosure.[0021] The hash scheme 230 defines the following: 1) the number of hash operations that are performed, 2) the hash algorithm used in each hash operation, and 3) when and how the plaintext identification information 220 is combined with either the public key 210 under test or with an intermediate hash result. The hash algorithms may include, but are not limited to, BLAKE-256, BLAKE-512, ECOH, FSB, GOST, Grøstl, HAS-160, HAVAL, JH, MD2, MD4, MD5, MD6, RadioGatún, RIPEMD, RIPEMD-128, RIPEMD-160, RIPEMD-320, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-3, Scrypt, Skein, SipHash, Snefru, Spectral Hash, SWIFFT, Tiger, or Whirlpool, etc., or any combination thereof. Any other suitable secure hash algorithms may also be utilized.[0022] As non-limiting examples, a piece of plaintext identification information 220 may be combined with either the public key 210 under test or with an intermediate hash result by 1) concatenating the former to the beginning of the latter, 2) concatenating the former to the end of the latter, or 3) inserting the former into the latter at a particular location, or any combination thereof.
These examples do not limit the disclosure, and other suitable methods for combining the plaintext identification information 220 with either the public key 210 under test or with an intermediate hash result may be utilized.[0023] A hash scheme 230 that utilizes multiple pieces of plaintext identification information 220 may provide additional linkage between these pieces of plaintext identification information 220. For example, in one embodiment, a passport number and the passport holder's name may both be used in the hash operations. Thus, to verify a conforming public key, three pieces of information are required: the public key, the passport number, and the passport holder's name. [0024] In one embodiment, for example, the hash scheme 230 may specify that: 1) two hash operations are to be performed; 2) the RIPEMD-160 hash algorithm is to be used in both hash operations; and 3) the employee number is to be concatenated to the end of the public key 210 under test, and the employee user ID is to be concatenated to the beginning of the intermediate hash result after the first hash operation.[0025] Furthermore, as a non-limiting example, the one or more predetermined criteria may require that the hash result 240 include a particular predetermined string of characters/digits at a particular predetermined location. For example, the criteria may require that the hash result 240, when represented in decimal form, contain the string of digits "0000" at the beginning, or contain the string of digits "11111" at the end, or contain the string of digits "222222" after the 5th digit, etc. Any suitable criteria may be utilized.[0026] In some embodiments, the hash result 240 may be required to contain plaintext identification information associated with the user. 
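The specific scheme of paragraph [0024] can be written out as below. RIPEMD-160 availability in Python's hashlib depends on the underlying OpenSSL build, so this sketch falls back to SHA-256 purely so that it runs everywhere; the structure is the point — two hash operations, with the employee number appended to the public key before the first and the user ID prepended to the intermediate result before the second.

```python
import hashlib

def _hash_op(data: bytes) -> bytes:
    # Paragraph [0024] specifies RIPEMD-160; it may be absent from some
    # OpenSSL builds, in which case SHA-256 is used as a stand-in.
    try:
        h = hashlib.new("ripemd160")
    except ValueError:
        h = hashlib.sha256()
    h.update(data)
    return h.digest()

def scheme_hash(public_key: bytes, employee_number: str, user_id: str) -> bytes:
    # Operation 1: employee number concatenated to the END of the public key.
    intermediate = _hash_op(public_key + employee_number.encode())
    # Operation 2: user ID concatenated to the BEGINNING of the intermediate
    # hash result.
    return _hash_op(user_id.encode() + intermediate)
```

Because both pieces of identification information enter the hash chain, a verifier needs the public key, the employee number, and the user ID to reproduce the hash result.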
Any type of plaintext identification information may be utilized, and the plaintext identification information required here may be the same as, or may be different from, the plaintext identification information 220 used in the generation of the hash result 240. For example, in one embodiment, the predetermined criteria may require that the hash result 240, when represented in decimal form, end with the user's employee number.[0027] It should be appreciated that even if the hash result 240 is required by the predetermined criteria to contain some plaintext identification information associated with the user, the hash scheme 230 should still require that at least one piece of plaintext identification information 220 be combined with either the public key 210 under test or with an intermediate hash result. Otherwise, an attacker may perform a generalized search against all plaintext identification information, that is, she may generate test public/private key pairs, perform one or more hash operations on the test public keys according to the hash scheme, and look for valid plaintext identification information for any person in the hash results. In this way, although the attacker cannot choose which person to attack, she may be able to discover a key pair that is associated with some person with relative ease, and fraudulently assume the identity of that person with the discovered key pair.[0028] As explained above, the strictness level of the predetermined criteria required of the hash result 240 indicates the level of trust that may be accorded to a conforming public key because it takes more work (e.g., time and/or computational resources) to find a hash result 240 that satisfies stricter criteria. In embodiments where the criteria specify that the hash result 240, when represented in decimal form, must contain one or more predetermined strings of digits, the criteria may be made stricter by increasing the length of the required strings of digits. 
For example, the odds of finding a four-digit number in the hash result at a particular location with 100,000 hash tries are approximately 99.995%. Finding the same four-digit number preceded by a single leading "0" with the same number of hash tries has reduced odds of approximately 63.21%. Further, finding the same four-digit number preceded by four leading "0"s with the same number of hash tries has further reduced odds of approximately 0.1%. In one embodiment where the hash result 240 is required to end with the employee number, the criteria may be made stricter by requiring, for example, that the hash result 240 end with the employee number preceded by one or more leading "0"s (or followed by trailing "0"s, or preceded by leading "1"s, etc.). Therefore, for example, a conforming public key that yields a hash result that, when represented in decimal form, ends with the 5-digit employee number preceded by a single leading "0" may provide sufficient trust to be used to verify a signed document for a small transaction of an ordinary person.
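The quoted odds follow from the standard formula for at least one success in n independent trials, where a k-digit decimal string at a fixed location matches with probability 10^-k per try:

```python
def discovery_odds(required_digits: int, tries: int) -> float:
    # Probability that at least one of `tries` independent hash attempts
    # yields the required string of decimal digits at the fixed location:
    # 1 - (1 - 10**-k) ** n.
    p = 10.0 ** -required_digits
    return 1.0 - (1.0 - p) ** tries

# 4 required digits, 100,000 tries -> ~99.995%
# 5 required digits (one leading "0" added) -> ~63.21%
# 8 required digits (four leading "0"s added) -> ~0.1%
```

Each additional required digit divides the per-try success probability by ten, which is why lengthening the required string is a simple dial for the strictness, and hence the trust level, of the criteria.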
On the other hand, for a large transaction, or a transaction involving a celebrity or state leader, only a conforming public key that yields a hash result that, when represented in decimal form, ends with a 5-digit predetermined identification number preceded by 5 or even 10 leading "0"s may provide sufficient trust.[0029] It should be appreciated that a third party intending to verify a conforming public key by verifying the proof of work, and to verify signed messages with the public key, needs to have knowledge about the plaintext identification information 220 used, the hash scheme 230 used, and the criteria required of the hash result 240.[0030] As an example, the public-key signature scheme used may be ECDSA; the hash scheme 230 may specify that: 1) two hash operations are to be performed; 2) the RIPEMD-160 hash algorithm is to be used in both hash operations; and 3) the employee number is to be concatenated to the end of the public key 210 under test, and the employee user ID is to be concatenated to the beginning of the intermediate hash result after the first hash operation. The predetermined criteria require that the hash result 240, when represented in decimal form, end with the employee number preceded by a single leading "0". A search for a conforming public/private key pair based on a fictional employee with an employee number of 98519 and a user ID of lwittgenstein was conducted on a typical personal computer. Therefore, the objective was to find a hash result that, when represented in decimal form, ends in "098519."
The search lasted 49 hours and involved 598,000 hash operations, and the following conforming key pair and hash result were discovered successfully:

Private key: fabc983bbc5517238e4bee428bb463c7e43659d67f6e35195f36d77d859d969a

Public key: 0415db820e066c530cb2d87c34f2da1c15ec04f752beec4dd7bedd03ed1184ea69e4a40f36a087d28450db4740082367a911642f8b6d8cc0487ad778903cb405da

Hash result: 03d6f7e9759a5b3c6f4323d42b9a803e56106217

Hash result in decimal form: 21920927961694848995018985472371397010845098519

A hypothetical three-week search using the same computer may yield, with a reasonable chance, a conforming key pair associated with a hash result that, when represented in decimal form, ends with the employee number preceded by 2 leading "0"s.[0031] Referring to FIG. 3, a flowchart illustrating an example method 300 for testing a public/private key pair to determine whether the key pair is a conforming key pair implemented with the device 100 is shown. At block 310, a public/private key pair of a public-key signature scheme may be generated, the public/private key pair comprising a public key and a private key. At block 320, a predetermined hash scheme may be applied to a combination of the public key and one or more pieces of plaintext identification information associated with a user, the hash scheme yielding a hash result. At block 330, it may be determined whether the hash result satisfies one or more predetermined criteria. At block 340, it may be determined whether the public/private key pair under test is a conforming key pair based on the determination of whether the hash result satisfies the one or more predetermined criteria. It may be determined that the public/private key pair is a conforming key pair in response to determining that the hash result satisfies the one or more predetermined criteria, and vice versa.[0032] Referring to FIG.
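The trial-and-error loop of method 300 can be sketched as below. Key generation here is a stand-in: a real search would derive an ECDSA (or other scheme) public key from each fresh random private key, and the hash scheme and criteria would be the predetermined ones; SHA-256 and a decimal-suffix criterion are used for illustration only.

```python
import hashlib
import os

def search_conforming_key(id_info: bytes, suffix: str, max_tries: int):
    # Trial-and-error search for a conforming key pair (FIG. 3): generate a
    # candidate key pair, hash it together with the plaintext identification
    # information, and keep the first pair whose decimal hash result ends
    # with the required suffix.
    for _ in range(max_tries):
        private_key = os.urandom(32)
        # Stand-in key derivation; a real search would compute the ECDSA
        # public key for this private key.
        public_key = b"\x04" + hashlib.sha256(private_key).digest()
        decimal = str(int.from_bytes(
            hashlib.sha256(public_key + id_info).digest(), "big"))
        if decimal.endswith(suffix):
            return private_key, public_key
    return None  # search budget exhausted without a conforming pair
```

With a one-digit suffix each try succeeds with probability roughly 1/10, so a few dozen tries usually suffice; the 49-hour search above targeted the six-digit suffix "098519", for which each try succeeds with probability about 10^-6.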
4, a flowchart illustrating an example method 400 for determining whether a message was digitally signed by a user implemented with the device 100 is shown. At block 410, a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with the user may be received. At block 420, a predetermined hash scheme may be applied to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result. At block 430, it may be determined whether the hash result satisfies one or more predetermined criteria. At block 440, it may be determined whether the public key is associated with the user based on the determination of whether the hash result satisfies the one or more predetermined criteria. It may be determined that the public key is associated with the user in response to determining that the hash result satisfies the one or more predetermined criteria, and vice versa. At block 450, a digital signature of the message may be verified with the public key based on the public-key signature scheme. Further, it may be determined whether the message was signed by the user based on the determination of whether the public key is associated with the user and the verification of the digital signature. It may be determined that the message was signed by the user when the public key is determined to be associated with the user and the verification of the digital signature of the message is successful.
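The two checks of method 400 can be composed as follows. This is an illustrative sketch: SHA-256 and a decimal-suffix criterion stand in for the predetermined hash scheme and criteria, and the signature check is delegated to a caller-supplied verifier for whatever public-key signature scheme (ECDSA, RSA, DSA, ...) is in use.

```python
import hashlib
from typing import Callable

def message_signed_by_user(message: bytes, signature: bytes, public_key: bytes,
                           id_info: bytes, suffix: str,
                           verify_signature: Callable[[bytes, bytes, bytes], bool]) -> bool:
    # Blocks 420-440: proof-of-work check linking the public key to the
    # user's plaintext identification information.
    decimal = str(int.from_bytes(
        hashlib.sha256(public_key + id_info).digest(), "big"))
    if not decimal.endswith(suffix):
        return False  # public key not associated with the user
    # Block 450: verify the digital signature with the public key, using the
    # scheme-specific verifier supplied by the caller.
    return verify_signature(public_key, message, signature)
```

The message is attributed to the user only when both checks pass; failing either one yields the determination that the message was not signed by the user.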
Otherwise, it may be determined that the message was not signed by the user.[0033] Furthermore, embodiments of the disclosure are related to an apparatus 100 comprising a memory 135, and a processor 110 coupled to the memory 135, the processor configured to: receive a public key of a public-key signature scheme and one or more pieces of plaintext identification information associated with a user; apply a hash scheme to a combination of the public key and the one or more pieces of plaintext identification information, the hash scheme yielding a hash result; determine whether the hash result satisfies one or more predetermined criteria; determine whether the public key is associated with the user based on the determination of whether the hash result satisfies the one or more predetermined criteria; and verify a digital signature of a message with the public key.[0034] The processor 110 may determine that the public key is associated with the user in response to determining that the hash result satisfies the one or more predetermined criteria, and vice versa. Furthermore, the processor 110 may determine that the message was signed by the user when the public key is determined to be associated with the user and the verification of the digital signature of the message is successful. Otherwise, the processor 110 may determine that the message was not signed by the user.[0035] Therefore, by utilizing embodiments of the disclosure, a user may search for and discover a public/private key pair that is linked to one or more pieces of plaintext identification information associated with the user. The trial-and-error process involves testing public keys by applying a hash scheme to the public key under test and determining whether the hash result satisfies one or more predetermined criteria. The user may publish the conforming public key found and sign messages with the private key.
A recipient of the message in possession of the public key, with knowledge about the plaintext identification information, the hash scheme, and the criteria, may independently verify that the public key is linked to the user by verifying the proof of work, and may verify that the user is the author of the signed message using the public key. No certificate authority is required either for the discovery of the public/private key pair or for the verification of the public key. [0036] For example, based on an agreed-upon hash scheme and hash result criteria, a user may discover a public/private key pair based on a username and a password maintained with a bank, sign an instruction to the bank to pay a cell phone bill with the private key, and transmit the signed instruction and the public key to the bank. The bank may then verify that the public key is linked to the user based on the username, the password, the hash scheme, and the hash result criteria, and verify that the instruction has been signed by the user by verifying the digital signature of the instruction using the public key. In this way, the bank may have more confidence in believing that the instruction has come from the user than with an unsigned instruction. As a further example, based on an agreed-upon hash scheme and hash result criteria, a user may discover a public/private key pair based on her name and her driver's license number, sign an instruction to the Department of Motor Vehicles (DMV) to update her address, and transmit the signed instruction and the public key to the DMV. The DMV may then verify that the public key is linked to the user based on the name, the driver's license number, the hash scheme, and the hash result criteria, and verify that the instruction has been signed by the user by verifying the digital signature of the instruction using the public key.
In this way, the DMV may have more confidence in believing that the instruction has come from the user than with an unsigned instruction.[0037] It should be appreciated that aspects of the disclosure previously described may be implemented in conjunction with the execution of instructions (e.g., applications) by processor 110 of device 100, as previously described. Particularly, circuitry of the device, including but not limited to the processor, may operate under the control of an application, program, routine, or the execution of instructions to execute methods or processes in accordance with embodiments of the disclosure (e.g., the processes of FIGs. 3 and 4). For example, such a program may be implemented in firmware or software (e.g., stored in memory and/or other locations) and may be implemented by processors and/or other circuitry of the devices. Further, it should be appreciated that the terms processor, microprocessor, circuitry, controller, etc., refer to any type of logic or circuitry capable of executing logic, commands, instructions, software, firmware, functionality, etc.[0038] Methods described herein may be implemented in conjunction with various wireless communication networks such as a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. The terms "network" and "system" are often used interchangeably. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and so on. Cdma2000 includes the IS-95, IS-2000, and IS-856 standards.
A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named "3rd Generation Partnership Project" (3GPP). Cdma2000 is described in documents from a consortium named "3rd Generation Partnership Project 2" (3GPP2). 3GPP and 3GPP2 documents are publicly available. A WLAN may be an IEEE 802.11x network, and a WPAN may be a Bluetooth network, an IEEE 802.15x network, or some other type of network. The techniques may also be implemented in conjunction with any combination of WWAN, WLAN, and/or WPAN.[0039] Example methods, apparatuses, or articles of manufacture presented herein may be implemented, in whole or in part, for use in or with mobile communication devices. As used herein, "mobile device," "mobile communication device," "hand-held device," "tablets," etc., or the plural form of such terms, may be used interchangeably and may refer to any kind of special purpose computing platform or device that may communicate through wireless transmission or receipt of information over suitable communications networks according to one or more communication protocols, and that may from time to time have a position or location that changes. As a way of illustration, special purpose mobile communication devices may include, for example, cellular telephones, satellite telephones, smart telephones, heat map or radio map generation tools or devices, observed signal parameter generation tools or devices, personal digital assistants (PDAs), laptop computers, personal entertainment systems, e-book readers, tablet personal computers (PC), personal audio or video devices, personal navigation units, wearable devices, or the like.
It should be appreciated, however, that these are merely illustrative examples relating to mobile devices that may be utilized to facilitate or support one or more processes or operations described herein.[0040] The methodologies described herein may be implemented in different ways and with different configurations depending upon the particular application. For example, such methodologies may be implemented in hardware, firmware, software, and/or combinations thereof. In a hardware implementation, for example, a processing unit may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other device units designed to perform the functions described herein, and/or combinations thereof.[0041] The herein described storage media may comprise primary, secondary, and/or tertiary storage media. Primary storage media may include memory such as random access memory and/or read-only memory, for example. Secondary storage media may include mass storage such as a magnetic or solid state hard drive. Tertiary storage media may include removable storage media such as a magnetic or optical disk, a magnetic tape, a solid state storage device, etc. In certain implementations, the storage media or portions thereof may be operatively receptive of, or otherwise configurable to couple to, other components of a computing platform, such as a processor.[0042] In at least some implementations, one or more portions of the herein described storage media may store signals representative of data and/or information as expressed by a particular state of the storage media.
For example, an electronic signal representative of data and/or information may be "stored" in a portion of the storage media (e.g., memory) by affecting or changing the state of such portions of the storage media to represent data and/or information as binary information (e.g., ones and zeros). As such, in a particular implementation, such a change of state of the portion of the storage media to store a signal representative of data and/or information constitutes a transformation of storage media to a different state or thing.[0043] In the preceding detailed description, numerous specific details have been set forth to provide a thorough understanding of claimed subject matter. However, it will be understood by those skilled in the art that claimed subject matter may be practiced without these specific details. In other instances, methods and apparatuses that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.[0044] Some portions of the preceding detailed description have been presented in terms of algorithms or symbolic representations of operations on binary digital electronic signals stored within a memory of a specific apparatus or special purpose computing device or platform. In the context of this particular specification, the term specific apparatus or the like includes a general purpose computer once it is programmed to perform particular functions pursuant to instructions from program software. Algorithmic descriptions or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing or related arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations or similar signal processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities.
Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, or otherwise manipulated as electronic signals representing information. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals, information, or the like. It should be understood, however, that all of these or similar terms are to be associated with appropriate physical quantities and are merely convenient labels.[0045] Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout this specification discussions utilizing terms such as "processing," "computing," "calculating," "identifying," "determining," "establishing," "obtaining," and/or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic computing device. In the context of this specification, therefore, a special purpose computer or a similar special purpose electronic computing device is capable of manipulating or transforming signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special purpose computer or similar special purpose electronic computing device.
In the context of this particular patent application, the term "specific apparatus" may include a general purpose computer once it is programmed to perform particular functions pursuant to instructions from program software.[0046] Reference throughout this specification to "one example", "an example", "certain examples", or "exemplary implementation" means that a particular feature, structure, or characteristic described in connection with the feature and/or example may be included in at least one feature and/or example of claimed subject matter. Thus, the appearances of the phrase "in one example", "an example", "in certain examples" or "in some implementations" or other like phrases in various places throughout this specification are not necessarily all referring to the same feature, example, and/or limitation. Furthermore, the particular features, structures, or characteristics may be combined in one or more examples and/or features.[0047] While there has been illustrated and described what are presently considered to be example features, it will be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from claimed subject matter. Additionally, many modifications may be made to adapt a particular situation to the teachings of claimed subject matter without departing from the central concept described herein. Therefore, it is intended that claimed subject matter not be limited to the particular examples disclosed, but that such claimed subject matter may also include all aspects falling within the scope of appended claims, and equivalents thereof.
A computing platform comprising a plurality of disaggregated data center resources and an infrastructure processing unit (IPU), communicatively coupled to the plurality of resources, to compose a platform of the plurality of disaggregated data center resources for allocation of a microservices cluster.
What is claimed is:

1. An apparatus comprising: a plurality of disaggregated data center resources; and infrastructure processing unit (IPU) circuitry, communicatively coupled to the plurality of resources, to compose a platform of the plurality of disaggregated data center resources for allocation of a microservices cluster.

2. The apparatus of claim 1, further comprising an orchestration controller, communicatively coupled to the IPU circuitry, to compose the platform via the IPU circuitry during a provisioning phase.

3. The apparatus of claim 2, wherein the orchestration controller schedules a microservice at one or more of the disaggregated data center resources based on resource requirements provided by the microservice.

4. The apparatus of claim 3, wherein the IPU circuitry discovers and performs management of the plurality of disaggregated data center resources.

5. The apparatus of claim 4, wherein the IPU circuitry reports information associated with each of the plurality of disaggregated data center resources to the orchestration controller.

6. The apparatus of claim 5, wherein the IPU circuitry authenticates and attests the plurality of disaggregated data center resources.

7. The apparatus of claim 6, wherein the IPU circuitry establishes a communication session with each of the plurality of disaggregated data center resources.

8. The apparatus of claim 3, wherein the IPU circuitry receives a configuration file including configuration information from the orchestration controller during a scheduling process.

9. The apparatus of claim 8, wherein the IPU circuitry exposes a virtualized resource endpoint at a disaggregated data center resource.

10. The apparatus of claim 9, wherein the IPU circuitry transmits a message to the orchestration controller indicating that the disaggregated data center resource has been composed and receives a specification for an execution environment for a microservice from the orchestration controller.

11.
The apparatus of claim 10, wherein the IPU circuitry retrieves one or more images associated with the configuration information included in the configuration file from a registry and transfers the one or more images to the disaggregated data center resource.

12. A method comprising: performing provisioning at infrastructure processing unit (IPU) circuitry to compose a plurality of disaggregated data center resources for allocation of a microservices cluster; and performing orchestration to compose one or more of the disaggregated data center resources via the IPU circuitry based on resource requirements provided by a microservice.

13. The method of claim 12, wherein performing the provisioning comprises the IPU circuitry discovering and managing the plurality of disaggregated data center resources.

14. The method of claim 13, wherein performing the provisioning further comprises the IPU circuitry reporting information associated with each of the plurality of disaggregated data center resources to an orchestration controller.

15. The method of claim 14, wherein performing the provisioning further comprises: the IPU circuitry authenticating the plurality of disaggregated data center resources; the IPU circuitry attesting the plurality of disaggregated data center resources; and the IPU circuitry establishing a communication session with each of the plurality of disaggregated data center resources.

16. The method of claim 12, wherein performing the orchestration comprises scheduling a microservice at one or more of the disaggregated data center resources via the IPU circuitry based on resource requirements provided by the microservice.

17.
The method of claim 16, wherein performing the orchestration further comprises: the IPU circuitry receiving a configuration file including configuration information from an orchestration controller; transmitting a message to the orchestration controller indicating that a disaggregated data center resource has been composed; and receiving a specification for an execution environment for a microservice from the orchestration controller.

18. The method of claim 17, wherein performing the orchestration further comprises: the IPU circuitry retrieving one or more images associated with the configuration information included in the configuration file from a registry; and transferring the one or more images to a disaggregated data center resource.

19. Infrastructure processing unit (IPU) circuitry, comprising: resource management circuitry communicatively coupled to a plurality of disaggregated data center resources; and coordination circuitry communicatively coupled to an orchestration controller to compose a platform of the plurality of disaggregated data center resources for allocation of a microservices cluster.

20. The IPU circuitry of claim 19, wherein the resource management circuitry discovers and performs management of the plurality of disaggregated data center resources.

21. The IPU circuitry of claim 20, wherein the resource management circuitry reports information associated with each of the plurality of disaggregated data center resources to the orchestration controller.

22. The IPU circuitry of claim 21, wherein the resource management circuitry establishes a communication session with each of the plurality of disaggregated data center resources.

23. The IPU circuitry of claim 22, wherein the coordination circuitry receives a configuration file including configuration information from the orchestration controller during a scheduling process.

24.
At least one computer readable medium having instructions stored thereon, which when executed by one or more processors, cause the processors to: perform provisioning at infrastructure processing unit (IPU) circuitry to compose a plurality of disaggregated data center resources for allocation of a microservices cluster; and perform orchestration to compose one or more of the disaggregated data center resources via the IPU circuitry based on resource requirements provided by a microservice.

25. The computer readable medium of claim 24, wherein performing the provisioning comprises discovering and managing the plurality of disaggregated data center resources.
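The provisioning flow recited in the claims (discovery, authentication, attestation, session establishment, and reporting to the orchestration controller) can be sketched as a minimal Python model. This is an illustrative simplification only: the class and method names (`IpuCircuitry`, `OrchestrationController`, etc.) are hypothetical placeholders, not elements of the claimed apparatus.

```python
class IpuCircuitry:
    """Illustrative stand-in for the claimed IPU circuitry (names hypothetical)."""
    def __init__(self):
        self.sessions = []

    def discover(self, resource):
        # Placeholder for discovering a disaggregated resource on the fabric.
        return resource

    def authenticate_and_attest(self, resource):
        # Placeholder for authentication and attestation of the resource.
        return True

    def open_session(self, resource):
        # Establish a communication session with the resource.
        self.sessions.append(resource)

    def inventory(self):
        # Information about managed resources, reported upward.
        return list(self.sessions)


class OrchestrationController:
    def __init__(self):
        self.reports = []

    def receive_report(self, inv):
        self.reports.append(inv)


# Provisioning phase: the IPU composes the disaggregated resources,
# then reports them to the orchestration controller.
ipu = IpuCircuitry()
controller = OrchestrationController()
for resource in ["cpu-sled", "gpu-sled", "storage-sled"]:
    ipu.discover(resource)
    if ipu.authenticate_and_attest(resource):
        ipu.open_session(resource)
controller.receive_report(ipu.inventory())
```

After provisioning, the controller holds an inventory of the composed resources and can schedule microservices against them based on their resource requirements.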
DYNAMIC MICROSERVICES ALLOCATION MECHANISM

RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Application No. 17/304,657, filed June 24, 2021, the entire contents of which are hereby incorporated by reference herein.

BACKGROUND

[0002] Modern computing devices may include general-purpose processor cores as well as a variety of hardware accelerators for offloading compute-intensive workloads or performing specialized tasks. Hardware accelerators may include, for example, one or more field-programmable gate arrays (FPGAs), which may include programmable digital logic resources that may be configured by the end user or system integrator. Hardware accelerators may also include one or more application-specific integrated circuits (ASICs). Hardware accelerators may be embodied as I/O devices that communicate with a processor core over an I/O interconnect. Additionally, hardware accelerators may include one or more graphics processing units (GPUs) implemented to process graphics data.

[0003] For efficient use of data center resources and to meet the demands of large computations, there is a trend toward disaggregated computing, in which the compute resources needed by a workload (e.g., central processing unit (CPU), accelerators, storage, etc.) may not be on the same physical platform but instead are connected over a network. This approach is enabled by vast improvements in network throughput and latency over the last several years. It provides several benefits to cloud service providers (CSPs), such as better resource utilization resulting in lower total cost of ownership, greater scalability, vendor flexibility, etc.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale.
Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

[0005] Figure 1 is a simplified block diagram of at least one embodiment of a computing device for secure I/O with an accelerator device;

[0006] Figure 2 is a simplified block diagram of at least one embodiment of an accelerator device of the computing device of Figure 1;

[0007] Figure 3 is a simplified block diagram of at least one embodiment of an environment of the computing device of Figures 1 and 2;

[0008] Figure 4 illustrates one embodiment of a system;

[0009] Figure 5 illustrates one embodiment of a data center;

[0010] Figure 6 illustrates one embodiment of a cluster;

[0011] Figure 7A illustrates a conventional platform;

[0012] Figure 7B illustrates one embodiment of a dynamically composed platform;

[0013] Figure 8 illustrates one embodiment of a data center platform;

[0014] Figure 9 illustrates one embodiment of an infrastructure processing unit;

[0015] Figure 10 is a flow diagram illustrating one embodiment of a cluster setup process;

[0016] Figure 11 is a flow diagram illustrating one embodiment of a process for composing a node; and

[0017] Figures 12A-12C illustrate embodiments of a platform during composing of a node.

DETAILED DESCRIPTION OF THE DRAWINGS

[0018] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail.
It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

[0019] References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one of A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

[0020] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
[0021] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.

[0022] Referring now to Figure 1, a computing device 100 for secure I/O with an accelerator device includes a processor 120 and an accelerator device 136, such as a field-programmable gate array (FPGA). In use, as described further below, a trusted execution environment (TEE) established by the processor 120 securely communicates data with the accelerator 136. Data may be transferred using memory-mapped I/O (MMIO) transactions or direct memory access (DMA) transactions. For example, the TEE may perform an MMIO write transaction that includes encrypted data, and the accelerator 136 decrypts the data and performs the write. As another example, the TEE may perform an MMIO read request transaction, and the accelerator 136 may read the requested data, encrypt the data, and perform an MMIO read response transaction that includes the encrypted data. As yet another example, the TEE may configure the accelerator 136 to perform a DMA operation, and the accelerator 136 performs a memory transfer, performs a cryptographic operation (i.e., encryption or decryption), and forwards the result. As described further below, the TEE and the accelerator 136 generate authentication tags (ATs) for the transferred data and may use those ATs to validate the transactions.
The computing device 100 may thus keep untrusted software of the computing device 100, such as the operating system or virtual machine monitor, outside of the trusted code base (TCB) of the TEE and the accelerator 136. Thus, the computing device 100 may secure data exchanged or otherwise processed by a TEE and an accelerator 136 from an owner of the computing device 100 (e.g., a cloud service provider) or other tenants of the computing device 100. Accordingly, the computing device 100 may improve security and performance for multi-tenant environments by allowing secure use of accelerator devices.

[0023] The computing device 100 may be embodied as any type of device capable of performing the functions described herein. For example, the computing device 100 may be embodied as, without limitation, a computer, a laptop computer, a tablet computer, a notebook computer, a mobile computing device, a smartphone, a wearable computing device, a multiprocessor system, a server, a workstation, and/or a consumer electronic device. As shown in Figure 1, the illustrative computing device 100 includes a processor 120, an I/O subsystem 124, a memory 130, and a data storage device 132. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 130, or portions thereof, may be incorporated in the processor 120 in some embodiments.

[0024] The processor 120 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 120 may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit.
As shown, the processor 120 illustratively includes secure enclave support 122, which allows the processor 120 to establish a trusted execution environment known as a secure enclave, in which executing code may be measured, verified, and/or otherwise determined to be authentic. Additionally, code and data included in the secure enclave may be encrypted or otherwise protected from being accessed by code executing outside of the secure enclave. For example, code and data included in the secure enclave may be protected by hardware protection mechanisms of the processor 120 while being executed or while being stored in certain protected cache memory of the processor 120. The code and data included in the secure enclave may be encrypted when stored in a shared cache or the main memory 130. The secure enclave support 122 may be embodied as a set of processor instruction extensions that allows the processor 120 to establish one or more secure enclaves in the memory 130. For example, the secure enclave support 122 may be embodied as Intel® Software Guard Extensions (SGX) technology.

[0025] The memory 130 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 130 may store various data and software used during operation of the computing device 100, such as operating systems, applications, programs, libraries, and drivers. As shown, the memory 130 may be communicatively coupled to the processor 120 via the I/O subsystem 124, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 120, the memory 130, and other components of the computing device 100.
For example, the I/O subsystem 124 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, sensor hubs, host controllers, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the memory 130 may be directly coupled to the processor 120, for example via an integrated memory controller hub. Additionally, in some embodiments, the I/O subsystem 124 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 120, the memory 130, the accelerator device 136, and/or other components of the computing device 100, on a single integrated circuit chip. Additionally, or alternatively, in some embodiments the processor 120 may include an integrated memory controller and a system agent, which may be embodied as a logic block in which data traffic from processor cores and I/O devices converges before being sent to the memory 130.

[0026] As shown, the I/O subsystem 124 includes a direct memory access (DMA) engine 126 and a memory-mapped I/O (MMIO) engine 128. The processor 120, including secure enclaves established with the secure enclave support 122, may communicate with the accelerator device 136 with one or more DMA transactions using the DMA engine 126 and/or with one or more MMIO transactions using the MMIO engine 128. The computing device 100 may include multiple DMA engines 126 and/or MMIO engines 128 for handling DMA and MMIO read/write transactions based on bandwidth between the processor 120 and the accelerator 136.
Although illustrated as being included in the I/O subsystem 124, it should be understood that in some embodiments the DMA engine 126 and/or the MMIO engine 128 may be included in other components of the computing device 100 (e.g., the processor 120, memory controller, or system agent), or in some embodiments may be embodied as separate components.

[0027] The data storage device 132 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, non-volatile flash memory, or other data storage devices. The computing device 100 may also include a communications subsystem 134, which may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing device 100 and other remote devices over a computer network (not shown). The communications subsystem 134 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, 3G, 4G LTE, etc.) to effect such communication.

[0028] The accelerator device 136 may be embodied as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a coprocessor, a GPU, or other digital logic device capable of performing accelerated functions (e.g., accelerated application functions, accelerated network functions, or other accelerated functions). Illustratively, the accelerator device 136 is an FPGA, which may be embodied as an integrated circuit including programmable digital logic resources that may be configured after manufacture. The FPGA may include, for example, a configurable array of logic blocks in communication over a configurable data interchange.
The accelerator device 136 may be coupled to the processor 120 via a high-speed connection interface such as a peripheral bus (e.g., a PCI Express bus) or an inter-processor interconnect (e.g., an in-die interconnect (IDI) or QuickPath Interconnect (QPI)), or via any other appropriate interconnect. The accelerator device 136 may receive data and/or commands for processing from the processor 120 and return results data to the processor 120 via DMA, MMIO, or other data transfer transactions.

[0029] As shown, the computing device 100 may further include one or more peripheral devices 138. The peripheral devices 138 may include any number of additional input/output devices, interface devices, hardware accelerators, and/or other peripheral devices. For example, in some embodiments, the peripheral devices 138 may include a touch screen, graphics circuitry, a graphical processing unit (GPU) and/or processor graphics, an audio device, a microphone, a camera, a keyboard, a mouse, a network interface, and/or other input/output devices, interface devices, and/or peripheral devices.

[0030] Referring now to Figure 2, an illustrative embodiment of a field-programmable gate array (FPGA) 200 is shown. As shown, the FPGA 200 is one potential embodiment of an accelerator device 136. The illustrative FPGA 200 includes a secure MMIO engine 202, a secure DMA engine 204, one or more accelerator functional units (AFUs) 206, and memory/registers 208. As described further below, the secure MMIO engine 202 and the secure DMA engine 204 perform in-line authenticated cryptographic operations on data transferred between the processor 120 (e.g., a secure enclave established by the processor) and the FPGA 200 (e.g., one or more AFUs 206). In some embodiments, the secure MMIO engine 202 and/or the secure DMA engine 204 may intercept, filter, or otherwise process data traffic on one or more cache-coherent interconnects, internal buses, or other interconnects of the FPGA 200.
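The authenticated-MMIO handshake described above (the TEE writes an authentication tag to an AT register, dispatches the encrypted payload, and the accelerator recomputes and compares the tag before committing the write) can be modeled with a short Python sketch. This is a toy simplification, not the disclosed hardware scheme: HMAC-SHA256 stands in for the AT computation, the encryption step is elided, and the class and method names are hypothetical.

```python
import hashlib
import hmac
import os


class MmioChannel:
    """Toy model of the TEE/accelerator authenticated-MMIO write flow
    (HMAC-SHA256 is an assumed stand-in for the AT scheme)."""

    def __init__(self, key: bytes):
        self.key = key
        self.at_register = b""  # AT register the TEE writes before dispatch
        self.memory = {}        # accelerator-side MMIO address space

    def _compute_at(self, addr: int, ciphertext: bytes) -> bytes:
        # Tag binds the address and the (already encrypted) payload.
        return hmac.new(self.key,
                        addr.to_bytes(8, "little") + ciphertext,
                        hashlib.sha256).digest()

    # --- TEE side -------------------------------------------------
    def tee_mmio_write(self, addr: int, ciphertext: bytes) -> None:
        # 1. TEE computes the AT and writes it to the accelerator's AT register.
        self.at_register = self._compute_at(addr, ciphertext)
        # 2. TEE dispatches the MMIO write with the encrypted payload.
        self._accel_handle_write(addr, ciphertext)

    # --- accelerator side -----------------------------------------
    def _accel_handle_write(self, addr: int, ciphertext: bytes) -> None:
        # Accelerator recomputes the AT; a mismatch drops the transaction.
        expected = self._compute_at(addr, ciphertext)
        if hmac.compare_digest(expected, self.at_register):
            self.memory[addr] = ciphertext  # commit the write
        # else: transaction dropped (a read would return a poisoned item)


chan = MmioChannel(key=os.urandom(32))
chan.tee_mmio_write(0x1000, b"encrypted-payload")
assert chan.memory[0x1000] == b"encrypted-payload"
```

A payload or address modified in flight would no longer match the tag in the AT register, so the accelerator-side check drops the write instead of committing it, mirroring the validation behavior described for the accelerator validator below.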
[0031] Each AFU 206 may be embodied as logic resources of the FPGA 200 that are configured to perform an acceleration task. Each AFU 206 may be associated with an application executed by the computing device 100 in a secure enclave or other trusted execution environment. Each AFU 206 may be configured or otherwise supplied by a tenant or other user of the computing device 100. For example, each AFU 206 may correspond to a bitstream image programmed to the FPGA 200. As described further below, data processed by each AFU 206, including data exchanged with the trusted execution environment, may be cryptographically protected from untrusted components of the computing device 100 (e.g., protected from software outside of the trusted code base of the tenant enclave). Each AFU 206 may access or otherwise process data stored in the memory/registers 208, which may be embodied as internal registers, cache, SRAM, storage, or other memory of the FPGA 200. In some embodiments, the memory 208 may also include external DRAM or other dedicated memory coupled to the FPGA 200.

[0032] Referring now to Figure 3, in an illustrative embodiment, the computing device 100 establishes an environment 300 during operation. The illustrative environment 300 includes a trusted execution environment (TEE) 302 and the accelerator 136. The TEE 302 further includes a host cryptographic engine 304, a transaction dispatcher 306, a host validator 308, and a direct memory access (DMA) manager 310. The accelerator 136 includes an accelerator cryptographic engine 312, an accelerator validator 314, a memory mapper 316, an authentication tag (AT) controller 318, and a DMA engine 320. The various components of the environment 300 may be embodied as hardware, firmware, software, or a combination thereof.
As such, in some embodiments, one or more of the components of the environment 300 may be embodied as circuitry or a collection of electrical devices (e.g., host cryptographic engine circuitry 304, transaction dispatcher circuitry 306, host validator circuitry 308, DMA manager circuitry 310, accelerator cryptographic engine circuitry 312, accelerator validator circuitry 314, memory mapper circuitry 316, AT controller circuitry 318, and/or DMA engine circuitry 320). It should be appreciated that, in such embodiments, one or more of the host cryptographic engine circuitry 304, the transaction dispatcher circuitry 306, the host validator circuitry 308, the DMA manager circuitry 310, the accelerator cryptographic engine circuitry 312, the accelerator validator circuitry 314, the memory mapper circuitry 316, the AT controller circuitry 318, and/or the DMA engine circuitry 320 may form a portion of the processor 120, the I/O subsystem 124, the accelerator 136, and/or other components of the computing device 100. Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another.

[0033] The TEE 302 may be embodied as a trusted execution environment of the computing device 100 that is authenticated and protected from unauthorized access using hardware support of the computing device 100, such as the secure enclave support 122 of the processor 120. Illustratively, the TEE 302 may be embodied as one or more secure enclaves established using Intel SGX technology. The TEE 302 may also include or otherwise interface with one or more drivers, libraries, or other components of the computing device 100 to interface with the accelerator 136.

[0034] The host cryptographic engine 304 is configured to generate an authentication tag (AT) based on a memory-mapped I/O (MMIO) transaction and to write that AT to an AT register of the accelerator 136.
For an MMIO write request, the host cryptographic engine 304 is further configured to encrypt a data item to generate an encrypted data item, and the AT is generated in response to encrypting the data item. For an MMIO read request, the AT is generated based on an address associated with the MMIO read request.

[0035] The transaction dispatcher 306 is configured to dispatch the memory-mapped I/O transaction (e.g., an MMIO write request or an MMIO read request) to the accelerator 136 after writing the calculated AT to the AT register. An MMIO write request may be dispatched with the encrypted data item.

[0036] The host validator 308 may be configured to verify that an MMIO write request succeeded in response to dispatching the MMIO write request. Verifying that the MMIO write request succeeded may include securely reading a status register of the accelerator 136, securely reading a value at the address of the MMIO write from the accelerator 136, or reading an AT register of the accelerator 136 that returns an AT value calculated by the accelerator 136, as described below. For MMIO read requests, the host validator 308 may be further configured to generate an AT based on an encrypted data item included in an MMIO read response dispatched from the accelerator 136; read a reported AT from a register of the accelerator 136; and determine whether the AT generated by the TEE 302 matches the AT reported by the accelerator 136. The host validator 308 may be further configured to indicate an error if those ATs do not match, which provides assurance that data was not modified on the way from the TEE 302 to the accelerator 136.

[0037] The accelerator cryptographic engine 312 is configured to perform a cryptographic operation associated with the MMIO transaction and to generate an AT based on the MMIO transaction in response to the MMIO transaction being dispatched.
For an MMIO write request, the cryptographic operation includes decrypting an encrypted data item received from the TEE 302 to generate a data item, and the AT is generated based on the encrypted data item. For an MMIO read request, the cryptographic operation includes encrypting a data item from a memory of the accelerator 136 to generate an encrypted data item, and the AT is generated based on that encrypted data item.

[0038] The accelerator validator 314 is configured to determine whether the AT written by the TEE 302 matches the AT determined by the accelerator 136. The accelerator validator 314 is further configured to drop the MMIO transaction if those ATs do not match. For MMIO read requests, the accelerator validator 314 may be configured to generate a poisoned AT in response to dropping the MMIO read request, and may be further configured to dispatch an MMIO read response with a poisoned data item to the TEE 302 in response to dropping the MMIO read request.

[0039] The memory mapper 316 is configured to commit the MMIO transaction in response to determining that the AT written by the TEE 302 matches the AT generated by the accelerator 136. For an MMIO write request, committing the transaction may include storing the data item in a memory of the accelerator 136. The memory mapper 316 may be further configured to set a status register to indicate success in response to storing the data item. For an MMIO read request, committing the transaction may include reading the data item at the address in the memory of the accelerator 136 and dispatching an MMIO read response with the encrypted data item to the TEE 302.

[0040] The DMA manager 310 is configured to securely write an initialization command to the accelerator 136 to initialize a secure DMA transfer. The DMA manager 310 is further configured to securely configure a descriptor indicative of a host memory buffer, an accelerator 136 buffer, and a transfer direction.
The transfer direction may be host to accelerator 136 or accelerator 136 to host. The DMA manager 310 is further configured to securely write a finalization command to the accelerator 136 to finalize an authentication tag (AT) for the secure DMA transfer. The initialization command, the descriptor, and the finalization command may each be securely written and/or configured with an MMIO write request. The DMA manager 310 may be further configured to determine whether to transfer additional data in response to securely configuring the descriptor, and the finalization command may be securely written in response to determining that no additional data remains for transfer.

[0041] The AT controller 318 is configured to initialize an AT in response to the initialization command from the TEE 302. The AT controller 318 is further configured to finalize the AT in response to the finalization command from the TEE 302.

[0042] The DMA engine 320 is configured to transfer data between the host memory buffer and the accelerator 136 buffer in response to the descriptor from the TEE 302. For a transfer from host to accelerator 136, transferring the data includes copying encrypted data from the host memory buffer and forwarding the plaintext data to the accelerator 136 buffer in response to decrypting the encrypted data. For a transfer from accelerator 136 to host, transferring the data includes copying plaintext data from the accelerator 136 buffer and forwarding encrypted data to the host memory buffer in response to encrypting the plaintext data.

[0043] The accelerator cryptographic engine 312 is configured to perform a cryptographic operation with the data in response to transferring the data and to update the AT in response to transferring the data. For a transfer from host to accelerator 136, performing the cryptographic operation includes decrypting encrypted data to generate plaintext data.
For a transfer from accelerator 136 to host, performing the cryptographic operation includes encrypting plaintext data to generate encrypted data.

[0044] The host validator 308 is configured to determine an expected AT based on the secure DMA transfer, to read the AT from the accelerator 136 in response to securely writing the finalization command, and to determine whether the AT from the accelerator 136 matches the expected AT. The host validator 308 may be further configured to indicate success if the ATs match and to indicate failure if the ATs do not match.

[0045] Figure 4 illustrates one embodiment of a system 400 having a computing device 420 employing a container orchestration controller (or controller) 410. In one embodiment, container orchestration enables automated deployment, configuration, coordination, and management of multi-container workloads in a containerized architecture. As shown in Figure 4, computing device 420 includes a host server computer serving as a host machine for employing controller 410 to facilitate provisioning of cluster life-cycles (e.g., public and private) accessible by customer organizations 421 via a platform as a service (PaaS) or infrastructure as a service (IaaS). Computing device 420 may include (without limitation) server computers (e.g., cloud server computers, etc.), desktop computers, cluster-based computers, set top boxes (e.g., Internet-based cable television set-top boxes, etc.), etc. Computing device 420 includes an operating system (“OS”) 406 serving as an interface between one or more hardware/physical resources of computing device 420 and one or more client devices 430A-430N, etc.
Computing device 420 further includes processor(s) 402, memory 404, and input/output (“I/O”) sources 408, such as touchscreens, touch panels, touch pads, virtual or regular keyboards, virtual or regular mice, etc.

[0046] In one embodiment, host organization 101 may further employ a production environment that is communicably interfaced with client devices 430A-N through host organization 101. Client devices 430A-N may include (without limitation) customer organization-based server computers, desktop computers, laptop computers, mobile computing devices, such as smartphones, tablet computers, personal digital assistants, e-readers, media Internet devices, smart televisions, television platforms, wearable devices (e.g., glasses, watches, bracelets, smartcards, jewelry, clothing items, etc.), media players, global positioning system-based navigation systems, cable setup boxes, etc.

[0047] In one embodiment, the illustrated database(s) 140 store (without limitation) information and underlying database records having customer and user data therein, to process data on behalf of customer organizations 421A-N. In some embodiments, host organization 101 receives input and other requests from a plurality of customer organizations 421A-N over one or more networks 435; for example, incoming data or other inputs may be received from customer organizations 421A-N to be processed using database system 140.

[0048] In one embodiment, each customer organization 421A-N is an entity selected from a group consisting of a separate and distinct remote organization, an organizational group within host organization 101, a business partner of host organization 101, a customer organization 421A-N that subscribes to cloud computing services provided by host organization 101, etc.

[0049] In one embodiment, requests are received at, or submitted to, a web server within host organization 101. Host organization 101 may receive a variety of requests for processing by host organization 101.
For example, incoming requests received at the web server may specify that services from host organization 101 are to be provided. Further, host organization 101 may implement a request interface via the web server or as a stand-alone interface to receive request packets or other requests from the client devices 430A-N. The request interface may further support the return of response packets or other replies and responses in an outgoing direction from host organization 101 to one or more client devices 430A-N.[0050] In one embodiment, computing device 420 may include a server computer that may be further in communication with one or more databases or storage repositories, such as database(s) 140, which may be located locally or remotely over one or more networks, such as network(s) 435 (e.g., cloud network, Internet, proximity network, intranet, Internet of Things (“IoT”), Cloud of Things (“CoT”), etc.). Computing device 420 is further shown to be in communication with any number and type of other computing devices, such as client computing devices 430A-N, over one or more networks, such as network(s) 435.[0051] In one embodiment, computing device 420 may serve as a service provider core for hosting and maintaining controller 410 as a SaaS or IaaS, and be in communication with one or more client computers 430A-N, over one or more network(s) 435, and any number and type of dedicated nodes. In such an embodiment, host organization 101 implements orchestration controller 410 to operate as a control plane during deployment and at runtime, to perform tasks such as carving out infrastructure resources needed for microservices to run and allocating the tasks to the different microservices based on their specific needs or adapting to different load conditions.[0052] Figure 5 illustrates one embodiment of a data center.
As shown in Figure 5, the data center configuration includes traditional servers, racks of FPGAs, GPUs and storage devices, all of which are connected by infrastructure processing units (IPUs). In one embodiment, IPUs comprise smart network interface cards (NICs) that not only perform traditional networking functions, but also have additional responsibilities in the control and management of infrastructure. Block 501 represents a single workload spanning disaggregated compute resources within the data center. As defined herein, a workload comprises services and resources (e.g., storage, network, compute, etc.) implemented to execute an application.[0053] Another major trend in computing has been the growth of microservices-based applications replacing monolithic applications. A microservice architecture defines loosely coupled services that collaborate to perform a larger function and are developed, deployed and managed independently. For ease of development, deployment and management of microservices, technologies such as Containers and Orchestrators (such as Kubernetes) are widely used.[0054] Figure 6 illustrates one embodiment of a Kubernetes cluster. Kubernetes provides a cluster management platform implemented for automating deployment, scaling, and operations of application containers across clusters of hosts. Kubernetes systems include various object types that define a set of primitives (e.g., containers, pods and clusters). Containers are packages that rely on virtual isolation to deploy and run applications that access a shared OS. Pods provide a higher level of abstraction that includes a group of containers that are guaranteed to be co-located on the same host machine to share resources. Containers within a pod can reference all other containers in the pod. A cluster includes two or more pods, in which each pod is assigned a unique pod identifier (ID).
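The container/pod/cluster object model just described can be made concrete with a small sketch. The class names and fields below are hypothetical simplifications of the Kubernetes primitives, intended only to illustrate the co-location and unique-pod-ID properties, not the real Kubernetes API objects.

```python
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class Container:
    image: str  # a package deployed against a shared OS via virtual isolation

@dataclass
class Pod:
    # Containers in a pod are co-located on one host machine and share
    # resources; each pod is assigned a unique pod identifier (ID).
    containers: List[Container] = field(default_factory=list)
    host: str = ""
    pod_id: str = field(default_factory=lambda: uuid.uuid4().hex)

@dataclass
class Cluster:
    # A cluster groups two or more pods.
    pods: List[Pod] = field(default_factory=list)

    def add_pod(self, pod: Pod, host: str) -> None:
        pod.host = host  # the whole pod lands on a single host machine
        self.pods.append(pod)
```

Placing the host on the pod, rather than on individual containers, encodes the co-location guarantee described above.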
Although described herein with regards to a Kubernetes system, other embodiments may feature an implementation of different types of container orchestration architectures (e.g., Docker, Mesos, etc.).[0055] Currently, an orchestrator has prior knowledge of available hardware resources through initial static provisioning steps, and upon demand carves out requested resources from a static pool of resources for use by a given microservice. Additionally, the orchestrator maintains a static inventory of worker machines (e.g., that were provisioned) and allocates the worker machines from a static pool whenever a microservice requests resources. However, multiple problems exist in disaggregated computing in which the compute resources are distributed, and the availability is dynamic.[0056] One problem is that current orchestrators cannot dynamically compose a platform of disaggregated hardware resources per customer requirement, or be provisioned to have knowledge of an available pool of resources (e.g., CPUs, GPUs, FPGAs, storage, memory), where the resources are located, how to allocate the resources, how to setup communications amongst the resources, etc. Another problem is that the orchestrator is not currently enabled to dynamically create a worker machine that is composed of disaggregated hardware resources as requested by a microservice. Figure 7A illustrates a conventional platform that an orchestrator statically composes.[0057] According to one embodiment, an infrastructure manager is provided to enable dynamic platform composition for allocation to a microservices cluster. In such an embodiment, the infrastructure manager dynamically constructs the platform during a provisioning phase via IPUs attached to the disaggregated resources. Dynamic composability enables a cloud service provider (CSP) to construct a platform on the fly based on available resources in a data center. Figure 7B illustrates one embodiment of a dynamically composed platform.
As shown in Figure 7B, the platform includes a mix and match of resources, as opposed to the fixed resources shown in Figure 7A.[0058] In a further embodiment, runtime orchestration by orchestration controller 110 enables dynamic composing/configuration of a worker node. In this embodiment, orchestration controller 110 schedules a microservice on a suitable worker node during deployment based on the worker node requirements provided by the microservice. In a further embodiment, a microservice includes a manifest file describing resource requirements (e.g., 4 GPUs, 2 CPU cores, 1 GB of storage, etc.). Thus, orchestration controller 110 may construct a worker node by combining network connected resources in many different ways, which provides enhanced flexibility to use the resources most efficiently. A worker node is defined as an infrastructure resource on which a microservice is operating.[0059] Figure 8 illustrates another embodiment of a platform 800 including orchestration controller 110, IPU 810 and a plurality of data center resources 850 (e.g., 850A - 850C). According to one embodiment, platform 800 comprises a microservice control plane and data plane. As used herein, a control plane refers to a combined role of the orchestration controller 110 and IPU 810 in performing resource discovery, worker node configuration, composition of resources, establishing routing and communication, etc., while the data plane refers to the movement of data between various resources in a cluster during runtime.[0060] In one embodiment, IPU 810 enables discovery of resources, and performs management, scheduling and configuration functions. Additionally, IPU 810 reports information associated with a resource (or the resource's information), such as type, capabilities, security features, availability etc., to a central infrastructure manager at orchestration controller 110.
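The manifest-driven matching described in paragraph [0058] (e.g., 4 GPUs, 2 CPU cores, 1 GB of storage) amounts to a capacity check against candidate worker nodes. The dictionary keys and node records below are assumptions for illustration; the disclosure does not define a manifest schema.

```python
def node_satisfies(manifest: dict, node: dict) -> bool:
    """True if the node offers at least every resource quantity the manifest asks for."""
    return all(node.get(resource, 0) >= amount for resource, amount in manifest.items())

# Hypothetical manifest mirroring the example requirements above.
manifest = {"gpus": 4, "cpu_cores": 2, "storage_gb": 1}

# Hypothetical worker-node inventory; field names are illustrative assumptions.
nodes = [
    {"name": "node-a", "gpus": 2, "cpu_cores": 8, "storage_gb": 100},
    {"name": "node-b", "gpus": 8, "cpu_cores": 4, "storage_gb": 50},
]

# Only nodes meeting every requirement are suitable scheduling targets.
suitable = [n["name"] for n in nodes if node_satisfies(manifest, n)]
```

Here only node-b satisfies the GPU requirement, so it is the sole scheduling candidate.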
As shown in Figure 8, IPU 810 includes coordination logic 812, resource manager 814, platform health logic 816, network 817, storage 818 and security engine 819.[0061] Coordination logic 812 provides coordination with orchestration controller 110. In one embodiment, coordination logic 812 coordinates resource discovery, allocation, scheduling, load balancing, performance management, etc. with orchestration controller 110. Resource manager 814 facilitates the management of resources at resources 850. Platform health logic 816 maintains platform health statistics (e.g., key performance indicators (KPIs), usage status, etc.) via monitoring and telemetry. Security engine 819 provides attestation for the platform (e.g., including IPU 810 and one or more resources 850).[0062] Figure 9 illustrates another embodiment of IPU 810. As shown in Figure 9, the security architecture of IPU 810 provides isolation of a customer’s control and data plane, via tenant security 910, from being accessed by infrastructure management 920. Additionally, the infrastructure management 920 control and data is protected from networking components associated with a tenant. In a further embodiment, IPU 810 includes a root of trust 930 that protects infrastructure management 920 to secure startup and attest to the entire platform 800 environment. IPU 810 also includes microservices orchestration 940 that provides for orchestration of resources 850. As a result, orchestration occurs at IPU 810, rather than at a CPU. In yet a further embodiment, microservices orchestration 940 may logically partition each resource 850 into sub-accelerators.[0063] Referring back to Figure 8, resources 850 provide acceleration resource services 856 (e.g., 856A - 856C), such as GPUs, CPUs, FPGAs, storage, etc. In one embodiment, resources 850 each include a telemetry engine 854 (e.g., 854A - 854C) to perform telemetry services to collect measurement data associated with the use of acceleration services 856.
Resources 850 also provide a standard set of interfaces to enable running microservices securely at arbitrary granularity and with QoS assurance. Thus, each resource 850 includes a security engine 852 (e.g., 852A - 852C) that provides for attestation to prove the authenticity and integrity of the resource 850. Additionally, security engine 852 creates a trusted isolation of arbitrary granularity to match the resources requested by a microservice, such as an acceleration service 856. Security engine 852 also facilitates trusted peer-to-peer communication to enable larger microservices that span resources 850.[0064] Figure 10 is a flow diagram illustrating one embodiment of a microservices cluster setup process. At processing block 1010, a cluster administrator introduces and provisions new resources in one or more clusters. In one embodiment, this process comprises setting up one or more resources (e.g., GPU, FPGA, CPU, storage, etc.) within a rack and interfacing the resources with IPU 810. At processing block 1020, IPU 810 discovers and enumerates the resources. In one embodiment, IPU 810 also authenticates and attests the resources via security engine 819 and a security engine 852 at the resources. In a further embodiment, IPU 810 sets up a long-term secure communication session with a manager at each resource 850 and assigns unique internet protocol (IP) address endpoints.[0065] At processing block 1030, a report of the resource capabilities, long-term secure communication sessions and IP address endpoints is transmitted to orchestration controller 410. Subsequently, orchestration controller 410 updates its state to reflect the presence of the new resources within the cluster. In one embodiment, orchestration controller 410 may have a network (e.g., out-of-band or in-band management) through which it works together with various IPUs 810 to track how many resources are in use, as well as their health.
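The discovery, attestation, and endpoint-assignment steps of Figure 10 (blocks 1010 through 1030) might be modeled as below. The `IPU` class, the subnet choice, and the boolean `attested` flag are hypothetical stand-ins for the hardware attestation and secure-session flow, which this sketch does not attempt to reproduce.

```python
import ipaddress

class IPU:
    """Hypothetical model of Figure 10: discover, attest, and register
    resources, assigning each a unique IP address endpoint."""

    def __init__(self, subnet: str = "10.0.0.0/24"):
        self._hosts = ipaddress.ip_network(subnet).hosts()  # generator of host addresses
        self.inventory = {}  # resource id -> assigned IP endpoint

    def discover(self, resource: dict) -> str:
        # Block 1020: enumerate and attest the resource before admitting it.
        if not resource.get("attested"):
            raise ValueError(f"attestation failed for {resource['id']}")
        endpoint = str(next(self._hosts))  # assign a unique IP address endpoint
        self.inventory[resource["id"]] = endpoint
        return endpoint

    def report(self) -> dict:
        # Block 1030: report capabilities and endpoints to the orchestration controller.
        return dict(self.inventory)
```

A resource that fails attestation is rejected before it ever appears in the inventory reported to the controller.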
At processing block 1040, identity and certificates provisioning of the resources 850 is performed by interacting with a secure processing element within a resource 850. [0066] Figure 11 is a flow diagram illustrating one embodiment of a process for composing a node. At processing block 1110, a developer (e.g., microservice developer) provides a worker node configuration to orchestration controller 410 in the form of a manifest. In one embodiment, the manifest lists the types of resources that are needed, attributes related to the resources, details regarding the workload that will execute on the resources, as well as other metadata. In current implementations, manifests include information regarding a containerized application image and where the image may be located (e.g., in a registry or a local store). According to one embodiment, registries are provided within each accelerator to store configuration information (e.g., bitstreams of FPGAs, compute kernels for GPUs, etc.).[0067] At processing block 1120, orchestration controller 410 finds available resources within the platform. In one embodiment, orchestration controller 410 examines the available resources based on a persistent cluster state, and schedules the corresponding resources by interacting with a node agent 813 within coordination logic 812 of IPU 810. IPU node agents 813 are control plane components that communicate with orchestration controller 410. In one embodiment, node agents 813 operate as endpoints with which orchestration controller 410 may communicate for management related functions. In such an embodiment, a node agent 813 may listen for new requests from the orchestration controller 410 (e.g., via out-of-band or in-band management). In a further embodiment, orchestration controller 410 assigns an identifier (or composed platform ID) to a resource and creates a mapping to individual resource IDs. Further, orchestration controller 410 removes the resource IDs from an available resources pool.
Accordingly, orchestration controller 410 returns a failure message in instances in which a resource requested by a manifest is not available.[0068] At processing block 1130, a node agent 813 having a corresponding platform ID and resource ID to be allocated receives a configuration file including configuration information from orchestration controller 410 during a scheduling process. In one embodiment, the configuration file provides details (e.g., on how to reach the other endpoint, like an IP address, port number) regarding each IPU node agent 813 involved in configuring the composable platform. In a further embodiment, the IPU 810 managing CPU resources operates as a master, and establishes mutually authenticated secure channels with the IPUs having the other resources 850. In yet a further embodiment, this master IPU 810 requests virtualized resource 850 endpoint objects from the other IPUs 810. Figure 12A illustrates one embodiment of the platform after receipt of the worker node configuration request at an IPU 810 from orchestration controller 410.[0069] At processing block 1140, the master IPU 810 exposes the virtualized resource 850 endpoint as a hot-pluggable PCIe device that is enumerated on a CPU platform. In one embodiment, the actual translation (e.g., CPU platform <-> PCIe <-> custom protocol (such as accelerator over fabric) <-> PCIe <-> accelerator) is handled transparently by the IPUs. It is designed as a protocol similar to NVMe over Fabric (an XPU over Fabric) that encapsulates the underlying transfer mechanisms. Figure 12B illustrates one embodiment of the platform after a virtualized accelerator endpoint has been exposed.[0070] At processing block 1150, an IPU 810 transmits a message to orchestration controller 410 informing that the worker node has been successfully composed. At processing block 1160, an IPU 810 receives the specification for an execution environment for a microservice from orchestration controller 410.
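The scheduling bookkeeping of paragraph [0067] (assigning a composed-platform ID, mapping it to individual resource IDs, removing those IDs from the available pool, and returning failure when a requested resource is unavailable) can be sketched as follows. All class and variable names are illustrative assumptions, not the disclosure's implementation.

```python
import uuid

class OrchestrationController:
    """Sketch of the Figure 11 scheduling step: carve resources out of an
    available pool and map a composed-platform ID to individual resource IDs."""

    def __init__(self, pool: dict):
        self.pool = dict(pool)  # available resource_id -> resource type
        self.platforms = {}     # composed platform_id -> [resource_id, ...]

    def compose(self, wanted_types: list):
        chosen = []
        for rtype in wanted_types:
            rid = next((r for r, t in self.pool.items()
                        if t == rtype and r not in chosen), None)
            if rid is None:
                return None  # failure message: requested resource not available
        # Only commit once every request is satisfiable (no partial allocation).
            chosen.append(rid)
        platform_id = uuid.uuid4().hex
        for rid in chosen:
            del self.pool[rid]  # remove resource IDs from the available pool
        self.platforms[platform_id] = chosen
        return platform_id
```

Because nothing is removed from the pool until every requested type is matched, a failed request leaves the available-resources pool untouched.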
At processing block 1170, an IPU 810 communicates with a registry to retrieve one or more images associated with the configuration information included in the configuration file. In one embodiment, an image comprises container images, bitstreams, configuration information, etc.[0071] At processing block 1180, the IPU verifies the image. In one embodiment, the IPU verifies the image by verifying the image signature, and decrypting and inspecting the image for potentially malicious code. Figure 12C illustrates one embodiment of the platform after images have been pulled by each IPU 810. At processing block 1190, an IPU 810 transfers the respective images to the resource 850 management bitstream, where the resource 850 creates an execution environment based on the provided image.[0072] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0073] Example 1 includes an apparatus comprising a plurality of disaggregated data center resources and an infrastructure processing unit (IPU), communicatively coupled to the plurality of resources, to compose a platform of the plurality of disaggregated data center resources for allocation to a microservices cluster.[0074] Example 2 includes the subject matter of Example 1, further comprising an orchestration controller, communicatively coupled to the IPU, to compose the platform via the IPU during a provisioning phase.[0075] Example 3 includes the subject matter of any of Examples 1-2, wherein the orchestration controller schedules a microservice at one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice.[0076] Example 4 includes the subject matter of any of Examples 1-3, wherein the IPU discovers and performs management of the plurality of disaggregated data center resources.
[0077] Example 5 includes the subject matter of any of Examples 1-4, wherein the IPU reports information associated with each of the plurality of disaggregated data center resources to the orchestration controller.[0078] Example 6 includes the subject matter of any of Examples 1-5, wherein the IPU authenticates and attests the plurality of disaggregated data center resources.[0079] Example 7 includes the subject matter of any of Examples 1-6, wherein the IPU establishes a communication session with each of the plurality of disaggregated data center resources.[0080] Example 8 includes the subject matter of any of Examples 1-7, wherein the IPU receives a configuration file including configuration information from the orchestration controller during a scheduling process.[0081] Example 9 includes the subject matter of any of Examples 1-8, wherein the IPU exposes a virtualized resource endpoint at a disaggregated data center resource.[0082] Example 10 includes the subject matter of any of Examples 1-9, wherein the IPU transmits a message to the orchestration controller indicating that a disaggregated data center resource has been composed and receives a specification for an execution environment for a microservice from the orchestration controller.[0083] Example 11 includes the subject matter of any of Examples 1-10, wherein the IPU retrieves one or more images associated with the configuration information included in the configuration file from a registry and transfers the one or more images to a disaggregated data center resource.[0084] Example 12 includes a method comprising performing provisioning at an infrastructure processing unit (IPU) to compose a platform of a plurality of disaggregated data center resources for allocation to a microservices cluster and performing orchestration to compose one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice.[0085] Example 13 includes the subject matter of
Example 12, wherein performing the provisioning comprises the IPU discovering and managing the plurality of disaggregated data center resources.[0086] Example 14 includes the subject matter of any of Examples 12-13, wherein performing the provisioning further comprises the IPU reporting information associated with each of the plurality of disaggregated data center resources to the orchestration controller.[0087] Example 15 includes the subject matter of any of Examples 12-14, wherein performing the provisioning further comprises the IPU authenticating the plurality of disaggregated data center resources, the IPU attesting the plurality of disaggregated data center resources and the IPU establishing a communication session with each of the plurality of disaggregated data center resources.[0088] Example 16 includes the subject matter of any of Examples 12-15, wherein performing the orchestration comprises scheduling a microservice at one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice.[0089] Example 17 includes the subject matter of any of Examples 12-16, wherein performing the orchestration further comprises the IPU receiving a configuration file including configuration information from an orchestration controller, transmitting a message to the orchestration controller indicating that a disaggregated data center resource has been composed; and receiving a specification for an execution environment for a microservice from the orchestration controller.[0090] Example 18 includes the subject matter of any of Examples 12-17, wherein performing the orchestration further comprises the IPU retrieving one or more images associated with the configuration information included in the configuration file from a registry and transferring the one or more images to a disaggregated data center resource.[0091] Example 19 includes a method comprising wherein performing the orchestration further comprises the
IPU retrieving one or more images associated with the configuration information included in the configuration file from a registry and transferring the one or more images to a disaggregated data center resource.[0092] Example 20 includes the subject matter of Example 19, wherein the resource management circuitry discovers and performs management of the plurality of disaggregated data center resources.[0093] Example 21 includes the subject matter of any of Examples 19-20, wherein the resource management circuitry reports information associated with each of the plurality of disaggregated data center resources to the orchestration controller.[0094] Example 22 includes the subject matter of any of Examples 19-21, wherein the resource management circuitry establishes a communication session with each of the plurality of disaggregated data center resources.[0095] Example 23 includes the subject matter of any of Examples 19-22, wherein the coordination circuitry receives a configuration file including configuration information from the orchestration controller during a scheduling process.[0096] Example 24 includes at least one computer readable medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform provisioning at an infrastructure processing unit (IPU) to compose a platform of a plurality of disaggregated data center resources for allocation to a microservices cluster and perform orchestration to compose one or more of the disaggregated data center resources via the IPU based on resource requirements provided by the microservice.[0097] The above Detailed Description includes references to the accompanying drawings, which form a part of the Detailed Description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described.
However, also contemplated are examples that include the elements shown or described. Moreover, also contemplated are examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.[0098] Publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) is supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.[0099] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In addition, "a set of" includes one or more elements. In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," "third," etc.
are used merely as labels, and are not intended to suggest a numerical order for their objects.[00100] The term “logic instructions” as referred to herein relates to expressions which may be understood by one or more machines for performing one or more logical operations. For example, logic instructions may comprise instructions which are interpretable by a processor compiler for executing one or more operations on one or more data objects. However, this is merely an example of machine-readable instructions and examples are not limited in this respect. [00101] The term "computer readable medium" as referred to herein relates to media capable of maintaining expressions which are perceivable by one or more machines. For example, a computer readable medium may comprise one or more storage devices for storing computer readable instructions or data. Such storage devices may comprise storage media such as, for example, optical, magnetic or semiconductor storage media. However, this is merely an example of a computer readable medium and examples are not limited in this respect.[00102] The term “logic” as referred to herein relates to structure for performing one or more logical operations. For example, logic may comprise circuitry which provides one or more output signals based upon one or more input signals. Such circuitry may comprise a finite state machine which receives a digital input and provides a digital output, or circuitry which provides one or more analog output signals in response to one or more analog input signals. Such circuitry may be provided in an application specific integrated circuit (ASIC) or field programmable gate array (FPGA). Also, logic may comprise machine-readable instructions stored in a memory in combination with processing circuitry to execute such machine-readable instructions.
However, these are merely examples of structures which may provide logic and examples are not limited in this respect.[00103] Some of the methods described herein may be embodied as logic instructions on a computer-readable medium. When executed on a processor, the logic instructions cause a processor to be programmed as a special-purpose machine that implements the described methods. The processor, when configured by the logic instructions to execute the methods described herein, constitutes structure for performing the described methods. Alternatively, the methods described herein may be reduced to logic on, e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC) or the like.[00104] In the description and claims, the terms coupled and connected, along with their derivatives, may be used. In particular examples, connected may be used to indicate that two or more elements are in direct physical or electrical contact with each other. Coupled may mean that two or more elements are in direct physical or electrical contact. However, coupled may also mean that two or more elements may not be in direct contact with each other, but yet may still cooperate or interact with each other.[00105] Reference in the specification to “one example” or “some examples” means that a particular feature, structure, or characteristic described in connection with the example is included in at least an implementation. The appearances of the phrase “in one example” in various places in the specification may or may not be all referring to the same example.[00106] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with others. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. 
The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. [00107] Although examples have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
Some embodiments include an integrated assembly having a first deck with first memory cells arranged in first tiers disposed one atop another, and having a second deck over the first deck and with second memory cells arranged in second tiers disposed one atop another. Cell-material-pillars pass through the first and second decks. The cell-material-pillars have first inter-deck inflections associated with a boundary between the first and second decks. The cell-material-pillars are arranged within a configuration which includes a first memory-block-region and a second memory-block-region. A panel is between the first and second memory-block-regions. The panel has a second inter-deck inflection associated with the boundary between the first and second decks. Some embodiments include methods of forming integrated assemblies.
CLAIMS I/we claim: 1. An integrated assembly, comprising: a first deck having first memory cells arranged in first tiers disposed one atop another; a second deck over the first deck and having second memory cells arranged in second tiers disposed one atop another; cell-material-pillars passing through the first and second decks; the cell-material-pillars having first inter-deck inflections associated with a boundary between the first and second decks; the cell-material-pillars being arranged within a configuration which includes a first memory-block-region and a second memory-block-region; and a panel between the first and second memory-block-regions; the panel having a second inter-deck inflection associated with the boundary between the first and second decks.2. The integrated assembly of claim 1 wherein the panel has a pair of opposing sidewalls in top-down view, with the sidewalls extending along a horizontal direction; and wherein the opposing sidewalls are substantially horizontally parallel to one another and substantially horizontally straight.3. The integrated assembly of claim 1 wherein the panel has a pair of opposing sidewalls in top-down view, with the sidewalls extending along a horizontal direction; and wherein the opposing sidewalls are substantially horizontally parallel to one another and have serpentine configurations along the horizontal direction.4.
The integrated assembly of claim 3 comprising: a first set of the cell-material-pillars within the first memory-block-region and along one of the opposing sidewalls; a second set of the cell-material-pillars within the second memory-block-region and along the other of the opposing sidewalls; the first set of the cell-material-pillars having neighboring edges adjacent said one of the opposing sidewalls; the second set of the cell-material-pillars having neighboring edges adjacent said other of the opposing sidewalls; and the serpentine configurations of the opposing sidewalls being configured to maintain a substantially uniform distance of said one of the opposing sidewalls from the neighboring edges of the cell-material-pillars of the first set, and to maintain a substantially uniform distance of said other of the opposing sidewalls from the neighboring edges of the cell-material-pillars of the second set.

5. The integrated assembly of claim 4 wherein the cell-material-pillars within the first and second memory-block-regions are along a pillar pitch, pp; and wherein a distance from a center of a cell-material-pillar of the first set, across the panel and to a center of a cell-material-pillar of the second set, is less than or equal to about 3 pp.

6. The integrated assembly of claim 5 wherein the distance is less than or equal to about 2.5 pp.

7. The integrated assembly of claim 5 wherein the distance is less than or equal to about 2 pp.

8. The integrated assembly of claim 1 wherein the panel comprises silicon dioxide.

9. The integrated assembly of claim 1 wherein the first inter-deck inflections are regions where narrower cell-material-pillar regions associated with the second deck merge with wider cell-material-pillar regions associated with the first deck.

10.
The integrated assembly of claim 9 wherein the second inter-deck inflections are regions where narrower panel regions associated with the second deck merge with wider panel regions associated with the first deck.

11. An integrated assembly, comprising: a stack of alternating conductive levels and insulative levels; cell-material-pillars passing through the stack; the cell-material-pillars being arranged within a configuration which includes a first memory-block-region and a second memory-block-region; memory cells comprising regions of the cell-material-pillars and being along the conductive levels; and a panel between the first and second memory-block-regions; the panel having a pair of opposing sidewalls in top-down view; the opposing sidewalls being substantially parallel to one another and having serpentine configurations along a horizontal direction.

12. The integrated assembly of claim 11 wherein the stack includes two or more decks provided one atop another.

13. The integrated assembly of claim 11 wherein the panel comprises silicon dioxide.

14.
The integrated assembly of claim 11 comprising: a first set of the cell-material-pillars within the first memory-block-region and along one of the opposing sidewalls; a second set of the cell-material-pillars within the second memory-block-region and along the other of the opposing sidewalls; the first set of the cell-material-pillars having neighboring edges adjacent said one of the opposing sidewalls; the second set of the cell-material-pillars having neighboring edges adjacent said other of the opposing sidewalls; and the serpentine configurations of the opposing sidewalls being configured to maintain a substantially uniform distance of said one of the opposing sidewalls from the neighboring edges of the cell-material-pillars of the first set, and to maintain a substantially uniform distance of said other of the opposing sidewalls from the neighboring edges of the cell-material-pillars of the second set.

15. The integrated assembly of claim 14 wherein the cell-material-pillars within the first and second memory-block-regions are along a pillar pitch, pp; and wherein a distance from a center of a cell-material-pillar of the first set, across the panel and to a center of a cell-material-pillar of the second set, is less than or equal to about 3 pp.

16. The integrated assembly of claim 15 wherein the distance is less than or equal to about 2.5 pp.

17. The integrated assembly of claim 15 wherein the distance is less than or equal to about 2 pp.

18.
A method of forming an integrated assembly, comprising: forming a first stack of alternating first and second tiers over a conductive structure; the first and second tiers comprising a first material and an insulative second material, respectively; forming first pillar openings to extend through the first stack, with the first pillar openings being arranged within a configuration which includes a first memory-block-region and a second memory-block-region, and forming a first slit opening to extend through the first stack and to be between the first and second memory-block-regions; forming a second stack of alternating third and fourth tiers over the first stack; the third and fourth tiers comprising a third material and an insulative fourth material, respectively; forming second pillar openings to extend through the second stack to the first pillar openings, and forming a second slit opening to extend through the second stack to the first slit opening; forming channel-material-pillars within the first and second pillar openings; the channel-material-pillars extending vertically through the first and second stacks and being electrically coupled with the conductive structure; replacing at least some of the first and third materials with one or more conductive materials to thereby convert the first and third tiers to first and second conductive levels, respectively; and forming a panel within the first and second slit openings; the panel extending vertically through the first and second stacks.

19. The method of claim 18 wherein the first and second slit openings extend along a horizontal direction and have opposing sidewalls which are parallel to one another; and wherein said opposing sidewalls are substantially straight along said horizontal direction in top-down view.

20.
The method of claim 18 wherein the first and second slit openings extend along a horizontal direction and have opposing sidewalls which are parallel to one another; and wherein said opposing sidewalls have a serpentine configuration along said horizontal direction in top-down view.

21. The method of claim 18 wherein the conductive structure is a source structure.

22. The method of claim 18 further comprising forming cell materials within the first and second pillar openings, and forming the channel-material-pillars adjacent to the cell materials; the cell materials including charge-blocking material, charge-storage material and gate-dielectric material.

23. The method of claim 18 further comprising forming sacrificial material within the first pillar openings and the first slit opening prior to forming the second stack.

24. The method of claim 23 wherein the sacrificial material comprises silicon.

25. The method of claim 23 wherein the sacrificial material comprises carbon.

26. The method of claim 23 wherein the sacrificial material comprises metal.

27. The method of claim 23 further comprising: removing the sacrificial material from within the first pillar openings after forming the second pillar openings; and after removing the sacrificial material, forming the channel-material-pillars within the first and second pillar openings.

28. The method of claim 27 further comprising: removing the sacrificial material from within the first slit openings after forming the second slit openings; and after removing the sacrificial material, forming the panel within the first and second slit openings.

29.
The method of claim 28 further comprising: after forming the channel-material-pillars, and after removing the sacrificial material from within the first slit openings, replacing said at least some of the first and third materials with said one or more conductive materials to thereby convert the first and third tiers to the first and second conductive levels; and after converting the first and third tiers to the first and second conductive levels, forming the panel within the first and second slit openings.

30. The method of claim 18 wherein the second and fourth insulative materials comprise a same composition as one another.

31. The method of claim 30 wherein said same composition comprises silicon dioxide.

32. The method of claim 18 wherein the first and third materials comprise a same composition as one another.

33. The method of claim 32 wherein said same composition comprises silicon nitride.
INTEGRATED ASSEMBLIES AND METHODS OF FORMING INTEGRATED ASSEMBLIES

RELATED PATENT DATA

This application claims priority to U.S. Patent Application Serial No. 17/002,339, filed August 25, 2020, the disclosure of which is incorporated herein by reference.

TECHNICAL FIELD

Integrated assemblies (e.g., NAND assemblies) and methods of forming integrated assemblies.

BACKGROUND

Memory provides data storage for electronic systems. Flash memory is one type of memory and has numerous uses in modern computers and devices. For instance, modern personal computers may have BIOS stored on a flash memory chip. As another example, it is becoming increasingly common for computers and other devices to utilize flash memory in solid state drives to replace conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade the devices for enhanced features.

NAND may be a basic architecture of flash memory and may be configured to comprise vertically-stacked memory cells.

Before describing NAND specifically, it may be helpful to more generally describe the relationship of a memory array within an integrated arrangement. FIG. 1 shows a block diagram of a prior art device 1000 which includes a memory array 1002 having a plurality of memory cells 1003 arranged in rows and columns along with access lines 1004 (e.g., wordlines to conduct signals WL0 through WLm) and first data lines 1006 (e.g., bitlines to conduct signals BL0 through BLn). Access lines 1004 and first data lines 1006 may be used to transfer information to and from the memory cells 1003. A row decoder 1007 and a column decoder 1008 decode address signals A0 through AX on address lines 1009 to determine which ones of the memory cells 1003 are to be accessed.
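The address-decode behavior described for FIG. 1 can be illustrated with a minimal sketch. This is not circuitry from the document: the one-hot mapping, the function name and the bit width are illustrative assumptions about how a column decoder selects one of the CSEL lines from the address bits.

```python
# Minimal sketch of a column decoder driving one-hot select lines
# (e.g., CSEL1 through CSELn in FIG. 1). Illustrative assumptions only.

def column_decode(address_bits, n_csel):
    """Return a one-hot list [CSEL1..CSELn] for the given column address.

    address_bits: MSB-first list of 0/1 values (a subset of A0..AX).
    """
    col = 0
    for bit in address_bits:
        col = (col << 1) | bit      # assemble the binary column address
    assert col < n_csel, "address out of range"
    return [1 if i == col else 0 for i in range(n_csel)]

csel = column_decode([0, 1, 1], 8)   # address 3 selects CSEL4 of 8 lines
print(csel)  # [0, 0, 0, 1, 0, 0, 0, 0]
```

Exactly one select line is active for any in-range address, which is the property the select circuit 1040 relies on when routing data lines to the I/O circuit.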
A sense amplifier circuit 1015 operates to determine the values of information read from the memory cells 1003. An I/O circuit 1017 transfers values of information between the memory array 1002 and input/output (I/O) lines 1005. Signals DQ0 through DQN on the I/O lines 1005 can represent values of information read from or to be written into the memory cells 1003. Other devices can communicate with the device 1000 through the I/O lines 1005, the address lines 1009, or the control lines 1020. A memory control unit 1018 is used to control memory operations to be performed on the memory cells 1003, and utilizes signals on the control lines 1020. The device 1000 can receive supply voltage signals Vcc and Vss on a first supply line 1030 and a second supply line 1032, respectively. The device 1000 includes a select circuit 1040 and an input/output (I/O) circuit 1017. The select circuit 1040 can respond, via the I/O circuit 1017, to signals CSEL1 through CSELn to select signals on the first data lines 1006 and the second data lines 1013 that can represent the values of information to be read from or to be programmed into the memory cells 1003. The column decoder 1008 can selectively activate the CSEL1 through CSELn signals based on the A0 through AX address signals on the address lines 1009. The select circuit 1040 can select the signals on the first data lines 1006 and the second data lines 1013 to provide communication between the memory array 1002 and the I/O circuit 1017 during read and programming operations.

The memory array 1002 of FIG. 1 may be a NAND memory array, and FIG. 2 shows a block diagram of a three-dimensional NAND memory device 200 which may be utilized for the memory array 1002 of FIG. 1. The device 200 comprises a plurality of strings of charge-storage devices.
In a first direction (Z-Z’), each string of charge-storage devices may comprise, for example, thirty-two charge-storage devices stacked over one another with each charge-storage device corresponding to one of, for example, thirty-two tiers (e.g., Tier0-Tier31). The charge-storage devices of a respective string may share a common channel region, such as one formed in a respective pillar of semiconductor material (e.g., polysilicon) about which the string of charge-storage devices is formed. In a second direction (X-X’), each first group of, for example, sixteen first groups of the plurality of strings may comprise, for example, eight strings sharing a plurality (e.g., thirty-two) of access lines (i.e., “global control gate (CG) lines”, also known as wordlines, WLs). Each of the access lines may couple the charge-storage devices within a tier. The charge-storage devices coupled by the same access line (and thus corresponding to the same tier) may be logically grouped into, for example, two pages, such as P0/P32, P1/P33, P2/P34 and so on, when each charge-storage device comprises a cell capable of storing two bits of information. In a third direction (Y-Y’), each second group of, for example, eight second groups of the plurality of strings, may comprise sixteen strings coupled by a corresponding one of eight data lines. The size of a memory block may comprise 1,024 pages and total about 16MB (e.g., 16 WLs x 32 tiers x 2 bits = 1,024 pages/block, block size = 1,024 pages x 16KB/page = 16MB). The number of the strings, tiers, access lines, data lines, first groups, second groups and/or pages may be greater or smaller than those shown in FIG. 2.

FIG. 3 shows a cross-sectional view of a memory block 300 of the 3D NAND memory device 200 of FIG. 2 in an X-X’ direction, including fifteen strings of charge-storage devices in one of the sixteen first groups of strings described with respect to FIG. 2.
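The example block-size arithmetic quoted above (16 WLs × 32 tiers × 2 bits per cell, 16 KB pages) can be checked with a short calculation. The figures are the text's example values, not fixed device parameters:

```python
# Block-size arithmetic using the example values from the text.
wordlines = 16       # WLs per block (example value)
tiers = 32           # tiers per string (example value)
bits_per_cell = 2    # two bits per cell -> two pages per access line per tier
page_size_kb = 16    # 16 KB per page (example value)

pages_per_block = wordlines * tiers * bits_per_cell
block_size_mb = pages_per_block * page_size_kb // 1024

print(pages_per_block)  # 1024 pages/block
print(block_size_mb)    # 16 MB
```

As the text notes, any of these counts may be larger or smaller in a given device, scaling the block size accordingly.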
The plurality of strings of the memory block 300 may be grouped into a plurality of subsets 310, 320, 330 (e.g., tile columns), such as tile columnI, tile columnJ and tile columnK, with each subset (e.g., tile column) comprising a “partial block” of the memory block 300. A global drain-side select gate (SGD) line 340 may be coupled to the SGDs of the plurality of strings. For example, the global SGD line 340 may be coupled to a plurality (e.g., three) of sub-SGD lines 342, 344, 346 with each sub-SGD line corresponding to a respective subset (e.g., tile column), via a corresponding one of a plurality (e.g., three) of sub-SGD drivers 332, 334, 336. Each of the sub-SGD drivers 332, 334, 336 may concurrently couple or cut off the SGDs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks. A global source-side select gate (SGS) line 360 may be coupled to the SGSs of the plurality of strings. For example, the global SGS line 360 may be coupled to a plurality of sub-SGS lines 362, 364, 366 with each sub-SGS line corresponding to the respective subset (e.g., tile column), via a corresponding one of a plurality of sub-SGS drivers 322, 324, 326. Each of the sub-SGS drivers 322, 324, 326 may concurrently couple or cut off the SGSs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks. A global access line (e.g., a global CG line) 350 may couple the charge-storage devices corresponding to the respective tier of each of the plurality of strings. Each global CG line (e.g., the global CG line 350) may be coupled to a plurality of sub-access lines (e.g., sub-CG lines) 352, 354, 356 via a corresponding one of a plurality of sub-string drivers 312, 314 and 316.
Each of the sub-string drivers may concurrently couple or cut off the charge-storage devices corresponding to the respective partial block and/or tier independently of those of other partial blocks and/or other tiers. The charge-storage devices corresponding to the respective subset (e.g., partial block) and the respective tier may comprise a “partial tier” (e.g., a single “tile”) of charge-storage devices. The strings corresponding to the respective subset (e.g., partial block) may be coupled to a corresponding one of sub-sources 372, 374 and 376 (e.g., “tile source”) with each sub-source being coupled to a respective power source.

The NAND memory device 200 is alternatively described with reference to a schematic illustration of FIG. 4. The memory array 200 includes wordlines 202₁ to 202N, and bitlines 228₁ to 228M. The memory array 200 also includes NAND strings 206₁ to 206M. Each NAND string includes charge-storage transistors 208₁ to 208N. The charge-storage transistors may use floating gate material (e.g., polysilicon) to store charge, or may use charge-trapping material (such as, for example, silicon nitride, metallic nanodots, etc.) to store charge. The charge-storage transistors 208 are located at intersections of wordlines 202 and strings 206. The charge-storage transistors 208 represent non-volatile memory cells for storage of data. The charge-storage transistors 208 of each NAND string 206 are connected in series source-to-drain between a source-select device (e.g., source-side select gate, SGS) 210 and a drain-select device (e.g., drain-side select gate, SGD) 212. Each source-select device 210 is located at an intersection of a string 206 and a source-select line 214, while each drain-select device 212 is located at an intersection of a string 206 and a drain-select line 215. The select devices 210 and 212 may be any suitable access devices, and are generically illustrated with boxes in FIG.
4. A source of each source-select device 210 is connected to a common source line 216. The drain of each source-select device 210 is connected to the source of the first charge-storage transistor 208 of the corresponding NAND string 206. For example, the drain of source-select device 210₁ is connected to the source of charge-storage transistor 208₁ of the corresponding NAND string 206₁. The source-select devices 210 are connected to source-select line 214.

The drain of each drain-select device 212 is connected to a bitline (i.e., digit line) 228 at a drain contact. For example, the drain of drain-select device 212₁ is connected to the bitline 228₁. The source of each drain-select device 212 is connected to the drain of the last charge-storage transistor 208 of the corresponding NAND string 206. For example, the source of drain-select device 212₁ is connected to the drain of charge-storage transistor 208N of the corresponding NAND string 206₁.

The charge-storage transistors 208 include a source 230, a drain 232, a charge-storage region 234, and a control gate 236. The charge-storage transistors 208 have their control gates 236 coupled to a wordline 202. A column of the charge-storage transistors 208 are those transistors within a NAND string 206 coupled to a given bitline 228. A row of the charge-storage transistors 208 are those transistors commonly coupled to a given wordline 202. It is desired to develop improved NAND architecture and improved methods for fabricating NAND architecture.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram of a prior art memory device having a memory array with memory cells.

FIG. 2 shows a schematic diagram of the prior art memory array of FIG. 1 in the form of a 3D NAND memory device.

FIG. 3 shows a cross-sectional view of the prior art 3D NAND memory device of FIG. 2 in an X-X’ direction.

FIG. 4 is a schematic diagram of a prior art NAND memory array.

FIGS.
5-14 are diagrammatic cross-sectional side views of a region of an integrated assembly at example sequential process stages of an example method for forming an example memory array.

FIGS. 6A, 9A and 14A are diagrammatic top-down views of regions of the integrated assemblies of FIGS. 6, 9 and 14, respectively; with the cross-section of FIG. 6 being along the line 6-6 of FIG. 6A, the cross-section of FIG. 9 being along the line 9-9 of FIG. 9A, and the cross-section of FIG. 14 being along the line 14-14 of FIG. 14A.

FIGS. 6B, 9B and 14B are diagrammatic top-down views of regions of integrated assemblies analogous to those of FIGS. 6A, 9A and 14A.

FIGS. 6C, 9C and 14C are diagrammatic top-down views of regions of integrated assemblies analogous to those of FIGS. 6A, 9A and 14A.

FIG. 9D is an enlarged diagrammatic cross-sectional side view of a region “D” of FIG. 9.

FIG. 14D is a diagrammatic top-down view of a region of the integrated assembly of FIG. 14C, and is through a different level than the view of FIG. 14C.

FIG. 14E is an enlarged diagrammatic cross-sectional side view of a region “E” of FIG. 14.

FIG. 11A is an enlarged diagrammatic cross-sectional side view of a region “A” of FIG. 11.

FIGS. 15-18 are diagrammatic cross-sectional side views of a region of an integrated assembly at example sequential process stages of an example method for forming an example memory array. The process stage of FIG. 15 may follow that of FIG. 5 in some embodiments.

DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

Some embodiments include methods of forming memory with two or more decks stacked one atop another, and some embodiments include configurations having two or more decks stacked one atop another. Example embodiments are described with reference to FIGS. 5-18.

Referring to FIG. 5, an assembly 10 includes a conductive structure 14. The conductive structure 14 may be a source structure analogous to the source structures 216 and 360 described above in the Background section.
The conductive structure 14 may comprise any suitable electrically conductive composition(s), and in some embodiments may comprise conductively-doped semiconductor material. The conductively-doped semiconductor material may be conductively-doped silicon (e.g., n-type silicon). The conductively-doped semiconductor material of the source structure 14 may be over one or more additional conductive materials of the source structure 14 (e.g., one or more metal-containing materials, such as, for example, one or both of tungsten and tungsten silicide).

The conductive structure 14 may be supported by a semiconductor base (not shown). The base may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. The base may support CMOS, and the structure 14 may be electrically coupled with the CMOS.

A stack 12 of alternating first and second tiers (levels, layers) 16 and 18 is formed over the conductive structure 14. The stack 12 may comprise any suitable number of alternating tiers 16 and 18. The tiers 16 ultimately become conductive levels of a memory arrangement. There may be any suitable number of tiers 16 to form the desired number of conductive levels. In some embodiments, the number of tiers 16 may be 8, 16, 32, 64, etc.

The first tiers 16 comprise a first material 20. Such first material may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of silicon nitride.

The second tiers 18 comprise a second material 22.
Such material may be an insulative material and may comprise any suitable composition(s). In some embodiments, the material 22 may comprise, consist essentially of, or consist of silicon dioxide.

In some embodiments, the materials 20 and 22 may be referred to as a first material and an insulative second material, respectively.

The tiers 16 and 18 may be of any suitable thicknesses; and may be the same thickness as one another, or may be different thicknesses relative to one another. In some embodiments, the tiers 16 and 18 may have vertical thicknesses within a range of from about 10 nanometers (nm) to about 400 nm. In the illustrated embodiment, the bottommost tier 18 is thicker than the other tiers 18. In other embodiments, the bottommost tier 18 may have a thickness which is about the same as the thickness of the other tiers 18, or may be less thick than the other tiers 18.

In some embodiments, the stack 12 may be referred to as a first stack to distinguish it from additional stacks formed at later process stages. The first stack 12 may be considered to be comprised by a first deck 24 (Deck-1). The first deck 24 may also comprise the source structure 14, as shown.

Referring to FIGS. 6 and 6A, pillar openings 26 are formed to extend through the stack 12. In the shown embodiment, the pillar openings 26 extend downwardly to an upper surface of the source structure 14. The pillar openings 26 are arranged within a configuration which includes adjacent memory-block-regions 28a and 28b. The memory-block-regions 28a and 28b may be referred to as first and second memory-block-regions, respectively. The first and second memory-block-regions 28a and 28b may be analogous to the memory blocks (or portions thereof) described above in the Background section.

Slit openings 30, 32 and 34 are also formed to extend through the stack 12.
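As noted above, the tiers of the first material (e.g., silicon nitride) ultimately become conductive levels, and the claims describe replacing the first and third materials with conductive materials. That "replace-gate" style flow can be summarized with a small sketch; the function names and string labels below are invented for illustration and are not terminology from the document:

```python
# Illustrative model of a deck built as alternating sacrificial/insulative
# tiers, with the sacrificial tiers later swapped for conductive levels.

def build_deck(n_conductive_levels):
    """Build an alternating tier sequence (bottom to top)."""
    deck = []
    for _ in range(n_conductive_levels):
        deck.append("sacrificial nitride")   # first material 20 (placeholder)
        deck.append("insulative oxide")      # second material 22
    return deck

def replace_gate(deck):
    """Swap each sacrificial tier for a conductive level; oxide stays."""
    return ["conductive level" if t == "sacrificial nitride" else t
            for t in deck]

deck = build_deck(8)
print(deck.count("sacrificial nitride"))             # 8
print(replace_gate(deck).count("conductive level"))  # 8
```

The point of the model is simply that the number of sacrificial tiers fixes the number of conductive levels, which matches the text's statement that there may be any suitable number of tiers 16 (e.g., 8, 16, 32, 64) to form the desired number of conductive levels.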
In some embodiments, the slit opening 30 may be referred to as a first slit opening, with such first slit opening being between the first and second memory-block-regions 28a and 28b.

FIG. 6A shows the pillar openings 26 to be circular-shaped in top-down view. In other embodiments, the pillar openings 26 may have other shapes (e.g., elliptical, polygonal, etc.).

FIG. 6A also shows the slit openings 30, 32 and 34 formed to extend along a horizontal y-axis direction. Each of the slit openings has a pair of opposing sidewalls, with the sidewalls of the slit opening 30 being labeled 31a and 31b. The sidewalls 31a and 31b may be referred to as first and second sidewalls, respectively. The sidewalls 31a and 31b are parallel to one another, and are substantially straight along the y-axis direction in the embodiment of FIG. 6A (with the term “substantially straight” meaning straight to within reasonable tolerances of fabrication and measurement).

FIGS. 6B and 6C show embodiments analogous to that of FIG. 6A, but with the slit openings 30, 32 and 34 each having parallel sidewalls (e.g., sidewalls 31a and 31b of the slit opening 30) which have a serpentine (winding, wavy, weaving, etc.) configuration along the y-axis direction.

Referring to FIG. 7, sacrificial material 36 is formed within the openings 26, 30, 32 and 34. The sacrificial material 36 may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of one or more of metal (e.g., tungsten), undoped semiconductor material (e.g., undoped silicon), carbon, aluminum oxide, etc.; with the term “undoped” meaning not significantly doped, and in some embodiments meaning a dopant concentration of less than or equal to about 1 × 10^16 atoms/cm^3. In some embodiments (not shown) the sacrificial material within the slits 30, 32 and 34 may be compositionally different than that within the pillar openings 26.
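The numeric cutoff given above for "undoped" (a dopant concentration of less than or equal to about 1 × 10^16 atoms/cm³) can be restated as a one-line predicate. This is simply the stated threshold, not a process rule beyond what the text says:

```python
# "Undoped" per the text: dopant concentration <= about 1e16 atoms/cm^3.
UNDOPED_MAX_ATOMS_PER_CM3 = 1e16

def is_undoped(dopant_concentration_atoms_per_cm3):
    """True if the material counts as 'undoped' under the stated cutoff."""
    return dopant_concentration_atoms_per_cm3 <= UNDOPED_MAX_ATOMS_PER_CM3

print(is_undoped(5e15))  # True: below the stated threshold
print(is_undoped(1e18))  # False: significantly doped
```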
A planarized surface 35 is formed to extend across the sacrificial material 36 and the upper tier 18. The planarized surface 35 may be formed with any suitable processing, including, for example, chemical-mechanical polishing (CMP).

Referring to FIG. 8, a second stack 38 of alternating third and fourth tiers (levels, layers) 40 and 42 is formed over the first stack 12. The stack 38 may comprise any suitable number of alternating tiers 40 and 42. The tiers 40 ultimately become conductive levels of a memory arrangement. There may be any suitable number of tiers 40 to form the desired number of conductive levels. In some embodiments, the number of tiers 40 may be 8, 16, 32, 64, etc.

The third tiers 40 comprise a third material 44. Such third material may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of silicon nitride. Accordingly, the third material 44 may comprise a same composition as the first material 20.

The fourth tiers 42 comprise a fourth material 46. Such fourth material may be an insulative material and may comprise any suitable composition(s). In some embodiments, the fourth material 46 may comprise, consist essentially of, or consist of silicon dioxide. In some embodiments, the insulative fourth material 46 may comprise a same composition as the insulative second material 22.

The tiers 40 and 42 may have the same thicknesses described above relative to the tiers 16 and 18, or may have different thicknesses than the tiers 16 and 18. The second stack 38 may be considered to be comprised by a second deck 48 (Deck-2).

Referring to FIG. 9, second pillar openings 50 are formed to extend through the second stack 38 to the sacrificial material 36 within the first pillar openings 26. Also, slit openings 52, 54 and 56 are formed to extend through the second stack 38 to the sacrificial material 36 within the slit openings 30, 32 and 34, respectively.
In some embodiments, the slit opening 52 between the memory-block-regions 28a and 28b may be referred to as a second slit opening, and such may be considered to extend through the second stack 38 to the first slit opening 30.

An inter-deck region 58 is diagrammatically indicated in FIG. 9 as a region where the decks 24 and 48 interface with one another. The openings 50 and 26 together form first inter-deck inflections 60 (only one of which is labeled in FIG. 9) where they join. The slit openings 52 and 30 together form a second inter-deck inflection 62 where they join. Similarly, the slit openings 54 and 32 together form an inter-deck inflection 62 where they join, and the slit openings 56 and 34 together form an inter-deck inflection 62 where they join.

FIG. 9D shows an enlarged view of a region D along the inter-deck region 58 to more clearly illustrate some of the example inter-deck inflections 60 and 62. The illustrated inter-deck inflections 60 occur where the pillar openings 50 (the pillar openings formed through the second deck 48) meet the pillar openings 26 (the pillar openings formed through the first deck 24), and are a result of having the first and second pillar openings tapered during formation of such openings. Accordingly, the inflections 60 occur where a narrow lower portion of the tapered opening 50 joins to a wide upper portion of the tapered opening 26. The illustrated inter-deck inflections 62 are similar to the inflections 60, and occur where the narrow lower portions of the tapered slit openings (e.g., 52) of the upper deck meet the wide upper portions of the tapered slit openings (e.g., 30) of the lower deck.

FIGS. 9 and 9D show examples of inter-deck inflections that may be detected in an inter-deck region. The inter-deck inflections result from one portion of a configuration being formed during fabrication associated with the lower deck and another portion of the configuration being formed during fabrication associated with the upper deck.
In other embodiments, the inter-deck inflections may have other manifestations than are shown in FIGS. 9 and 9D. For instance, the inter-deck inflections may correspond to regions where an opening through an upper deck is offset relative to an opening through a lower deck (e.g., through mask misalignment during formation of the opening). In some embodiments, one or more of the openings 26, 30, 32, 34, 50, 52, 54 and 56 may not have the shown tapering.FIG. 9A shows that the slit openings 52, 54 and 56 extend along the illustrated y-axis direction. Each of the slit openings has a pair of opposing sidewalls, with the sidewalls of the slit opening 52 being labeled 53a and 53b. The sidewalls 53a and 53b may be referred to as first and second sidewalls, respectively. The sidewalls 53a and 53b are parallel to one another, and are substantially straight along the y-axis direction in the embodiment of FIG. 9A.FIGS. 9B and 9C show embodiments analogous to that of FIG. 9A, but with the slit openings 52, 54 and 56 having parallel sidewalls (e.g., sidewalls 53a and 53b of the slit opening 52) which each have a serpentine (winding, wavy, weaving, etc.) configuration along the y-axis direction.Referring to FIG. 10, additional sacrificial material 64 is formed within the slit openings 52, 54 and 56; patterned masking material 66 is provided along a top surface of the assembly 10 to protect the sacrificial materials 36 and 64 within the slit openings 30, 32, 34, 52, 54 and 56; and subsequently the sacrificial material 36 is removed from within the pillar openings 26. The pillar openings 26 and 50 within the upper and lower decks 24 and 48 merge to form vertically-extending pillar openings 26/50 which extend entirely through both of the first and second decks 24 and 48.In some embodiments, the sacrificial material 64 may be formed within the second pillar openings 50 (FIG. 
9) as such sacrificial material is formed within the slit openings 52, 54 and 56; a planarized surface 63 may be formed to extend across the sacrificial material 64 and the upper tier 42; the patterned masking material 66 may be formed on such planarized surface 63; and then the sacrificial materials 36 and 64 may be removed from the pillar openings to leave the resulting configuration of FIG. 10 having vertically-extending openings 26/50 extending entirely through the decks 24 and 48 (i.e., extending entirely through the stacks 12 and 38). The patterned mask 66 may comprise any suitable composition(s), and in some embodiments may comprise photolithographically-patterned photoresist.

The sacrificial material 64 may comprise any suitable composition(s), and in some embodiments may comprise a same composition as the sacrificial material 36.

Referring to FIG. 11, channel-material-pillars 68 are formed within the openings 26/50. The masking material 66 may or may not remain over the slit regions as the channel-material-pillars are formed, and in the shown embodiment of FIG. 11 remains over such slit regions.

The channel-material-pillars 68 may be considered to extend vertically through the first and second decks 24 and 48, and are shown to be electrically coupled with the conductive structure 14 (and in the shown embodiment are directly against the conductive structure 14). The channel-material-pillars 68 are shown to be hollow, and to laterally surround insulative material 70. The channel-material-pillars 68 are offset from edges of the openings 26/50 by regions comprising cell materials.

The channel-material-pillars and cell materials are shown in more detail relative to an enlarged view of FIG. 11A. The channel-material-pillars 68 comprise channel material 72. The channel material 72 may comprise any suitable semiconductor composition(s).
In some embodiments, the channel material 72 may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc.; with the term III/V semiconductor material referring to semiconductor materials comprising elements selected from groups III and V of the periodic table (with groups III and V being old nomenclature, and now being referred to as groups 13 and 15). In some embodiments, the channel material 72 may comprise silicon. The silicon may be in any suitable crystalline state (e.g., monocrystalline, polycrystalline, amorphous, etc.).

The channel material 72 is offset from the edge of the opening 26/50 by a region 74 comprising cell materials. The cell materials within the region 74 may include gate-dielectric material (insulative material, tunneling material) 76, charge-storage material 78, and charge-blocking material 80.

The gate-dielectric material 76 may comprise any suitable composition(s); and in some embodiments may comprise one or more of silicon dioxide, silicon nitride, aluminum oxide, hafnium oxide, zirconium oxide, etc. In some embodiments, the material 76 may comprise a bandgap-engineered laminate.

The charge-storage material 78 may comprise any suitable composition(s), and in some embodiments may comprise charge-trapping material (e.g., one or more of silicon nitride, silicon oxynitride, conductive nanodots, etc.).

The charge-blocking material 80 may comprise any suitable composition(s), and in some embodiments may comprise one or both of silicon dioxide and silicon oxynitride.

The insulative material 70 may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.
In some embodiments, the insulative material 70 may be omitted and the channel-material-pillars 68 may be solid pillars rather than being the illustrated hollow pillars.

In some embodiments, the materials 72, 76, 78 and 80 may be considered together to form cell-material-pillars 82. In other words, the cell-material-pillars 82 may be considered to comprise the channel-material-pillars 68 together with the cell materials 76, 78 and 80.

Referring to FIG. 12, the masking material 66 (FIG. 11) is removed together with the sacrificial materials 36 and 64 within the slit regions to leave slit openings 32/54, 30/52 and 34/56 extending through the decks 24 and 48 (i.e., through the stacks 12 and 38).

Referring to FIG. 13, etchant (not shown) is flowed into the slit openings 32/54, 30/52 and 34/56, and is utilized to remove the materials 20 and 44 (shown in FIG. 12) to form voids 84 along the levels 16 and 40.

Referring to FIG. 14, dielectric-barrier material 86 is formed within the voids 84 (FIG. 13) to line the voids, and then conductive material 88 is formed within the lined voids.

The dielectric-barrier material 86 may comprise any suitable composition(s); and may, for example, comprise one or more high-k compositions (e.g., aluminum oxide, hafnium oxide, zirconium oxide, etc.). The term "high-k composition" means a composition having a dielectric constant greater than the dielectric constant associated with silicon dioxide (i.e., greater than about 3.9). In some embodiments, the dielectric-barrier material 86 may be formed within the openings 26/50 (FIGS. 11 and 11A) as one of the cell materials within the regions 74 (FIG.
11A) in addition to, or alternatively to, being formed within the voids 84.

The conductive material 88 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the conductive material 88 may comprise a metal-containing core (e.g., a tungsten-containing core), and a metal nitride (e.g., titanium nitride, tungsten nitride, etc.) along a periphery of the metal-containing core. In some embodiments, the conductive material 88 may be considered to be configured as wordlines along the conductive levels 16 and 40, and may be referred to as conductive wordline material.

The alternating levels 16 and 18 of the first stack 12 may be referred to as first conductive levels and first insulative levels, respectively; and the alternating levels 40 and 42 of the second stack 38 may be referred to as second conductive levels and second insulative levels, respectively.

The processing of FIGS. 13 and 14 may be considered to replace at least some of the first and third materials 20 and 44 (FIG. 12) with one or more conductive materials (e.g., the conductive material 88) to form the first and second conductive levels 16 and 40 of FIG. 14.

After the materials 86 and 88 are formed within the voids 84 (FIG. 13), panels 90 are formed within the slit openings 32/54, 30/52 and 34/56. The panels 90 may comprise any suitable composition(s). In the shown embodiment, the panels 90 comprise a homogeneous insulative composition 92. Such composition may, for example, comprise, consist essentially of, or consist of silicon dioxide.
In other embodiments, the panels may comprise laminates of two or more compositions, and at least one of such compositions may be conductive.

First memory cells 15 (only some of which are labeled) are along the first conductive levels 16 of the first deck 24, and second memory cells 17 (only some of which are labeled) are along the second conductive levels 40 of the second deck 48. Each of the first and second memory cells includes a portion of a channel-material-pillar 68, portions of the memory cell materials adjacent the channel-material-pillar (with the memory cell materials being described above with reference to FIG. 11A), and portions of the conductive levels. The memory cells 15 and 17 along the pillars 82 may correspond to vertical strings of memory cells suitable for utilization in NAND memory of the types described above with reference to FIGS. 1-4.

The bottom conductive level 16 of the first deck 24 is shown to comprise source-side select gate (SGS) devices 94 rather than comprising memory cells. In some embodiments, more than one of the conductive levels may be incorporated into the SGS devices. If multiple conductive levels are incorporated into the SGS devices, the conductive levels may be electrically ganged together.

The first memory cells 15 may be considered to be arranged in first tiers (the levels 16), with such first tiers being disposed one atop another and being comprised by the first deck 24. The second memory cells 17 may be considered to be arranged in second tiers (the levels 40), with such second tiers being disposed one atop another and being comprised by the second deck 48. The cell-material-pillars 82 (and the memory cells 15 and 17 associated with such pillars) are arranged within a configuration that includes the first and second memory-block-regions 28a and 28b.

The inter-deck region 58 is diagrammatically indicated in FIG. 14 as the region where the decks 24 and 48 interface with one another.
The first and second inter-deck inflections 60 and 62 are shown in FIG. 14, with the first inter-deck inflections being along the cell-material-pillars 82, and the second inter-deck inflections being along the panels 90. In some embodiments, the first inter-deck inflections 60 may be considered to be associated with a boundary between the first and second decks 24 and 48, and to be within the cell-material-pillars 82; and the second inter-deck inflections 62 may be considered to be associated with the boundary between the first and second decks, and to be within the panels 90. The inter-deck inflections may result from the openings being formed in the top and bottom decks in separate process stages, as described in more detail above with reference to FIGS. 9 and 9D. FIG. 14E shows an enlarged view of a region E along the inter-deck region 58 to more clearly illustrate representative inter-deck inflections 60 and 62.

The top-down view of FIG. 14A shows that the panels 90 extend along the horizontal direction corresponding to the illustrated y-axis direction. Each of the panels has a pair of opposing sidewalls, with the sidewalls of the central panel being labeled 93a and 93b. The sidewalls 93a and 93b may be referred to as first and second sidewalls, respectively. The sidewalls 93a and 93b are parallel to one another, and are substantially straight along the y-axis direction in the embodiment of FIG. 14A.

FIGS. 14B and 14C show embodiments analogous to that of FIG. 14A, but with the panels having parallel sidewalls (e.g., sidewalls 93a and 93b of the central panel 90) which each have a serpentine (winding, wavy, weaving, etc.) configuration along the y-axis direction.

An advantage of the serpentine sidewall configurations of FIGS. 14B and 14C is that they may enable the panel sidewalls to maintain a uniform distance relative to neighboring edges of cell-material-pillars. Such advantage is described in more detail relative to FIG.
14D, which shows a top-down cross-section through one of the conductive levels 40 (the dielectric material 86 is not shown in FIG. 14D to simplify the drawing). The cell-material-pillars 82 within the first memory-block-region 28a may be considered to include a first set 96 of the cell-material-pillars 82, which are those pillars that neighbor the sidewall 93a of the central panel 90 (i.e., the panel that separates the memory-block-region 28a from the memory-block-region 28b). Analogously, the cell-material-pillars 82 within the second memory-block-region 28b may be considered to include a second set 98 of the cell-material-pillars 82, which are those pillars that neighbor the sidewall 93b of the central panel 90. The pillars 82 within the first set 96 have neighboring edges 95 (only some of which are labeled) adjacent the sidewall 93a of the central panel 90, and the pillars 82 of the second set 98 have neighboring edges 97 adjacent the sidewall 93b of the central panel 90.

The serpentine configuration of the sidewalls 93a and 93b advantageously may enable the sidewalls 93a and 93b of the panel 90 to be maintained at a substantially uniform distance D1 from the neighboring edges 95 and 97 of the pillars 82 within the sets 96 and 98. In contrast, the straight sidewalls of the embodiment of FIG. 14A have varying distances D2 and D3 from neighboring edges of neighboring pillars 82. Such varying distances may problematically lead to nonuniformity of device performance (e.g., memory cell performance) due to, for example, nonuniform resistances along the pillars 82 resulting from differing sizes of segments of conductive wordline material 88 between the pillars and the sidewalls of the panels 90. In some embodiments, problems associated with the straight panel sidewalls of the embodiment of FIG. 14A may be alleviated, and even prevented, utilizing weaving panel sidewalls of the types shown in the embodiments of FIGS.
14B-D.

Another advantage of the serpentine sidewall configurations of FIGS. 14B-14D may be that they can enable the memory blocks 28a and 28b to be more tightly packed than is possible with the straight sidewalls of FIG. 14A. Referring initially to FIG. 14A, a center-to-center spacing S between a pillar 82 within the memory-block-region 28a and an adjacent pillar 82 within the memory-block-region 28b may be expressed in terms of a pillar pitch (pp), with the pillar pitch being a center-to-center distance between adjacent pillars in the memory-block-regions 28a and 28b. In some embodiments, the straight-sidewall panel configuration of FIG. 14A will lead to spacing distances (S) of at least about 3.5 pp. In contrast, the weaving-sidewall panel configurations of FIGS. 14B-D may lead to spacing distances (S) of less than or equal to about 3 pp, less than or equal to about 2.5 pp, and even less than or equal to about 2 pp.

The embodiments described above show two decks (24 and 48) stacked one on top of the other. In some applications, analogous embodiments may be applied to configurations having more than two decks stacked one on top of another.

The formation of the first regions of the slit openings (30, 32 and 34) and pillar openings (26) within the first deck 24, followed by the formation of the second regions of the slit openings (52, 54 and 56) and pillar openings (50) within the second deck 48, may enable the overall slit openings (30/52, 32/54 and 34/56) and overall pillar openings (26/50) to be formed with more uniformity than could be achieved by attempting to etch the slit openings and pillar openings through the first and second decks 24 and 48 in a single step, and may lead to better critical dimensions (e.g., less tapering) than would occur if the slit openings and pillar openings were etched through the first and second decks in a single step.
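The packing comparison above (straight sidewalls requiring at least about 3.5 pp of block-to-block spacing versus about 2 pp for serpentine sidewalls) can be made concrete with a short calculation. The 100 nm pillar pitch below is a hypothetical value chosen purely for illustration; only the 3.5 pp and 2 pp figures come from this disclosure:

```python
# Block-to-block pillar spacing S expressed in units of pillar pitch (pp),
# comparing straight panel sidewalls against serpentine (weaving) sidewalls.
pillar_pitch_nm = 100.0  # hypothetical pitch, for illustration only

s_straight_pp = 3.5    # straight sidewalls: S of at least about 3.5 pp
s_serpentine_pp = 2.0  # serpentine sidewalls: S of about 2 pp achievable

s_straight_nm = s_straight_pp * pillar_pitch_nm
s_serpentine_nm = s_serpentine_pp * pillar_pitch_nm
reduction = 1.0 - s_serpentine_pp / s_straight_pp  # fractional spacing reduction

print(f"straight sidewalls:   S = {s_straight_nm:.0f} nm")
print(f"serpentine sidewalls: S = {s_serpentine_nm:.0f} nm")
print(f"block-to-block spacing reduction: {reduction:.0%}")
```

Under these assumed numbers, the serpentine configuration shrinks the inter-block spacing by roughly 43%, which is the source of the tighter packing claimed for the embodiments of FIGS. 14B-D.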
However, it is to be understood that some embodiments may include formation of at least some of the openings through multiple decks in a single step, rather than forming portions of the openings within each of the decks in separate etch steps. For instance, FIGS. 15-18 illustrate an example embodiment in which the pillar openings are formed within multiple decks utilizing separate etch steps, and in which the slit openings are formed through the multiple decks with a single etch step.

Referring to FIG. 15, the assembly 10 is shown at a process stage analogous to that of FIG. 6, except that only the pillar openings 26 are formed within the first deck 24 rather than also forming the slit openings within the first deck.

Referring to FIG. 16, the second deck 48 is formed over the first deck 24 with processing analogous to that described above with reference to FIG. 7.

Referring to FIG. 17, the pillar openings 50 are formed within the second deck 48 with processing analogous to that described above with reference to FIG. 9, and the cell-material-pillars 82 are formed within the openings 50/26 with processing analogous to that described above with reference to FIG. 11.

Referring to FIG. 18, slit openings 100 are formed through the first and second decks 24 and 48. In subsequent processing, the materials 20 and 44 may be at least partially replaced with conductive materials to form wordline levels (conductive levels) 16 and 40 analogous to those described above with reference to FIG. 14, and panels 90 (analogous to those described above with reference to FIG. 14) may be formed within the slit openings 100. In some embodiments, the slit openings 100 of FIG. 18 may be configured to have weaving (serpentine, wavy, etc.) sidewall configurations analogous to those described above with reference to FIGS. 9B and 9C, and the panels 90 formed within the slit openings 100 may be configured to have weaving (serpentine, wavy, etc.)
sidewall configurations analogous to those described above with reference to FIGS. 14B-D.

The assemblies and structures discussed above may be utilized within integrated circuits (with the term "integrated circuit" meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

The terms "dielectric" and "insulative" may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term "dielectric" in some instances, and the term "insulative" (or "electrically insulative") in other instances, may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The terms "electrically connected" and "electrically coupled" may both be utilized in this disclosure. The terms are considered synonymous.
The utilization of one term in some instances and the other in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow.

The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation.

The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings.

When a structure is referred to above as being "on", "adjacent" or "against" another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being "directly on", "directly adjacent" or "directly against" another structure, there are no intervening structures present. The terms "directly under", "directly over", etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment.

Structures (e.g., layers, materials, etc.) may be referred to as "extending vertically" to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate).
The vertically-extending structures may extend substantially orthogonally relative to an upper surface of the base, or not.

Some embodiments include an integrated assembly having a first deck with first memory cells arranged in first tiers disposed one atop another, and having a second deck over the first deck and with second memory cells arranged in second tiers disposed one atop another. Cell-material-pillars pass through the first and second decks. The cell-material-pillars have first inter-deck inflections associated with a boundary between the first and second decks. The cell-material-pillars are arranged within a configuration which includes a first memory-block-region and a second memory-block-region. A panel is between the first and second memory-block-regions. The panel has a second inter-deck inflection associated with the boundary between the first and second decks.

Some embodiments include an integrated assembly having a stack of alternating conductive levels and insulative levels. Cell-material-pillars pass through the stack. The cell-material-pillars are arranged within a configuration which includes a first memory-block-region and a second memory-block-region. Memory cells include regions of the cell-material-pillars and are along the conductive levels. A panel is between the first and second memory-block-regions. The panel has a pair of opposing sidewalls in top-down view. The opposing sidewalls are substantially parallel to one another and have serpentine configurations along a horizontal direction.

Some embodiments include a method of forming an integrated assembly. A first stack of alternating first and second tiers is formed over a conductive structure. The first and second tiers comprise a first material and an insulative second material, respectively.
First pillar openings are formed to extend through the first stack, with the first pillar openings being arranged within a configuration which includes a first memory-block-region and a second memory-block-region. A first slit opening is formed to extend through the first stack and to be between the first and second memory-block-regions. A second stack of alternating third and fourth tiers is formed over the first stack. The third and fourth tiers comprise a third material and an insulative fourth material, respectively. Second pillar openings are formed to extend through the second stack to the first pillar openings, and a second slit opening is formed to extend through the second stack to the first slit opening. Channel-material-pillars are formed within the first and second pillar openings. The channel-material-pillars extend vertically through the first and second stacks and are electrically coupled with the conductive structure. At least some of the first and third materials are replaced with one or more conductive materials to thereby convert the first and third tiers to first and second conductive levels, respectively. A panel is formed within the first and second slit openings. The panel extends vertically through the first and second stacks.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
A method of providing a patterned conductive layer. The method includes: providing a build-up layer comprising an insulating material; laser irradiating selected portions of the build-up layer according to a predetermined pattern of the patterned conductive layer to be provided, the laser irradiating comprising using a laser beam having a photon energy higher than a bonding energy of at least some of the chemical bonds of the insulating material to yield predetermined laser-weakened portions of the build-up layer according to the predetermined pattern; removing the laser-weakened portions of the build-up layer to yield recesses according to the predetermined pattern; and filling the recesses with a conductive material to yield the patterned conductive layer.
1. A method for providing a patterned conductive layer, comprising:

providing a build-up layer comprising an insulating material;

irradiating selected portions of the build-up layer with a laser beam having a photon energy higher than a bond energy of at least some of the chemical bonds in the insulating material, to form laser-weakened portions that etch at a faster rate than non-irradiated portions of the build-up layer;

etching the laser-weakened portions at a faster rate than the non-irradiated portions to form recesses; and

filling the recesses with a conductive material to form at least a portion of the patterned conductive layer.

2. The method of claim 1, wherein the laser irradiating includes using a laser source having a photon energy between 2.00 eV and 7.00 eV.

3. The method of claim 1, wherein the laser irradiating includes using a laser source having an average laser fluence of 0.5 J/cm2 or less.

4. The method of claim 1, wherein the laser irradiating includes using a laser source having a wavelength between 150 nm and 550 nm.

5. The method of claim 1, wherein the laser irradiating includes using second and third harmonic Nd:YAG or vanadate laser devices with wavelengths of 532 nm and 355 nm, respectively.

6. The method of claim 1, wherein the laser irradiating includes using second and third harmonic Nd:YLF laser devices with wavelengths of 527 nm and 351 nm, respectively.

7. The method of claim 1, wherein the laser irradiating includes using an XeCl excimer laser device having a wavelength of 308 nm or an XeF excimer laser device having a wavelength of 354 nm.

8. The method of claim 1, wherein the insulating material and the laser beam are selected so as to obtain a predetermined depth to which the laser beam is absorbed by the insulating material.

9. The method of claim 8, wherein the depth of the patterned conductive layer is 5-15 microns.

10. The method of claim 1, wherein the laser irradiating comprises:

providing a contact mask on the build-up
layer; and

irradiating the build-up layer with laser light through the contact mask so as to irradiate the selected portions of the build-up layer.

11. The method of claim 1, wherein the laser irradiating comprises:

providing a projection mask above the build-up layer; and

irradiating the build-up layer with laser light through the projection mask so as to irradiate the selected portions of the build-up layer.

12. The method of claim 1, wherein the laser irradiating comprises using a direct laser imaging method to irradiate the selected portions of the build-up layer.

13. The method of claim 1, wherein the etching includes use of a permanganate reagent.

14. The method of claim 1, wherein the filling comprises: providing an electrolessly plated conductive seed layer on the build-up layer and in the recesses; providing an electrolytically plated conductive layer on the electrolessly plated conductive seed layer; and mechanically polishing the electrolytically plated conductive layer.

15. The method of claim 1, wherein the build-up layer includes one of an epoxy-based dielectric material, glass fiber reinforced polyimide, or bismaleimide-triazine (BT).

16. The method of claim 15, wherein the build-up layer comprises glass fiber reinforced epoxy resin.

17. The method of claim 1, wherein the conductive material comprises copper.

18. The method of claim 1, wherein the patterned conductive layer comprises a conductive via layer.
Method for providing an embedded patterned conductive layer

Field

Embodiments of the present invention generally relate to the field of patterning conductive layers for microelectronic devices, such as for high I/O density substrates.

Background

The conventional process of patterning a conductive layer, such as for a high I/O density substrate, generally involves, for example, providing an initial dielectric layer by lamination, followed by a lithography-based semi-additive process. Such processes generally involve electroless seed layer coating, dry film resist lamination, exposure, development, electrolytic metal plating, and dry film resist stripping. The resulting patterned conductive metal layer is located on top of the build-up layer.

Disadvantageously, the prior art methods of patterning conductive layers are not well suited to the shrinking feature sizes and increasing I/O density contemplated for next-generation devices. In particular, the prior art methods of patterning conductive layers are difficult to apply to line and space features of about 10 microns or less. In addition, such methods generally require a large number of processing steps and therefore long production times. The prior art does not provide a cost-effective, convenient and reliable method of providing a patterned conductive layer embedded in a dielectric material.

Brief description of the drawings

FIGS. 1a-1c show three embodiments of laser irradiation;

FIG. 2 shows a build-up layer containing laser-weakened portions according to one embodiment;

FIG. 3 shows a build-up layer including a patterned conductive layer according to one embodiment;

FIG. 4 shows the combined structure of the build-up layer and the patterned conductive layer of FIG. 3, the structure additionally including a conductive material located in the recesses of the patterned conductive layer.

For simplicity and clarity of illustration, the parts in the drawings are not necessarily drawn to scale.
For example, for clarity, the dimensions of some components are exaggerated relative to others. Where deemed appropriate, reference numerals are repeated among the figures to indicate corresponding or similar parts.

Detailed description

In the following detailed description, a method of providing a patterned conductive layer is described. Reference is made to the accompanying drawings, which show by way of example specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may exist and other structural changes may be made without departing from the scope and spirit of the invention.

As used herein, the terms "on", "above", "below", and "adjacent" refer to the position of one component relative to another. As such, a first component on, above, or below a second component may be in direct contact with the second component, or one or more intervening components may be present. In addition, a first component disposed near or adjacent to a second component may directly contact the second component, or one or more intervening components may be present. In addition, in the following description, figures and/or components may be referred to in an alternative manner; for example, when the specification states that FIG. X/Y shows part A/B, it means that FIG. X shows part A and FIG. Y shows part B. In addition, "layer" as used herein may refer to a layer made of a single material, a layer made of a mixture of multiple components, or a layer composed of individual sub-layers, with each sub-layer also having the foregoing definition.

Various aspects of this embodiment and other embodiments are described herein with reference to FIGS. 1a-3. However, the drawings should not be considered limiting, as they are intended to facilitate understanding and explanation.

Referring first to FIGS.
1a-1c, the illustrated embodiment includes laser irradiation of selected portions of a build-up layer according to a predetermined pattern. The build-up layer may comprise any known dielectric material, such as epoxy-based dielectric materials (e.g., glass fiber reinforced epoxy resin), glass fiber reinforced polyimide, or bismaleimide-triazine (BT), to name a few. The predetermined pattern according to which the build-up layer is laser irradiated in these embodiments corresponds to the predetermined pattern of the patterned conductive layer to be provided in the build-up layer. As used herein, "patterned conductive layer" refers to a plurality of layer elements including one or more conductive materials as seen in a cross-sectional side view. Thus, according to various embodiments, the patterned conductive layer may encompass, on the one hand, a conductive metallization layer (including traces, pads, and lands but not vias) and, on the other hand, a layer of conductive vias. The patterned conductive layer according to these embodiments may include a single conductive material or multiple conductive materials as the application demands.

Still referring to FIGS. 1a-1c, the build-up layer 10 may be irradiated with laser light at selected portions 12 thereof (shown by the dashed lines in FIGS. 1a-1c), the selected portions having the pattern of the patterned conductive layer to be provided. Laser irradiation may be effected using a laser source or device 14 that emits a laser beam 16 as shown. According to some embodiments, the laser source may be selected such that the photon energy of the laser beam it generates is higher than the bond energy of at least a portion of the chemical bonds present in the insulating material of the build-up layer 10. In this way, the laser beam can break these chemical bonds to form a laser-weakened portion, described further in conjunction with FIG. 2. Laser irradiation of the selected portions may be effected in any known manner.
For example, referring to FIG. 1a, according to one embodiment, laser irradiation may include providing a contact mask 18 on the build-up layer 10 and irradiating the build-up layer 10 with the laser beam 16 through the contact mask 18. Referring next to FIG. 1b, laser irradiation may include providing a projection mask 20 at a distance above the build-up layer 10 and irradiating the build-up layer 10 with laser light through the projection mask. Laser irradiation may be assisted by known projection optics 17 shown in FIG. 1b. Referring next to FIG. 1c, laser irradiation may include use of a direct laser imaging method by way of a direct laser imager 22 that uses the laser beam 16 to irradiate the build-up layer 10 at the selected portions 12.

According to one embodiment, the photon energy emitted by the laser source 14 is between about 2.00 eV and about 7.00 eV, preferably between about 2.25 eV and about 3.65 eV, in order to break at least a portion of the chemical bonds present in the insulating material of the build-up layer. In order for the laser source 14 to weaken, rather than ablate, the insulating material, the average laser fluence of the laser source may be less than or equal to about 0.5 J/cm2. The laser beam 16 may have a wavelength from the short visible region to the deep UV region (about 550 nm to about 150 nm). The laser device may include second and third harmonic Nd:YAG or vanadate lasers with wavelengths of approximately 532 nm and approximately 355 nm, respectively. Alternatively, the laser device may include second and third harmonic Nd:YLF lasers with wavelengths of about 527 nm and about 351 nm, an XeCl excimer laser with a wavelength of about 308 nm, or an XeF excimer laser with a wavelength of about 351 nm.
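The stated wavelengths can be cross-checked against the stated photon-energy window using the standard conversion E[eV] ≈ 1239.84 / λ[nm]. The following sketch is illustrative only and is not part of the claimed method:

```python
# Photon energy from wavelength: E [eV] = h*c / lambda ≈ 1239.84 / lambda[nm]
def photon_energy_ev(wavelength_nm: float) -> float:
    """Return photon energy in eV for a given wavelength in nm."""
    return 1239.84 / wavelength_nm

# Wavelengths discussed above, in nm (laser harmonics and excimer lines).
for nm in (532, 355, 527, 351, 308):
    print(f"{nm} nm -> {photon_energy_ev(nm):.2f} eV")
```

The 532 nm and 355 nm harmonics come out near 2.33 eV and 3.49 eV, inside the preferred 2.25-3.65 eV window, while the excimer lines carry somewhat higher photon energies, consistent with bond energies on the order of 1-10 eV in typical build-up dielectrics.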
According to various embodiments, the aforementioned excimer lasers are preferred because of their high pulse energies (about 100 millijoules to about 2 joules).

Most of the chemical bonds in the insulating materials of the build-up layer 10 listed above have bond energies in the range of about 1 eV to about 10 eV. Upon irradiation with a laser beam such as beam 16, the bonded atoms in the selected portions 12 absorb photons and are excited to higher energy levels. If the photon energy is higher than the bond energy, an atom that has absorbed a photon can break its chemical bond. The fraction of bonds broken by laser irradiation depends on the photon absorption cross section, the local photon intensity, and the fluence. The laser irradiation parameters, including the photon energy, may be selected according to an embodiment in order to obtain a predetermined depth to which the insulating material of the build-up layer 10 absorbs the laser beam 16. The depth of laser penetration is indicated by the dimension D in FIGS. 1a-1c. According to various embodiments, laser photons are absorbed into the build-up layer in order to weaken the selected portions 12 to the depth D. According to a preferred embodiment, the depth D may be approximately 5-15 microns.

Referring to FIG. 2, laser irradiation of the selected portions 12 results in predetermined laser-weakened portions 24 in the build-up layer 10. As shown in FIG. 2, according to various embodiments, laser irradiation of the build-up layer 10 does not ablate all of the material of the selected portions 12 (see FIGS. 1a-1c), but rather breaks at least a portion of the chemical bonds in the selected portions to form the laser-weakened portions 24.
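The depth D can be estimated from the material's absorption behavior using the standard Beer-Lambert attenuation model; the text above does not specify this model or any absorption coefficient, so the sketch below, including the numeric values, is purely a hypothetical illustration of how D might be estimated:

```python
import math

# Beer-Lambert attenuation: F(z) = F0 * exp(-alpha * z).
# Solving F(D) = F_min for D gives D = ln(F0 / F_min) / alpha.
def weakening_depth_um(f0_j_cm2: float, f_min_j_cm2: float, alpha_per_um: float) -> float:
    """Depth (um) at which the fluence decays to the minimum needed for bond breaking."""
    return math.log(f0_j_cm2 / f_min_j_cm2) / alpha_per_um

# Hypothetical values: 0.5 J/cm2 surface fluence (the stated upper bound),
# an assumed 0.1 J/cm2 weakening threshold, and an assumed absorption
# coefficient of 0.2 per micron; these are illustrative, not from the text.
print(weakening_depth_um(0.5, 0.1, 0.2))
```

With these assumed numbers the estimate lands near 8 microns, inside the preferred 5-15 micron range for D.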
The laser-weakened portions are characterized in that they etch faster than the as-deposited build-up material under the same etch chemistry and etch process parameters.

Referring next to FIG. 3, the illustrated embodiment includes removing the laser-weakened portions 24 to form a plurality of grooves 26 that present an embedded pattern corresponding to the predetermined pattern of the patterned conductive layer to be provided. Removal according to one embodiment may include etching, for example using one of the well-known desmear solutions and desmear process parameters commonly used for cleaning laser-drilled via openings after laser drilling. An example of such a desmear solution is a permanganate reagent. The etching solution may be selected such that the original build-up material is hardly etched away, whereas the laser-weakened portions are etched to a much greater extent because the chemical bonds in those portions have been weakened.

Referring to FIG. 4, the illustrated embodiment includes filling the grooves 26 with a conductive material 27 to form a patterned conductive layer 28. According to one embodiment, the surfaces of the grooves 26 are first coated with a copper seed layer, which may be formed by electroless plating, after which electrolytic copper is plated on top of the electroless copper seed layer. Thereafter, a mechanical polishing method such as CMP may be used to confine the copper to the grooved areas. Other methods of metallizing the grooves are within the knowledge of those skilled in the art. In the embodiment shown in FIG. 4, the patterned conductive layer 28 includes a conductive metallization layer (shown in cross section).

Although the patterned conductive layer of the embodiment shown in FIG.
4 shows only a conductive metallization layer as defined above, the embodiments are not so limited, and their scope includes patterned conductive layers including a plurality of conductive vias as described above. The vias may be blind vias or through vias as the application demands. In that case, the laser irradiation may be selected to weaken the build-up material to a depth greater than the depth associated with the conductive metallization layer.

Advantageously, the various embodiments provide a method of providing a patterned conductive layer, such as a conductive metallization layer or a conductive via layer, that does not use lithography with its dry film resist lamination, exposure, development, and stripping; the lithography process is replaced by one requiring only laser irradiation and chemical etching. In addition, the embodiments advantageously produce embedded metal features in the build-up layer, which allows finer line and space features than prior art processes, such as line and space features smaller than about 10 microns. Further, the embodiments advantageously require a laser intensity and fluence much lower than those of a simple laser ablation process (approximately 2-10 times lower depending on the build-up material); for a given laser budget, this translates into the ability to cover a much larger area. In addition, the chemical etching of the laser-weakened portions according to an embodiment can also serve as the surface cleaning and roughening of the build-up surface that the prior art requires as a separate process. Thus, the embodiments do not increase the number of processing steps relative to the prior art, but reduce them. Moreover, the various embodiments can advantageously be used for patterning vias, lines, and space features.
Compared with prior art laser via and lithographic patterning techniques, alignment accuracy is improved. One problem with prior art build-up processes is that laser-drilled via alignment and lithographic feature alignment affect each other, with laser alignment being the limiting factor for build-up alignment. This constraint can be overcome by using a unified patterning technique for both the vias and the conductive layer.

The above-described embodiments have been presented by way of example, not limitation. Although detailed embodiments of the present invention have been described, it should be understood that the present invention, as defined by the appended claims, is not limited to the specific details set forth in the preceding description, as many variations are possible without departing from the spirit and scope of the invention.
Systems, methods, and apparatus for data communication are provided. An apparatus may be configured to generate a mask field in a packet to be transmitted through an interface to a slave device, the mask field having a first number of bits, provide a control-bit field in the packet, the control-bit field having a second number of bits, where the second number of bits is less than the first number of bits, and transmit the packet through the interface. The packet may be addressed to a control register of the slave device. The control register may have the first number of bits. Each bit in the control-bit field may correspond to a bit of the control register that is identified by the mask field.
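The master-side packet construction summarized above can be sketched as follows. The framing (device address, command, register address bytes) is omitted, and the ordering of control bits relative to mask positions (most-significant masked position first) and the one-byte mask width are assumptions for illustration, not details taken from this disclosure:

```python
def build_fm_pbf_payload(mask: int, control_bits: list[int], register_width: int = 8) -> bytes:
    """Build the full mask plus the compressed (partial) control-bit field.

    The mask has `register_width` bits (the 'full mask'); the control-bit field
    carries only as many bits as there are 1s in the mask (the 'partial bit field').
    Control bits are assumed to be listed from the most-significant masked
    position downward.
    """
    n_selected = bin(mask).count("1")
    if len(control_bits) != n_selected:
        raise ValueError("need exactly one control bit per mask bit set to 1")
    # Pack the control bits MSB-first into the fewest whole bytes.
    field = 0
    for bit in control_bits:
        field = (field << 1) | (bit & 1)
    n_bytes = max(1, (n_selected + 7) // 8)
    return bytes([mask]) + field.to_bytes(n_bytes, "big")

# Select register bit positions 5 and 2 (mask 0b00100100) and send values 1 and 0.
payload = build_fm_pbf_payload(0b00100100, [1, 0])
```

Because the control-bit field shrinks to the number of selected bits, the payload here is two bytes regardless of how few register bits are actually being written, which is the source of the latency saving discussed in the description.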
CLAIMS

1. A method performed at a device operating as a bus master, comprising:

generating a mask field in a packet to be transmitted through an interface to a slave device, the mask field having a first number of bits;

providing a control-bit field in the packet, the control-bit field having a second number of bits, where the second number of bits is less than the first number of bits, wherein the mask field is generated to identify at least one bit location in a control register of the slave device in which at least one bit of the control-bit field is to be written by providing a first bit value in each bit location of the mask field that corresponds to a bit location in the control register in which a bit of the control-bit field is to be written; and

transmitting the packet through the interface, wherein the packet is addressed to the control register of the slave device, the control register having the first number of bits, wherein each bit in the control-bit field corresponds to a bit location in the control register that is identified by the mask field.

2. The method of claim 1, wherein the mask field is further generated by providing a second bit value in each bit location of the mask field that does not correspond to a bit location in the control register in which a bit of the control-bit field is to be written.

3. The method of claim 1, wherein: positions of bit locations in the mask field are independent of positions of bit locations in the control-bit field; and positions of bit locations in the control register are independent of the positions of bit locations in the control-bit field.

4. The method of claim 1, wherein positions of bit locations in the mask field directly correspond to positions of bit locations in the control register.

5. The method of claim 1, wherein the interface is a radio frequency front end (RFFE) interface.

6.
The method of claim 1, wherein the slave device is configured to perform one or more functions of a radio frequency (RF) front end.

7. The method of claim 1, wherein the interface is an I3C interface.

8. A bus master apparatus, comprising:

an interface circuit configured to couple the bus master apparatus to a serial bus; and

a processing circuit configured to:

generate a mask field in a packet to be transmitted through the interface circuit to a slave device, the mask field having a first number of bits,

provide a control-bit field in the packet, the control-bit field having a second number of bits, where the second number of bits is less than the first number of bits, wherein the mask field is generated to identify at least one bit location in a control register of the slave device in which at least one bit of the control-bit field is to be written by providing a first bit value in each bit location of the mask field that corresponds to a bit location in the control register in which a bit of the control-bit field is to be written, and

transmit the packet through the interface circuit, wherein the packet is addressed to the control register of the slave device, the control register having the first number of bits, wherein each bit in the control-bit field corresponds to a bit location in the control register that is identified by the mask field.

9. The apparatus of claim 8, wherein the processing circuit configured to generate the mask field is further configured to: provide a second bit value in each bit location of the mask field that does not correspond to a bit location in the control register in which a bit of the control-bit field is to be written.

10. The apparatus of claim 8, wherein: positions of bit locations in the mask field are independent of positions of bit locations in the control-bit field; and positions of bit locations in the control register are independent of the positions of bit locations in the control-bit field.

11.
The apparatus of claim 8, wherein positions of bit locations in the mask field directly correspond to positions of bit locations in the control register.

12. The apparatus of claim 8, wherein the interface circuit is configured to operate as a radio frequency front end (RFFE) interface, and wherein the slave device is configured to perform one or more functions of a radio frequency (RF) front end.

13. The apparatus of claim 8, wherein the interface circuit is configured to operate as an I3C interface.

14. A method performed at a slave device coupled to a bus, comprising:

receiving a packet from the bus, wherein the packet is addressed to a control register of the slave device and includes a mask field and a control-bit field, the mask field having a greater number of bits than the control-bit field;

identifying at least one bit in the mask field having a first value;

detecting at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value;

obtaining a load value to write to the control register based on the at least one bit in the control-bit field; and

writing the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field.

15. The method of claim 14, wherein:

obtaining the load value includes: reading the control register to obtain an initial value of the control register, and merging the at least one bit in the control-bit field with the initial value of the control register to obtain a merged value; and

writing the load value to the control register includes: writing the merged value to the control register such that each bit location in the control register identified by the mask field as corresponding to the associated bit in the control-bit field is merged with the associated bit in the control-bit field.

16.
The method of claim 15, wherein only bits in the control register identified by the mask field as corresponding to bits in the control-bit field are affected by writing the merged value to the control register.

17. The method of claim 15, wherein obtaining the load value further includes:

writing each bit in the control-bit field to a masking word at a bit location identified by the first value in a corresponding bit location of the mask field;

writing a predefined masking bit value to each bit location in the masking word identified by a second value in a corresponding bit location of the mask field; and

merging the at least one bit in the control-bit field with the initial value of the control register using the masking word.

18. The method of claim 17, wherein merging the at least one bit in the control-bit field with the initial value of the control register includes: performing a logic AND operation between the initial value of the control register and the masking word to generate the merged value.

19. The method of claim 17, wherein merging the at least one bit in the control-bit field with the initial value of the control register includes: performing a logic OR operation between the initial value of the control register and the masking word to generate the merged value.

20. The method of claim 14, wherein: positions of bit locations in the mask field are independent of positions of bit locations in the control-bit field; and positions of bit locations in the control register are independent of the positions of bit locations in the control-bit field.

21. The method of claim 14, wherein positions of bit locations in the mask field directly correspond to positions of bit locations in the control register.

22. The method of claim 14, wherein the bus is a radio frequency front end (RFFE) bus, and wherein the slave device is configured to perform one or more functions of a radio frequency (RF) front end.

23. The method of claim 14, wherein the bus is an I3C bus.

24.
A slave device, comprising:

an interface circuit configured to couple the slave device to a serial bus; and

a processing circuit configured to:

receive a packet from the serial bus via the interface circuit, wherein the packet is addressed to a control register of the slave device and includes a mask field and a control-bit field, the mask field having a greater number of bits than the control-bit field,

identify at least one bit in the mask field having a first value,

detect at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value,

obtain a load value to write to the control register based on the at least one bit in the control-bit field, and

write the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field.

25. The slave device of claim 24, wherein:

the processing circuit configured to obtain the load value is configured to: read the control register to obtain an initial value of the control register, and merge the at least one bit in the control-bit field with the initial value of the control register to obtain a merged value; and

the processing circuit configured to write the load value to the control register is configured to: write the merged value to the control register such that each bit location in the control register identified by the mask field as corresponding to the associated bit in the control-bit field is merged with the associated bit in the control-bit field.

26. The slave device of claim 25, wherein only bits in the control register identified by the mask field as corresponding to bits in the control-bit field are affected by writing the merged value to the control register.

27.
The slave device of claim 25, wherein the processing circuit configured to obtain the load value is further configured to:

write each bit in the control-bit field to a masking word at a bit location identified by the first value in a corresponding bit location of the mask field;

write a predefined masking bit value to each bit location in the masking word identified by a second value in a corresponding bit location of the mask field; and

perform a logic AND operation or a logic OR operation between the initial value of the control register and the masking word to generate the merged value.

28. The slave device of claim 24, wherein:

positions of bit locations in the mask field are independent of positions of bit locations in the control-bit field;

positions of bit locations in the control register are independent of the positions of bit locations in the control-bit field; and

the positions of bit locations in the mask field directly correspond to the positions of bit locations in the control register.

29. The slave device of claim 24, wherein the serial bus is a radio frequency front end (RFFE) bus, and wherein the slave device is adapted to perform one or more functions of a radio frequency (RF) front end.

30. The slave device of claim 24, wherein the serial bus is an I3C bus.
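The slave-side processing described in claims 14-19 (expand the compressed control bits to the masked register positions, then merge with the register's initial value) can be sketched as follows. This is an illustrative model only, not an implementation of any particular RFFE or I3C device; the ordering of control bits relative to mask positions is an assumption, since the claims leave it open:

```python
def apply_masked_write(register: int, mask: int, control_field: int, width: int = 8) -> int:
    """Merge compressed control bits into a register at positions selected by the mask.

    Control bits are assigned to masked positions from the most-significant
    selected position downward (an assumed ordering for illustration).
    """
    selected = [i for i in range(width - 1, -1, -1) if (mask >> i) & 1]
    n = len(selected)
    merged = register  # initial value read from the control register
    for k, pos in enumerate(selected):
        bit = (control_field >> (n - 1 - k)) & 1
        if bit:
            merged |= (1 << pos)   # OR-style merge sets the selected bit
        else:
            merged &= ~(1 << pos)  # AND-style merge clears the selected bit
    return merged

# Initial register 0b11110000; mask 0b00100100 selects bits 5 and 2;
# control field 0b01 writes 0 to bit 5 and 1 to bit 2.
new_value = apply_masked_write(0b11110000, 0b00100100, 0b01)
```

Only the masked bit locations change; every unselected bit retains its initial value, matching the behavior required by claims 16 and 26.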
FULL-MASK PARTIAL-BIT-FIELD (FM-PBF) TECHNIQUE FOR LATENCY-SENSITIVE MASKED-WRITE

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to and the benefit of Provisional Application No. 62/259,543 filed in the U.S. Patent and Trademark Office on November 24, 2015, and Non-Provisional Application No. 15/346,602 filed in the U.S. Patent and Trademark Office on November 8, 2016, the entire contents of which are incorporated herein by reference.

BACKGROUND

Field

[0002] The present disclosure relates generally to communication devices, and more particularly, to communications links connecting integrated circuit devices within an apparatus.

Background

[0003] Serial interfaces have become the preferred method for digital communication between integrated circuit (IC) devices in various apparatus. For example, mobile communications equipment may perform certain functions and provide capabilities using IC devices that include radio frequency transceivers, cameras, display systems, user interfaces, controllers, storage, and the like. General-purpose serial interfaces known in the industry include the Inter-Integrated Circuit (I2C or I²C) serial bus and its derivatives and alternatives, including interfaces defined by the Mobile Industry Processor Interface (MIPI) Alliance, such as I3C and the Radio Frequency Front End (RFFE) interface.

[0004] In one example, the I2C serial bus is a serial single-ended computer bus that was intended for use in connecting low-speed peripherals to a processor. Some interfaces provide multi-master buses in which two or more devices can serve as a bus master for different messages transmitted on the serial bus. In another example, the RFFE interface defines a communication interface for controlling various radio frequency (RF) front end devices, including power amplifiers (PAs), low-noise amplifiers (LNAs), antenna tuners, filters, sensors, power management devices, switches, etc.
These devices may be collocated in a single integrated circuit (IC) or provided in multiple IC devices. In a mobile communications device, multiple antennas and radio transceivers may support multiple concurrent RF links. Certain functions can be shared among the front end devices, and the RFFE interface enables concurrent and/or parallel operation of transceivers using multi-master, multi-slave configurations.

[0005] As the demand for improved communications between devices continues to increase, there exists a need for improvements in protocols and methods for managing the interfaces between RF front end devices.

SUMMARY

[0006] Certain aspects of the disclosure relate to systems, apparatus, methods and techniques for implementing and managing digital communication interfaces that may be used between IC devices in various apparatus.

[0007] In various aspects of the disclosure, a method performed by a device operating as a bus master may include generating a mask field in a packet to be transmitted through an interface to a slave device, the mask field having a first number of bits, providing a control-bit field in the packet, the control-bit field having a second number of bits, where the second number of bits is less than the first number of bits, wherein the mask field is generated to identify at least one bit location in a control register of the slave device in which at least one bit of the control-bit field is to be written by providing a first bit value in each bit location of the mask field that corresponds to a bit location in the control register in which a bit of the control-bit field is to be written, and transmitting the packet through the interface. The packet may be addressed to the control register of the slave device. The control register may have the first number of bits.
Each bit in the control-bit field may correspond to a bit location in the control register that is identified by the mask field.

[0008] In various aspects of the disclosure, an apparatus may be adapted to generate a mask field in a packet to be transmitted through an interface to a slave device, the mask field having a first number of bits, provide a control-bit field in the packet, the control-bit field having a second number of bits, where the second number of bits is less than the first number of bits, wherein the mask field is generated to identify at least one bit location in a control register of the slave device in which at least one bit of the control-bit field is to be written by providing a first bit value in each bit location of the mask field that corresponds to a bit location in the control register in which a bit of the control-bit field is to be written, and transmit the packet through the interface. The packet may be addressed to the control register of the slave device. The control register may have the first number of bits.
Each bit in the control-bit field may correspond to a bit location in the control register that is identified by the mask field.

[0009] In various aspects of the disclosure, an apparatus may have means for generating a mask field in a packet to be transmitted through an interface to a slave device, the mask field having a first number of bits, means for providing a control-bit field in the packet, the control-bit field having a second number of bits, where the second number of bits is less than the first number of bits, wherein the mask field is generated to identify at least one bit location in a control register of the slave device in which at least one bit of the control-bit field is to be written by providing a first bit value in each bit location of the mask field that corresponds to a bit location in the control register in which a bit of the control-bit field is to be written, and means for transmitting the packet through the interface. The packet may be addressed to the control register of the slave device. The control register may have the first number of bits. Each bit in the control-bit field may correspond to a bit location in the control register that is identified by the mask field.

[0010] In various aspects of the disclosure, a processor readable storage medium is disclosed.
The storage medium may be a non-transitory storage medium and may store code that, when executed by one or more processors, causes the one or more processors to generate a mask field in a packet to be transmitted through an interface to a slave device, the mask field having a first number of bits, provide a control-bit field in the packet, the control-bit field having a second number of bits, where the second number of bits is less than the first number of bits, wherein the mask field is generated to identify at least one bit location in a control register of the slave device in which at least one bit of the control-bit field is to be written by providing a first bit value in each bit location of the mask field that corresponds to a bit location in the control register in which a bit of the control-bit field is to be written, and transmit the packet through the interface. The packet may be addressed to the control register of the slave device. The control register may have the first number of bits.
Each bit in the control-bit field may correspond to a bit location in the control register that is identified by the mask field.

[0011] In various aspects of the disclosure, a method performed by a slave device coupled to a bus may include receiving a packet from the bus, where the packet is addressed to a control register of the slave device and includes a mask field and a control-bit field, the mask field having a greater number of bits than the control-bit field, identifying at least one bit in the mask field having a first value, detecting at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value, obtaining a load value to write to the control register based on the at least one bit in the control-bit field, and writing the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field.

[0012] In various aspects of the disclosure, an apparatus may be adapted to receive a packet from the bus, where the packet is addressed to a control register of the slave device and includes a mask field and a control-bit field, the mask field having a greater number of bits than the control-bit field, identify at least one bit in the mask field having a first value, detect at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value, obtain a load value to write to the control register based on the at least one bit in the control-bit field, and write the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field.

[0013] In various aspects of the disclosure, an apparatus may have means for receiving a packet from the bus, where the packet is addressed to a control register of the slave device and includes a mask field and a control-bit field, the mask field having a greater number of bits than the control-bit field, means for identifying at least one bit in the mask field having a first value, means for detecting at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value, means for obtaining a load value to write to the control register based on the at least one bit in the control-bit field, and means for writing the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field.

[0014] In an aspect of the disclosure, a processor readable storage medium is disclosed. The storage medium may be a non-transitory storage medium and may store code that, when executed by one or more processors, causes the one or more processors to receive a packet from the bus, where the packet is addressed to a control register of the slave device and includes a mask field and a control-bit field, the mask field having a greater number of bits than the control-bit field, identify at least one bit in the mask field having a first value, detect at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value, obtain a load value to write to the control register based on the at least one bit in the control-bit field, and write the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] FIG.
1 depicts an apparatus that includes an RF front end and that may be adapted according to certain aspects disclosed herein.[0016] FIG. 2 is a block diagram illustrating a device that employs an RFFE bus to couple various front end devices.[0017] FIG. 3 is a diagram that illustrates an example of a system architecture for an apparatus employing a data link between IC devices according to certain aspects disclosed herein.[0018] FIG. 4 is a diagram illustrating an example of an apparatus in which masked-write operations may be used.[0019] FIG. 5 illustrates a packet that may be transmitted to write control bits to the register in the slave device of FIG. 4.[0020] FIG. 6 illustrates examples of full-mask partial-bit-field (FM-PBF) packets in accordance with certain aspects disclosed herein.[0021] FIG. 7 illustrates FM-PBF write packets that cause a single bit to be written to a control register in accordance with certain aspects disclosed herein. [0022] FIG. 8 illustrates FM-PBF write packets that cause multiple bits to be written to a control register in accordance with certain aspects disclosed herein.[0023] FIG. 9 illustrates an example of FM-PBF write packet processing in accordance with certain aspects disclosed herein.[0024] FIG. 10 illustrates reductions in latency obtained from FM-PBF writes performed in accordance with certain aspects disclosed herein.[0025] FIG. 11 is a block diagram illustrating an example of an apparatus employing a processing circuit that may be adapted according to certain aspects disclosed herein.[0026] FIG. 12 is a flow chart of a method of data communication performed at a bus master device adapted in accordance with certain aspects disclosed herein.[0027] FIG. 13 is a diagram illustrating an example of a hardware implementation for a transmitting apparatus employing a processing circuit adapted according to certain aspects disclosed herein.[0028] FIG.
14 is a flow chart of a method of data communication performed at a slave device adapted in accordance with certain aspects disclosed herein.[0029] FIG. 15 is a diagram illustrating an example of a hardware implementation for a receiving apparatus employing a processing circuit adapted according to certain aspects disclosed herein.DETAILED DESCRIPTION[0030] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.[0031] Several aspects of telecommunication systems will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as "elements"). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.Example of an Apparatus with Multiple IC Device Subcomponents[0032] Certain aspects of the invention may be applicable to communications links deployed between electronic devices that include subcomponents of an apparatus such as a telephone, a mobile computing device, an appliance, automobile electronics, avionics systems, etc. FIG.
1 depicts an apparatus 100 that may employ a communication link between IC devices. In one example, the apparatus 100 may be a communication device. The apparatus 100 may include a processing circuit 102 having two or more IC devices 104, 106 that may be coupled using a first communication link. One IC device may include a radio frequency (RF) front end 106 that may be operated to enable the apparatus 100 to communicate through one or more antennas 108 with a radio access network, a core access network, the Internet and/or another network. The RF front end 106 may include a plurality of devices coupled by a second communication link, which may include a radio frequency front end (RFFE) bus.[0033] The processing circuit 102 may include one or more application-specific IC (ASIC) devices. An IC device 104 may include and/or be coupled to one or more processing devices 112, logic circuits, one or more modems 110, and processor readable storage such as a memory device 114 that may maintain instructions and data that may be executed by a processor on the processing circuit 102. The processing circuit 102 may be controlled by one or more of an operating system and an application programming interface (API) layer that supports and enables execution of software modules residing in storage media. The memory device 114 may include read-only memory (ROM) or random-access memory (RAM), electrically erasable programmable ROM (EEPROM), flash cards, or any memory device that can be used in processing systems and computing platforms. The processing circuit 102 may include or have access to a local database or parameter storage that can maintain operational parameters and other information used to configure and operate the apparatus 100. The local database may be implemented using one or more of a database module, flash memory, magnetic media, EEPROM, optical media, tape, soft or hard disk, or the like.
The processing circuit 102 may also be operably coupled to external devices such as the antennas 108, a display 120, and operator controls, such as a button 124 and/or an integrated or external keypad 122, among other components.Overview of the RFFE Bus[0034] FIG. 2 is a block diagram 200 illustrating an example of a device 202 that employs an RFFE bus 208 to couple various front end devices 212-217. A modem 204 may also be coupled to the RFFE bus 208. The modem 204 may communicate with a baseband processor 206. The illustrated device 202 may be embodied in one or more of a mobile device, a mobile telephone, a mobile computing system, a telephone, a notebook computer, a tablet computing device, a media player, a gaming device, a wearable computing and/or communications device, an appliance, or the like. In various examples, the device 202 may be implemented with one or more baseband processors 206, modems 204, multiple communications links 208, 220, and various other buses, devices and/or different functionalities.[0035] In the example illustrated in FIG. 2, the RFFE bus 208 may be coupled to an RF integrated circuit (RFIC) 212, which may include one or more controllers and/or processors that configure and control certain aspects of the RF front end. The RFFE bus 208 may couple the RFIC 212 to a switch 213, an RF tuner 214, a power amplifier (PA) 215, a low noise amplifier (LNA) 216, and a power management module 217.[0036] FIG. 3 is a block schematic diagram illustrating an example of an architecture for a device 300 that may employ an RFFE bus 330 to connect bus master devices 320i-320N and slave devices 302 and 322i-322N. The RFFE bus 330 may be configured according to application needs, and access to multiple buses 330 may be provided to certain of the devices 320i-320N, 302, and 322i-322N.
In operation, one of the bus master devices 320i-320N may gain control of the bus and transmit a slave identifier (slave address) to identify one of the slave devices 302 and 322i-322N to engage in a communication transaction. Bus master devices 320i-320N may read data and/or status from slave devices 302 and 322i-322N, and may write data to memory or may configure the slave devices 302 and 322i-322N. Configuration may involve writing to one or more registers or other storage on the slave devices 302 and 322i-322N. [0037] In the example illustrated in FIG. 3, a first slave device 302 coupled to the RFFE bus 330 may respond to one or more bus master devices 320i-320N, which may read data from, or write data to, the first slave device 302. In one example, the first slave device 302 may include or control a power amplifier (see the PA 215 in FIG. 2), and one or more bus master devices 320i-320N may from time to time configure a gain setting at the first slave device 302.[0038] The first slave device 302 may include configuration registers 306 and/or other storage devices 324, a processing circuit and/or control logic 312, a transceiver 310 and a number of line driver/receiver circuits 314a, 314b as needed to couple the first slave device 302 to the RFFE bus 330 (e.g., via a serial clock line 316 and a serial data line 318). The processing circuit and/or control logic 312 may include a processor such as a state machine, sequencer, signal processor or general-purpose processor. The transceiver 310 may include one or more receivers 310a, one or more transmitters 310c and certain common circuits 310b, including timing, logic and storage circuits and/or devices. In some instances, the transceiver 310 may include encoders and decoders, clock and data recovery circuits, and the like.
A transmit clock (TXCLK) signal 328 may be provided to the transmitter 310c, where the TXCLK signal 328 can be used to determine data transmission rates.[0039] The RFFE bus 330 is typically implemented as a serial bus in which data is converted from parallel to serial form by a transmitter, which transmits the encoded data as a serial bitstream. A receiver processes the received serial bitstream using a serial-to-parallel converter to deserialize the data.Masked-Writes on a Shared Bus[0040] Certain aspects disclosed herein relate to masked-write operations that may be used in certain applications where low-latency responses are desired, and/or where a single resource may be written, modified, or otherwise addressed by multiple bus masters. FIG. 4 illustrates an example of an apparatus 400 in which a masked-write operation may be used. The apparatus 400 may be provided in a mobile communications device, for example, and may include two or more bus master devices 402, 404 and at least one slave device 406 communicatively coupled by a serial bus 408. The serial bus may be an I2C bus, a camera control interface (CCI) bus, an I3C bus, an RFFE bus, or any other bus suited to the application and function of the apparatus 400.[0041] In one example, the serial bus 408 conforms or complies with MIPI Alliance specifications for an RFFE bus. The bus master devices 402, 404 may include a modem, application processor or controller. In the example, the slave device 406 may be a power amplifier, although the principles disclosed herein apply to other types of slave devices. The slave device 406 may include a processor 416, a memory device 414, and one or more functional circuits or modules 412. In the example of a power amplifier, the functional circuits or modules 412 may include a gain control circuit.
The slave device 406 may be configurable using configuration circuits and modules 412, which may include parameter storage including a control register 420 that is writable and/or readable by the bus master devices 402, 404 through the serial bus 408. In some instances, the control register 420 may be an 8-bit register (b0-b7), where a first group of bits 422 is configured only by the first bus master device 402, a second group of bits 424 is configured only by the second bus master device 404, and a third group of bits 426 includes unused bits or bits configured by the first bus master device 402 and the second bus master device 404. Masked-write operations may be used to permit the first bus master device 402 to write the first group of bits 422 without affecting other bits 424, 426 in the control register 420, and to permit the second bus master device 404 to write the second group of bits 424 without affecting other bits 422, 426 in the control register 420.[0042] FIG. 5 illustrates a packet 500 that may be transmitted to write control bits to the control register 420 in the slave device 406, for example. The packet 500 includes two 8-bit fields 502, 504. The mask field 502 has the same width as the control register 420 and indicates the bits to be written or modified in response to the packet 500. In one example, a bit in the mask field 502 with the value '1' indicates a bit location in the control register 420 that is to be written or modified in response to the packet 500, and a bit in the mask field 502 with the value '0' indicates a bit location in the control register 420 that is to be unaffected by the response to the packet 500. The control-bit field 504 has the same width as the control register 420 and carries the values to be written to the corresponding bit locations in the control register 420.[0043] With reference to the data flow diagram 510 in FIG.
5, each of the bits in the mask field 502 is used by a gating function 512 that operates on the corresponding bit in the control-bit field 504 to produce a bit update value 514 that selectively modifies a corresponding bit in the control register 420. In one example, the gating function 512 controlled by a bit of the mask field 502 may prevent the latching of a corresponding input of the control register 420, causing the control register 420 to ignore a write operation for the affected bit. In another example, the gating function 512 controlled by a bit of the mask field 502 may select between a current bit value stored in the control register 420 and a corresponding bit value on the control-bit field 504 during a masked-write operation. In the example, 8-bit mask fields 502 and 8-bit control-bit fields 504 are used to perform masked-write operations on an 8-bit control register 420. That is, a 16-bit transmission is required whether 1 bit is modified or 8 bits are modified.Full-Mask Partial-Bit-Field Writes[0044] An apparatus in accordance with certain aspects disclosed herein may employ a modified masked-write operation that can provide decreased latency. In one example, a full-mask partial-bit-field (FM-PBF) masked-write operation can decrease the number of bits transmitted in a write operation that is used to configure or control the operation of a slave device 406.[0045] With reference to FIG. 6, the FM-PBF write packet 600 uses a fixed, full-length mask field 602 having a first number of bits while providing a control-bit field 604 that includes only the bits (a second number of bits) that affect the control register 420. Accordingly, the length of the control-bit field 604 varies based on the nature of the masked-write to be performed. FIG. 6 illustrates the different configurations 606-613 of an FM-PBF write packet 600 that may be transmitted over the bus.
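The conventional masked-write gating described above reduces to a read-modify-write over the full register width. A minimal Python sketch (the function name and default width are illustrative, not taken from the specification):

```python
def masked_write(register: int, mask: int, control: int, width: int = 8) -> int:
    """Conventional masked-write per FIG. 5: bit locations where the mask
    holds '1' take the corresponding control-field bit; bit locations where
    the mask holds '0' keep their current value."""
    full = (1 << width) - 1
    return (register & ~mask & full) | (control & mask)

# Writing '1' to b3 without disturbing the other bits:
reg = masked_write(0b1010_0000, mask=0b0000_1000, control=0b0000_1000)
# reg == 0b1010_1000; the 8-bit mask plus 8-bit control field make a
# 16-bit payload whether 1 bit or all 8 bits are modified.
```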
The latency reduction obtained using the FM-PBF write packet 600 is a function of the number of bits to be modified. The first configuration 606 may be used to write all 8 bits of the control register 420, with the other configurations 607-613 being used to write fewer than 8 bits of the control register 420. The first configuration 606 may use a 16-bit transmission to write all 8 bits of the control register 420, while the eighth configuration 613 uses a 9-bit transmission to write 1 bit of the control register 420.[0046] In an FM-PBF write packet 600, the value of a bit in the mask field 602 at a given mask-bit location implies the presence or absence of a corresponding control-bit in the control-bit field 604. The meaning of the bit in the mask field 602 may be expressed as follows: MF-Dx = 1 => CF-Dx is available in the masked-write packet 600; MF-Dx = 0 => CF-Dx is not available in the masked-write packet 600. When a CF-Dx bit is not available in the masked-write packet 600, the corresponding bit in the control register 420 is unaffected by the execution of the masked-write operation. When a CF-Dx bit is available, a value ('0' or '1') may be written to the corresponding bit location of the control register 420.[0047] Certain aspects of the FM-PBF write technique may find application in low-latency environments, including in RF front ends.Examples of Full-Mask Partial-Bit-Field Writes[0048] FIGs. 7 and 8 illustrate examples of FM-PBF write operations. FIG. 7 illustrates FM-PBF write packets 700, 720 that cause a single bit to be written to a control register 710. In a first FM-PBF write packet 700, a '1' is present in the MF-D3 bit-location 712 of the mask field 702, indicating the availability of a control bit 714 in the control-bit field 704. In this example, a single bit, set to a '1' value, is provided in the control-bit field 704.
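The mask-implies-presence rule can be sketched as a packing routine. In the sketch below, the function name and the MSB-first ordering of the packed control bits are assumptions (the ordering is chosen for consistency with the MSB-to-LSB parse direction of FIG. 9):

```python
def encode_fm_pbf(bits: dict[int, int], width: int = 8) -> tuple[int, list[int]]:
    """Build an FM-PBF payload from {bit_location: value} pairs.  The
    full-width mask marks the register bits to be written (MF-Dx = 1);
    the control-bit field carries only those values, MSB-first."""
    mask = 0
    control_bits = []
    for pos in range(width - 1, -1, -1):   # MSB to LSB
        if pos in bits:
            mask |= 1 << pos
            control_bits.append(bits[pos] & 1)
    return mask, control_bits

# Single-bit write of '1' to D3 (the first packet of FIG. 7):
mask, ctrl = encode_fm_pbf({3: 1})
# mask == 0b0000_1000 and ctrl == [1]: 9 bits on the wire instead of 16.
```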
The remaining bits of the mask field 702 are set to '0' indicating that no other bits are available in the control-bit field 704 for writing to the control register 710. An FM-PBF write executed in response to the first FM-PBF write packet 700 causes a '1' 716 to be written to the R3 bit of the control register 710.[0049] In a second FM-PBF write packet 720, a '1' is present in the MF-D3 bit-location 712 of the mask field 722, indicating the availability of a control bit 732 in the control-bit field 724. In this example, a single bit, set to a '0' value, is provided in the control-bit field 724. The remaining bits of the mask field 722 are set to '0' indicating that no other bits are available in the control-bit field 724 for writing to the control register 710. An FM-PBF write executed in response to the second FM-PBF write packet 720 causes a '0' 734 to be written to the R3 bit of the control register 710.[0050] FIG. 8 illustrates FM-PBF write packets 800, 820 that cause multiple bits to be written to a control register 810, 830. In a first FM-PBF write packet 800, a '1' is present in the MF-D6 bit-location 812a and the MF-D3 bit-location 812b of the mask field 802, indicating the availability of two control bits 814a, 814b in the control-bit field 804. In this example, the first control bit 814a is set to a '1' value and the second control bit 814b is set to a '0' value. The remaining bits of the mask field 802 are set to '0' indicating that no other bits are available in the control-bit field 804 for writing to the control register 810.
An FM-PBF write executed in response to the first FM-PBF write packet 800 causes a '1' 816a to be written to the R6 bit of the control register 810 and a '0' 816b to be written to the R3 bit of the control register 810.[0051] In a second FM-PBF write packet 820, a '1' is present in the MF-D7 bit-location 832a, the MF-D2 bit-location 832b, and the MF-D0 bit-location 832c of the mask field 822, indicating the availability of three control bits 834a, 834b, 834c in the control-bit field 824. In this example, the first control bit 834a is set to a '1' value, the second control bit 834b is set to a '1' value, and the third control bit 834c is set to a '0' value. The remaining bits of the mask field 822 are set to '0' indicating that no other bits are available in the control-bit field 824 for writing to the control register 830. An FM-PBF write executed in response to the second FM-PBF write packet 820 causes a '1' 836a to be written to the R7 bit of the control register 830, a '1' 836b to be written to the R2 bit of the control register 830 and a '0' 836c to be written to the R0 bit of the control register 830.[0052] With reference again to the example illustrated in FIG. 4, a bus master 402 or 404 may wish to configure a slave device 406 that operates as an amplifier. The bus master 402 or 404 may send a sequence of FM-PBF write packets 700, 720, and/or 820 to a control register of the slave device 406 to configure the operation of the amplifier. A first FM-PBF write packet 720 may cause a '0' value to be written into an enable field of the control register, thereby disabling the amplifier during configuration. A second FM-PBF write packet 820 may cause the gain of the amplifier to be adjusted, and a third FM-PBF write packet 700 may be sent after a delay to cause a '1' value to be written into the enable field of the control register, thereby enabling the amplifier.
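The disable-configure-enable sequence just described might be scripted as follows; the bit positions, gain values, and transmit helper below are illustrative assumptions, not taken from the specification:

```python
import time

log = []  # records each packet for illustration

def send_fm_pbf(bits):
    """Stand-in for transmitting an FM-PBF write packet addressed to the
    amplifier's control register; 'bits' maps bit locations to values."""
    log.append(dict(bits))

ENABLE_BIT = 7            # assumed location of the enable field
GAIN_BITS = {2: 1, 0: 0}  # assumed gain-control locations and values

send_fm_pbf({ENABLE_BIT: 0})  # disable the amplifier during configuration
send_fm_pbf(GAIN_BITS)        # adjust the gain
time.sleep(0.001)             # let the amplifier stabilize before enabling
send_fm_pbf({ENABLE_BIT: 1})  # re-enable the amplifier
```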
The delay between the second and third FM-PBF write packets 820, 700 may be provided to allow the amplifier to stabilize after the gain has been changed.Examples of Processing Full-Mask Partial-Bit-Field Write Packets[0053] FIG. 9 illustrates an example of FM-PBF write packet processing 900 using one example of a process 920 in accordance with certain aspects disclosed herein. In the example, the FM-PBF write packet has a mask field 902 set to a '00100010' value and a two-bit control-bit field 904 set to '01'. A load value register 906 may be used to build a value to be written into a target control register to which the FM-PBF write packet is addressed. As shown in block 922, the mask field 902 of the FM-PBF write packet is parsed in a direction 908 from most-significant bit (MSB) to least-significant bit (LSB). Parsing may include examining a current bit to determine whether a corresponding bit is provided in the control-bit field 904. In the example, two bits 912, 914 of the mask field 902 are set to '1', indicating the presence of corresponding bits in the control-bit field 904. When the current bit of the mask field 902 is set (set to '1'), the next value in the control-bit field may be shifted out and stored in the current bit location of the load value register 906 (corresponding to the current bit location in the mask field 902). When the current bit of the mask field 902 is cleared (set to '0'), a '1' value may be stored in the current bit location of the load value register 906. The current bit of the mask field 902 and the load value register 906 may then be advanced.[0054] As shown in block 924, the current target control register content (TCRC) may be read when parsing of the mask field 902 has been completed. At block 926, the final load value may then be obtained by performing a logic AND of the TCRC with the masked control field data (MCFD) in the load value register 906 such that the load value register 906 takes the value: MCFD && TCRC.
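The slave-side mask parsing can be sketched as below. For brevity, the register update is written in a read-modify-write form that leaves unmasked bit locations holding the current target control register content (TCRC), rather than the literal load-value-register sequence of blocks 922-926; the function name is illustrative:

```python
def decode_fm_pbf(mask: int, control_bits: list[int], tcrc: int,
                  width: int = 8) -> int:
    """Parse the full-width mask MSB-to-LSB (block 922); each mask bit set
    to '1' consumes the next control bit and overwrites that bit location.
    Unmasked locations keep the current target control register content."""
    ctrl = iter(control_bits)
    new = tcrc
    for pos in range(width - 1, -1, -1):
        if (mask >> pos) & 1:
            bit = next(ctrl)
            new = (new & ~(1 << pos)) | (bit << pos)
    return new

# The example of FIG. 9: mask '00100010' with control-bit field '01'
# writes '0' to R5 and '1' to R1, leaving the other bits intact.
result = decode_fm_pbf(0b0010_0010, [0, 1], tcrc=0b1111_0000)
# result == 0b1101_0010
```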
At block 928, the content of the load value register may be written to the target control register.[0055] FIG. 10 includes tables 1000, 1020 that illustrate reductions in latency obtained from FM-PBF writes performed in accordance with certain aspects disclosed herein. In the first table 1000, the latency reduction is shown as a comparison to a conventional masked-write operation for an 8-bit mask, 8-bit control register, and differences are shown for different numbers of bits targeted for modification. In the second table 1020, the latency reduction is shown as a comparison to a conventional masked-write operation for a 16-bit mask, 16-bit control register, and differences are shown for different numbers of bits targeted for modification.Examples of Processing Circuits and Methods[0056] FIG. 11 is a conceptual diagram illustrating a simplified example of a hardware implementation for an apparatus 1100 employing a processing circuit 1102 that may be configured to perform one or more functions disclosed herein. In accordance with various aspects of the disclosure, an element, or any portion of an element, or any combination of elements as disclosed herein may be implemented using the processing circuit 1102. The processing circuit 1102 may include one or more processors 1104 that are controlled by some combination of hardware and software modules. Examples of processors 1104 include microprocessors, microcontrollers, digital signal processors (DSPs), ASICs, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, sequencers, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. The one or more processors 1104 may include specialized processors that perform specific functions, and that may be configured, augmented or controlled by one of the software modules 1116.
The one or more processors 1104 may be configured through a combination of software modules 1116 loaded during initialization, and further configured by loading or unloading one or more software modules 1116 during operation.[0057] In the illustrated example, the processing circuit 1102 may be implemented with a bus architecture, represented generally by the bus 1110. The bus 1110 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuit 1102 and the overall design constraints. The bus 1110 links together various circuits including the one or more processors 1104, and storage 1106. Storage 1106 may include memory devices and mass storage devices, and may be referred to herein as computer-readable media and/or processor-readable media. The bus 1110 may also link various other circuits such as timing sources, timers, peripherals, voltage regulators, and power management circuits. A bus interface 1108 may provide an interface between the bus 1110 and one or more transceivers 1112. A transceiver 1112 may be provided for each networking technology supported by the processing circuit. In some instances, multiple networking technologies may share some or all of the circuitry or processing modules found in a transceiver 1112. Each transceiver 1112 provides a means for communicating with various other apparatus over a transmission medium. Depending upon the nature of the apparatus 1100, a user interface 1118 (e.g., keypad, display, speaker, microphone, joystick) may also be provided, and may be communicatively coupled to the bus 1110 directly or through the bus interface 1108.[0058] A processor 1104 may be responsible for managing the bus 1110 and for general processing that may include the execution of software stored in a computer-readable medium that may include the storage 1106.
In this respect, the processing circuit 1102, including the processor 1104, may be used to implement any of the methods, functions and techniques disclosed herein. The storage 1106 may be used for storing data that is manipulated by the processor 1104 when executing software, and the software may be configured to implement any one of the methods disclosed herein.[0059] One or more processors 1104 in the processing circuit 1102 may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, algorithms, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside in computer-readable form in the storage 1106 or in an external computer-readable medium. The external computer-readable medium and/or storage 1106 may include a non-transitory computer-readable medium. A non-transitory computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a "flash drive," a card, a stick, or a key drive), a random access memory (RAM), a read only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. The computer-readable medium and/or storage 1106 may also include, by way of example, a carrier wave, a transmission line, and any other suitable medium for transmitting software and/or instructions that may be accessed and read by a computer.
Computer-readable medium and/or the storage 1106 may reside in the processing circuit 1102, in the processor 1104, external to the processing circuit 1102, or be distributed across multiple entities including the processing circuit 1102. The computer-readable medium and/or storage 1106 may be embodied in a computer program product. By way of example, a computer program product may include a computer-readable medium in packaging materials. Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system.[0060] The storage 1106 may maintain software organized in loadable code segments, modules, applications, programs, etc., which may be referred to herein as software modules 1116. Each of the software modules 1116 may include instructions and data that, when installed or loaded on the processing circuit 1102 and executed by the one or more processors 1104, contribute to a run-time image 1114 that controls the operation of the one or more processors 1104. When executed, certain instructions may cause the processing circuit 1102 to perform functions in accordance with certain methods, algorithms and processes described herein.[0061] Some of the software modules 1116 may be loaded during initialization of the processing circuit 1102, and these software modules 1116 may configure the processing circuit 1102 to enable performance of the various functions disclosed herein. For example, some software modules 1116 may configure internal devices and/or logic circuits 1122 of the processor 1104, and may manage access to external devices such as the transceiver 1112, the bus interface 1108, the user interface 1118, timers, mathematical coprocessors, and so on.
The software modules 1116 may include a control program and/or an operating system that interacts with interrupt handlers and device drivers, and that controls access to various resources provided by the processing circuit 1102. The resources may include memory, processing time, access to the transceiver 1112, the user interface 1118, and so on.[0062] One or more processors 1104 of the processing circuit 1102 may be multifunctional, whereby some of the software modules 1116 are loaded and configured to perform different functions or different instances of the same function. The one or more processors 1104 may additionally be adapted to manage background tasks initiated in response to inputs from the user interface 1118, the transceiver 1112, and device drivers, for example. To support the performance of multiple functions, the one or more processors 1104 may be configured to provide a multitasking environment, whereby each of a plurality of functions is implemented as a set of tasks serviced by the one or more processors 1104 as needed or desired. In one example, the multitasking environment may be implemented using a timesharing program 1120 that passes control of a processor 1104 between different tasks, whereby each task returns control of the one or more processors 1104 to the timesharing program 1120 upon completion of any outstanding operations and/or in response to an input such as an interrupt. When a task has control of the one or more processors 1104, the processing circuit is effectively specialized for the purposes addressed by the function associated with the controlling task.
The timesharing program 1120 may include an operating system, a main loop that transfers control on a round-robin basis, a function that allocates control of the one or more processors 1104 in accordance with a prioritization of the functions, and/or an interrupt-driven main loop that responds to external events by providing control of the one or more processors 1104 to a handling function.[0063] FIG. 12 is a flow chart 1200 of a method of communication using a serial communication link. The method may be performed at a device operating as a bus master (e.g., apparatus 1100 of FIG. 11 or apparatus 1300 of FIG. 13).[0064] The device may generate a mask field in a packet to be transmitted through an interface to a slave device, wherein the mask field has a first number of bits 1202 (e.g., 8 bits or 16 bits). The device may generate the mask field to identify at least one bit location in a control register of the slave device in which at least one bit of a control-bit field is to be written. For example, the device may provide a first bit value (e.g., bit value of '1') in each bit location of the mask field that corresponds to a bit location in the control register in which a bit of the control-bit field is to be written. The device may further generate the mask field by providing a second bit value (e.g., bit value of '0') in each bit location of the mask field that does not correspond to a bit location in the control register in which a bit of the control-bit field is to be written.[0065] The device may provide the control-bit field in the packet 1204. The control-bit field has a second number of bits, where the second number of bits is less than the first number of bits.[0066] The device may transmit the packet through the interface, wherein the packet is addressed to the control register of the slave device 1206.
The control register has the first number of bits (e.g., 8 bits or 16 bits), wherein each bit in the control-bit field corresponds to a bit location in the control register that is identified by the mask field.[0067] Positions of bit locations in the mask field may be independent of positions of bit locations in the control-bit field. That is, the bit locations in the mask field may have no positional correspondence to the bit locations in the control-bit field. Moreover, positions of bit locations in the control register may be independent of the positions of bit locations in the control-bit field. That is, the bit locations in the control register may have no positional correspondence to the bit locations in the control-bit field. However, in an aspect, the positions of bit locations in the mask field directly correspond to the positions of bit locations in the control register. [0068] In one example, the interface is an RFFE interface and the slave device may be adapted to perform one or more functions of an RFFE device. In another example, the interface is an I3C interface.[0069] FIG. 13 is a diagram illustrating a simplified example of a hardware implementation for an apparatus 1300 employing a processing circuit 1302 to support operations related to one or more aspects of the disclosure (e.g., aspects related to the method of FIG. 12 described above). The processing circuit typically has a processor 1316 that may include one or more of a microprocessor, microcontroller, digital signal processor, a sequencer and a state machine. The processing circuit 1302 may be implemented with a bus architecture, represented generally by the bus 1320. The bus 1320 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuit 1302 and the overall design constraints. 
The bus 1320 links together various circuits including one or more processors and/or hardware modules, represented by the processor 1316, the modules or circuits 1304, 1306, 1308, line/bus interface circuits 1312 configurable to communicate over connectors or wires 1314 and the computer-readable storage medium 1318. The bus 1320 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further.[0070] The processor 1316 is responsible for general processing, including the execution of code/instructions stored on the computer-readable storage medium 1318. The code/instructions, when executed by the processor 1316, causes the processing circuit 1302 to perform the various functions described supra for any particular apparatus. The computer-readable storage medium may also be used for storing data that is manipulated by the processor 1316 when executing software, including data decoded from symbols transmitted over the connectors or wires 1314, which may be configured as data lanes and clock lanes. The processing circuit 1302 further includes at least one of the modules/circuits 1304, 1306, and 1308. The modules/circuits 1304, 1306, and 1308 may be software modules running in the processor 1316, resident/stored in the computer-readable storage medium 1318, one or more hardware modules coupled to the processor 1316, or some combination thereof. The modules/circuits 1304, 1306, and/or 1308 may include microcontroller instructions, state machine configuration parameters, or some combination thereof. 
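As a concrete illustration of the master-side method of FIG. 12 (paragraphs [0064]-[0065]), the construction of the mask field and the compact control-bit field can be sketched in Python. This is a minimal sketch, not part of the disclosed apparatus; the function name and the LSB-first packing order of the control-bit field are assumptions, since the disclosure only requires that mask bit positions correspond to control register positions, not any particular packing order for the compact field.

```python
def build_masked_write(bit_updates, register_width=8):
    """Build the mask field and compact control-bit field for a masked write.

    bit_updates maps a bit position in the slave's control register to the
    bit value (0 or 1) to be written there.  Register bits not named in
    bit_updates are left untouched by the write.
    """
    mask = 0
    control_field = 0
    index = 0  # position within the compact control-bit field
    for position in sorted(bit_updates):  # LSB-first packing (an assumption)
        if not 0 <= position < register_width:
            raise ValueError("bit position outside the control register")
        mask |= 1 << position                          # '1' marks a target location
        control_field |= (bit_updates[position] & 1) << index
        index += 1
    return mask, control_field

# Write '1' to register bits 0 and 4 and '0' to bit 1 of an 8-bit register:
mask, control = build_masked_write({0: 1, 1: 0, 4: 1})
```

Note that the control-bit field here has only three bits while the mask field has the full register width, matching the requirement that the second number of bits be less than the first.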
[0071] In one configuration, the apparatus 1300 includes a mask field generation module and/or circuit 1304 that is configured to generate a mask field in a packet to be transmitted through an interface circuit 1312 to a slave device of a communication link, a control-bit field generation module and/or circuit 1306 that is configured to provide a control-bit field in the packet, and a packet transmission module and/or circuit 1308 that is configured to transmit the packet through the interface circuit 1312.[0072] FIG. 14 is a flow chart 1400 of a method of communication using a serial communication link. The method may be performed at a slave device coupled to a bus (e.g., apparatus 1100 of FIG. 11 or apparatus 1500 of FIG. 15).[0073] The slave device may receive a packet from the bus 1402. The packet may be addressed to a control register of the slave device. The packet may include a mask field and a control-bit field. The mask field may have a greater number of bits than the control-bit field. For example, the mask field may include 8 bits while the control-bit field may include less than 8 bits. Alternatively, the mask field may include 16 bits while the control-bit field may include less than 16 bits.[0074] The slave device may identify at least one bit in the mask field having a first value (e.g., a number of mask field bits having a bit value set to '1') 1404. The slave device may further detect at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value 1406.[0075] The slave device may obtain a load value to write to the control register based on the at least one bit in the control-bit field 1408. 
Thereafter, the slave device may write the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field 1410.[0076] In an aspect of the disclosure, the slave device obtains the load value by reading the control register to obtain an initial value of the control register and merging the at least one bit in the control-bit field with the initial value of the control register to obtain a merged value. Moreover, the slave device writes the load value to the control register by writing the merged value to the control register such that each bit location in the control register identified by the mask field as corresponding to the associated bit in the control-bit field is merged with the associated bit in the control-bit field. In some examples, only bits in the control register identified by the mask field as corresponding to bits in the control-bit field are affected by writing the merged value to the control register.[0077] In one example, the slave device further obtains the load value by writing each bit in the control-bit field to a masking word at a bit location identified by the first value in a corresponding bit location of the mask field (e.g., a bit location identified by a value of '1' in a corresponding bit location of the mask field), and writing a predefined masking bit value (e.g., a masking bit value of '0' or '1') to each bit location in the masking word identified by a second value in a corresponding bit location of the mask field (e.g., a bit location identified by a value of '0' in a corresponding bit location of the mask field). The slave device then merges the at least one bit in the control-bit field with the initial value of the control register using the masking word. 
The slave device may merge the at least one bit in the control-bit field with the initial value of the control register by performing a logic AND operation between the initial value of the control register and the masking word to generate the merged value. Alternatively, the slave device may merge the at least one bit in the control-bit field with the initial value of the control register by performing a logic OR operation between the initial value of the control register and the masking word to generate the merged value.[0078] In some examples, positions of bit locations in the mask field are independent of positions of bit locations in the control-bit field. That is, the bit locations in the mask field have no positional correspondence to the bit locations in the control-bit field. Moreover, positions of bit locations in the control register are independent of the positions of bit locations in the control-bit field. That is, the bit locations in the control register have no positional correspondence to the bit locations in the control-bit field. However, in an aspect, the positions of bit locations in the mask field directly correspond to the positions of bit locations in the control register.[0079] In one example, the bus is an RFFE bus, and the slave device may be adapted to perform one or more functions of an RF front end. In another example, the bus is an I3C bus.[0080] FIG. 15 is a diagram illustrating a simplified example of a hardware implementation for an apparatus 1500 employing a processing circuit 1502 to support operations related to one or more aspects of the disclosure (e.g., aspects related to the method of FIG. 14 described above). The processing circuit typically has a processor 1516 that may include one or more of a microprocessor, microcontroller, digital signal processor, a sequencer and a state machine. The processing circuit 1502 may be implemented with a bus architecture, represented generally by the bus 1520. 
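The slave-side read-merge-write described in paragraphs [0076]-[0077] can be sketched in Python. This is a minimal sketch under assumptions: the function name and the LSB-first assignment of compact control bits to masked register positions are illustrative, and the merge is expressed as its net effect (keep unmasked bits, replace masked bits), which is what the AND/OR masking-word sequences described above ultimately produce.

```python
def apply_masked_write(initial, mask, control_field, register_width=8):
    """Merge a compact control-bit field into a control register value.

    Each '1' bit in the mask names a register bit location; the bits of the
    compact control-bit field are assigned to those locations in order
    (LSB-first, an illustrative assumption).  All other register bits keep
    their initial value, per paragraph [0076].
    """
    expanded = 0  # control bits expanded to register positions (the masking word)
    index = 0
    for position in range(register_width):
        if (mask >> position) & 1:
            expanded |= ((control_field >> index) & 1) << position
            index += 1
    # Keep unmasked bits of the initial value, take masked bits from the
    # expanded control bits.
    return (initial & ~mask & ((1 << register_width) - 1)) | expanded

# Register holds 0b10101100; write bits 0, 1, 4 as 1, 0, 1 (mask 0b00010011):
new_value = apply_masked_write(0b10101100, 0b00010011, 0b101)
```

Only the three masked bit locations change; the remaining five keep their initial values, which matches the statement that only bits identified by the mask field are affected.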
The bus 1520 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuit 1502 and the overall design constraints. The bus 1520 links together various circuits including one or more processors and/or hardware modules, represented by the processor 1516, the modules or circuits 1504, 1506, 1508, line/bus interface circuits 1512 configurable to communicate over connectors or wires 1514 and the computer-readable storage medium 1518. The bus 1520 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further.[0081] The processor 1516 is responsible for general processing, including the execution of code/instructions stored on the computer-readable storage medium 1518. The code/instructions, when executed by the processor 1516, causes the processing circuit 1502 to perform the various functions described supra for any particular apparatus. The computer-readable storage medium may also be used for storing data that is manipulated by the processor 1516 when executing software, including data decoded from symbols transmitted over the connectors or wires 1514, which may be configured as data lanes and clock lanes. The processing circuit 1502 further includes at least one of the modules/circuits 1504, 1506, and 1508. The modules/circuits 1504, 1506, and 1508 may be software modules running in the processor 1516, resident/stored in the computer-readable storage medium 1518, one or more hardware modules coupled to the processor 1516, or some combination thereof. 
The modules/circuits 1504, 1506, and/or 1508 may include microcontroller instructions, state machine configuration parameters, or some combination thereof.[0082] In one configuration, the apparatus 1500 includes a packet receiving module and/or circuit 1504 that is configured to receive a packet from the connectors or wires 1514 of the bus 1520, wherein the packet is addressed to a control register of the apparatus 1500 and includes a mask field and control-bit field, the mask field having a greater number of bits than the control-bit field. The apparatus 1500 further includes a load value obtaining module and/or circuit 1506 that is configured to identify at least one bit in the mask field having a first value, detect at least one bit in the control-bit field corresponding to the at least one bit in the mask field having the first value, and obtain a load value to write to the control register based on the at least one bit in the control-bit field. The apparatus 1500 also includes a control register management module and/or circuit 1508 that is configured to write the load value to the control register, wherein each bit location in the control register identified by the mask field as corresponding to an associated bit in the control-bit field contains a bit value based on the associated bit in the control-bit field.[0083] It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Further, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.[0084] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. 
Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase "means for."
A connection manager (112) of a communication device (103). The connection manager registers with a device driver (109) associated with a network interface (106, 228), which monitors the communication device for network access data from a third party connection manager (118). After detecting network access data, the device driver notifies the connection manager. Depending on one or more policies or user input received from a user interface (115), the connection manager may unregister with the device driver, disable the third party connection manager, or notify the user via the user interface that manual intervention may be required. The connection manager may prevent conflicts between itself and the third party connection manager.
1. A system comprising: a driver adapted to control a network interface card and to monitor network access data; and a first connection manager adapted to register with the driver and to receive notification data from the driver, wherein, when the driver detects the network access data from a second connection manager, the driver provides the notification data to the first connection manager.
2. The system of claim 1, further comprising a user interface adapted to receive notification data from the first connection manager, receive user input from a user, and provide the user input to the first connection manager.
3. The system of claim 1, further comprising a user interface adapted to receive notification data from the first connection manager, receive user input from a user, provide the user input to the first connection manager, and display the notification data received from the first connection manager to the user.
4. The system of claim 1, wherein the first connection manager is adapted to deregister from the driver, and the driver is further adapted to stop monitoring network access data.
5. The system of claim 1, wherein the first connection manager is adapted to deregister from the driver, and the driver is further adapted to stop monitoring network access data when the user, through a user interface, instructs it to stop monitoring network access data.
6. The system of claim 1, wherein, when a predetermined rule so requires, the first connection manager is adapted to deregister from the driver and the driver is further adapted to stop monitoring network access data.
7. The system of claim 1, wherein the first connection manager is further adapted to disable the second connection manager.
8. The system of claim 1, wherein the network access data includes a network driver interface specification object identifier.
9. A method comprising: registering a first connection manager with a driver associated with a network interface card; monitoring network access data from a second connection manager; and notifying the first connection manager if network access data is detected.
10. The method of claim 9, wherein monitoring network access data from the second connection manager includes monitoring a network driver interface specification object identifier.
11. The method of claim 9, further comprising: deregistering the first connection manager from the driver; and ending monitoring of network access data from the second connection manager.
12. The method of claim 9, further comprising: preventing the second connection manager from accessing the network interface card through the driver.
13. The method of claim 9, further comprising: displaying the notification received by the first connection manager, wherein the notification is displayed to a user through a user interface.
14. The method of claim 9, further comprising: displaying the notification received by the first connection manager, wherein the notification indicates that the second connection manager must be manually disabled by the user, and the notification is displayed to the user through the user interface.
15. The method of claim 9, further comprising: displaying the notification received by the first connection manager, wherein the notification is displayed to a user through a user interface; receiving user input from the user interface; determining whether the user input requires disabling the first connection manager; and, if the user input requires disabling the first connection manager, executing a first sequence comprising: deregistering the first connection manager from the driver; and ending monitoring of network access data from the second connection manager.
16. The method of claim 9, further comprising: if a second connection manager is registered with the driver, deregistering the second connection manager from the driver associated with the network interface card, wherein the deregistration of the second connection manager occurs before the first connection manager is registered with the driver associated with the network interface card.
17. An apparatus comprising: means for registering a first connection manager with a driver associated with a network interface card; means for monitoring network access data from a second connection manager; and means for notifying the first connection manager if network access data is detected.
18. The apparatus of claim 17, wherein the means for monitoring network access data from the second connection manager includes means for monitoring network driver interface specification object identifiers.
19. The apparatus of claim 17, further comprising: means for deregistering the first connection manager from the driver; and means for ending monitoring of network access data from the second connection manager.
20. The apparatus of claim 17, further comprising: means for preventing the second connection manager from accessing the network interface card through the driver.
21. The apparatus of claim 17, further comprising: means for displaying the notification received by the first connection manager, wherein the notification is displayed to a user through a user interface.
22. The apparatus of claim 17, further comprising: means for displaying the notification received by the first connection manager, wherein the notification indicates that the second connection manager must be manually disabled by the user, and the notification is displayed to the user through the user interface.
23. The apparatus of claim 17, further comprising: means for displaying the notification received by the first connection manager, wherein the notification is displayed to a user through a user interface; means for receiving user input from the user interface; means for determining whether the user input requires disabling the first connection manager; and means for deregistering the first connection manager from the driver and ending monitoring of network access data from the second connection manager if the user input requires disabling the first connection manager.
24. The apparatus of claim 17, further comprising: means for deregistering a second connection manager from the driver associated with the network interface card if the second connection manager is registered with the driver, wherein the deregistration of the second connection manager occurs before the first connection manager is registered with the driver associated with the network interface card.
System and method for monitoring and managing connection manager activity

Background of the invention

Wireless technology has increased significantly in demand and production. The rapid deployment of private and public wireless networks has brought about the convenience of "hot spots" that allow mobile computer systems to access network resources. Generally, hotspots are geographically specific locations where access points provide broadband network services to wireless devices such as, but not limited to, laptop computers, personal digital assistants (PDAs), mobile phones, and pagers. Unlike wired technology, however, wireless devices require a connection manager to help discover networks and connect the wireless device to the desired network. Generally, the connection manager uses policies or rules to connect automatically to an identified wireless network through an access point. These policies and rules eliminate the need for manual intervention to achieve network connectivity. An example of a connection manager is the "ZERO CONFIG" device management tool provided by Microsoft Corporation of Redmond, Washington. A connection manager may be included in the computer operating system, or installed by information technology personnel for remote access. In addition, vendors of public hotspots may provide customers with customized connection managers for their public networks. A wireless device may therefore have multiple connection managers installed on its system, implementing essentially different policies. A connection manager often registers with the device driver of a network interface in order to customize the configuration of the network adapter, and relies on this customized configuration when implementing one or more of its policies. Unfortunately, problems occur when multiple connection managers operate concurrently. 
Because each connection manager attempts to implement its own policies, failures may occur in the network connection or in the execution of applications on the local system. Interdependence and communication between connection managers is not a viable way to solve these problems, because each connection manager would need to be familiar with every other connection manager that might be installed on the same system, including connection managers that did not yet exist when a particular connection manager was created.

Brief description of the drawings

Figure 1 shows a block diagram representation of a network environment according to an exemplary embodiment of the invention.
Figure 2 shows a block diagram representation of an exemplary computing environment.
Figures 3A to 3C show a flowchart representation of a method of monitoring and managing connection manager activity according to an exemplary embodiment of the present invention.
Figure 4 shows a flowchart representation of a method of applying a policy after detecting network access data from a third-party connection manager.
Figures 5A to 5B show a flowchart representation of a method of monitoring and managing connection manager activity when a third-party connection manager is registered with a device driver.

Detailed description

Referring now to the drawings, in which like numerals indicate like parts throughout the several views, FIG. 1 shows a block diagram representation of a network environment 100 according to an exemplary embodiment of the present invention. The network environment 100 may include a communication device 103. The communication device 103 may be constructed using hardware and software suitable for performing the tasks and providing the features and functions described herein (see FIG. 2). 
The communication device 103 may include network interfaces 106, 228, a device driver 109, a connection manager 112, a user interface 115, at least one third-party connection manager 118A-118Z, and at least one third-party application 121A-121Z. The network interfaces 106, 228 (sometimes referred to as network adapters) may be communicatively connected to the device driver 109. Those skilled in the art will appreciate that the network interfaces 106, 228 may generally be hardware devices, such as the network interface card 106 or a communication expansion card of the communication device 103, used to assist the connection between the communication device 103 and a network, for example, a local area network 127 or a wide area network 130. Although not shown in FIG. 1, the network interfaces 106, 228 may include a transceiver (e.g., a radio transceiver or an infrared transceiver) that enables wireless communication between the communication device 103 and the access point 124 of the network. The device driver 109 may be communicatively connected to the network interfaces 106, 228, the connection manager 112, and at least one third-party connection manager 118A-118Z, such as the third-party connection manager "A" 118A and the third-party connection manager "Z" 118Z. The device driver 109 may include program modules or hardware suitable for performing the tasks and providing the features and functions described herein (see FIG. 2). Generally, the device driver 109 can convert data between a device such as the network interface card 106 and a program or application using the device (e.g., a program or application residing on the communication device 103). Each device (hardware or software) connected to or residing on the communication device 103 can utilize a dedicated command set that only the device driver 109 can translate. 
Programs or applications on the communication device 103 can access the device by using a common command set. The device driver 109 can therefore be adapted to receive a general command from a program or application and convert the general command into the dedicated command required by the device. In addition, the device driver 109 may be adapted to monitor the communication device 103 for network access data. Network access data may include, but is not limited to, communications from third-party connection managers 118 that attempt to control, configure, or access the network interfaces 106, 228 through the device driver 109. For example, and not by way of limitation, the network access data may be a network driver interface specification (NDIS) object identifier. Those skilled in the art will appreciate that NDIS is a software interface designed to allow different network protocols to communicate with different types of network adapters, such as the network interfaces 106, 228. Specifically, an NDIS object identifier is a sequence of numbers that, when interpreted by the device driver 109, enables compatibility between a network adapter and multiple protocols, or between a protocol and different network adapters (for example, through the standard application program interfaces (APIs) for the network interfaces 106, 228). A predefined set of NDIS object identifiers can be used by the third-party connection manager 118 to control, configure, and access the network interfaces 106, 228. The device driver 109 may also be adapted to notify or warn the connection manager 112 when the device driver 109 detects network access data. 
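The monitoring role of the device driver 109 described above can be sketched as follows. This is a hypothetical Python sketch: the class shapes, method names, and object identifier values are illustrative assumptions, not real NDIS object identifiers or an actual driver interface.

```python
# Object identifiers whose use counts as network access data; the numeric
# values here are placeholders, not real NDIS OID codes.
WATCHED_OIDS = {0x10001, 0x10002}

class DeviceDriver:
    """Stand-in for device driver 109: at most one registered manager."""
    def __init__(self):
        self.registered_manager = None

    def register(self, manager):
        self.registered_manager = manager      # monitoring begins

    def deregister(self, manager):
        if self.registered_manager is manager:
            self.registered_manager = None     # monitoring effectively ends

    def handle_oid_request(self, requester, oid):
        # A watched OID arriving from any party other than the registered
        # connection manager is network access data: notify the manager.
        manager = self.registered_manager
        if manager is not None and requester is not manager and oid in WATCHED_OIDS:
            manager.notify(requester, oid)
        # ...the request itself would then be translated for the adapter...

class RecordingManager:
    """Stand-in for connection manager 112; just records notifications."""
    def __init__(self):
        self.notifications = []
    def notify(self, requester, oid):
        self.notifications.append((requester, oid))

driver = DeviceDriver()
manager = RecordingManager()
third_party = object()                       # stand-in for a third-party manager
driver.register(manager)
driver.handle_oid_request(third_party, 0x10001)  # reported to the manager
driver.handle_oid_request(manager, 0x10001)      # the manager's own request: ignored
```

The design point this illustrates is that the driver, not the connection managers, is the single chokepoint that sees every configuration attempt, so no manager needs to know about any other.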
The connection manager 112 may be communicatively connected to the device driver 109 and the user interface 115. The connection manager 112 may be configured with hardware and software suitable for performing the tasks and providing the features and functions described herein (see FIG. 2). The connection manager 112 may be adapted to receive notification data from the device driver 109, register with the device driver 109, deregister from the device driver 109, provide notification data to the user interface 115, and receive user input from the user interface 115. In general, the connection manager 112 can help discover networks and connect the wireless communication device 103 to the correct or required network. To reduce the manual work of discovering a network and connecting the wireless communication device 103 to the discovered network, the connection manager 112 may include policies or rules that dictate the behavior of the connection manager 112 under certain conditions or events. For example, and not by way of limitation, the connection manager 112 may include one or more policies that require the connection manager 112 to automatically deregister from the device driver 109 when a third-party connection manager 118 attempts to configure or access the network interfaces 106, 228. In another embodiment, a policy may require the connection manager 112 to notify the user whenever a third-party connection manager attempts to configure or access the network interfaces 106, 228. Those skilled in the art will appreciate that policies or rules may be defined in various structures and may require the connection manager 112 to perform different functions under different conditions. In another embodiment of the invention, the connection manager 112 may also be adapted to disable the third-party connection manager 118. Disabling the third-party connection manager 118 may prevent the third-party connection manager 118 from configuring or accessing the network interfaces 106, 228 through the device driver 109. The user interface 115 may be adapted to display data (e.g., notification data) to the user, and to receive user input. 
The notification data may include, but is not limited to: an indication that a third-party connection manager 118 is trying to configure or access the network interfaces 106, 228; a confirmation request to deregister the connection manager 112; a confirmation request to disable the third-party connection manager 118; and an instruction set telling the user how to manually enable or disable the connection manager 112 or the third-party connection manager 118. The user interface 115 may also be adapted to receive user input from the user and provide the user input to the connection manager 112. Those skilled in the art will appreciate that the user interface 115 can be designed in various embodiments and formats, ranging from simple to more complex structures. In an exemplary embodiment of the invention, the user interface 115 may include a keypad, display, touch screen, or other convenient device, and may also include program modules or machine instructions to perform the tasks described herein, which may be executed on a processing unit 212, for example, the processing unit 212 of the communication device 103 (see FIG. 2). Each third-party connection manager 118A-118Z can be communicatively connected to the device driver 109 and the corresponding third-party application 121A-121Z. The third-party connection manager 118 can help discover networks and connect the wireless communication device 103 to the correct or required network through the network interfaces 106, 228. Similar to the connection manager 112 described above, each third-party connection manager 118A-118Z may include policies or rules that specify the behavior of the connection manager 118 under certain conditions or events. In general, the policies of each third-party connection manager 118A-118Z may be different from the policies of other third-party connection managers 118A-118Z or the policies of the connection manager 112. 
Generally, only one third-party connection manager 118 or connection manager 112 can register with the device driver 109 at any given time. Because their policies differ, simultaneous registration of the connection manager 112 and a third-party connection manager 118 with the device driver 109 may result in system or network failure due to the implementation of conflicting policies. For example, but not by way of limitation, the connection manager 112 may configure the network interfaces 106, 228 in a manner that prevents the third-party application 121 associated with the third-party connection manager 118 from working correctly.

Each third-party application 121A-121Z can be communicatively connected to the corresponding third-party connection manager 118A-118Z. The third-party application 121 may be configured with hardware and software suitable for performing multiple tasks and providing the various capabilities and functions described herein (see FIG. 2). In general, the third-party application 121 may include a program or group of programs specifically designed for user interaction. The third-party applications 121A-121Z may include, but are not limited to: word processing programs, spreadsheet programs, and database programs. In one embodiment of the present invention, the third-party applications 121A-121Z can utilize network resources and thus can require communication with the network interfaces 106, 228. Each third-party application 121A-121Z may require a unique configuration of the network interfaces 106, 228, which can be achieved through the corresponding third-party connection manager 118A-118Z.

The network environment 100 may also include an access point 124, such as a transceiver, for wireless communication 105 with the communication device 103, a local area network (LAN) 127, and a wide area network (WAN) 130, such as but not limited to the Internet.
The access point 124 may be communicatively connected to the local area network 127 and, through the network interfaces 106, 228, to the communication device 103. Generally speaking, the access point 124 may be a hardware device and/or software that functions as a communication hub between the communication device 103 and the local area network 127. The access point 124 may be adapted to receive wireless communication from the communication device 103 and provide wireless network communication to it. The local area network 127 may be communicatively connected to the access point 124 and the wide area network 130. Those skilled in the art will appreciate that the local area network 127 and the wide area network 130 may generally include infrastructures and facilities suitable for communicatively connecting two or more groups of communication devices 103 (including but not limited to multiple computer systems in communication with each other). Multiple topologies can be used to configure the local area network 127, the wide area network 130, and the communication devices 103, including but not limited to star, bus, or ring structures. Moreover, the local area network 127, the wide area network 130, and the communication devices 103 can be more broadly classified as belonging to a specific architecture, including but not limited to peer-to-peer or client/server architectures.

In operation, the connection manager 112 may register with the device driver 109 during initialization of the operating system of the communication device 103. Registering the connection manager 112 with the device driver 109 may enable the device driver 109 to monitor the communication device 103 for network access data.
Regular programs and/or applications on the communication device 103 can access the network through the device driver 109 and the network interfaces 106, 228, while the device driver 109 continues to monitor for network access data from a third-party connection manager 118 attempting to register with the device driver 109 or to configure the network interfaces 106, 228. Once the device driver 109 detects network access data from the third-party connection manager 118, the device driver 109 may provide notification data (e.g., a warning) to the connection manager 112.

The connection manager 112 may receive the notification data from the device driver 109 and apply one or more policies that may or may not require invalidating the connection manager 112. If a policy requires invalidating the connection manager 112, the connection manager 112 may deregister from the device driver 109, and the device driver 109 stops monitoring the communication device 103 for network access data. If the one or more policies do not require invalidating the connection manager 112, the connection manager 112 may provide notification data to the user interface 115, and the user interface 115 may display the notification to the user. Alternatively, instead of applying a policy, the connection manager 112 may provide notification data to the user interface 115, and the user interface 115 may display the notification and a response request to the user. The user may provide user input to the user interface 115 indicating whether the connection manager 112 should be invalidated. The user interface 115 may provide the user input to the connection manager 112 for processing.
If the user input indicates that the connection manager 112 should be invalidated, the connection manager 112 may deregister from the device driver 109, and the device driver 109 may stop monitoring the communication device 103 for network access data.

In another embodiment of the present invention, after receiving notification data from the device driver 109, the connection manager 112 may apply one or more policies that may or may not require the connection manager 112 to invalidate the third-party connection manager 118. If a policy requires the connection manager 112 to invalidate the third-party connection manager 118, the connection manager 112 may invalidate the third-party connection manager 118 so that the third-party connection manager 118 cannot access or configure the network interfaces 106, 228 through the device driver 109. Alternatively, instead of automatically invalidating the third-party connection manager 118, the connection manager 112 may provide notification data to the user interface 115, and the user interface 115 may display a message indicating that the third-party connection manager 118 may or should be manually invalidated by the user.

In yet another embodiment, the third-party connection manager 118 has already registered with the device driver 109 before the connection manager 112 attempts to register with the device driver 109. The connection manager 112 may apply one or more policies that require the third-party connection manager 118 to deregister from the device driver 109. The connection manager 112 may deregister the third-party connection manager 118 from the device driver 109, and then the connection manager 112 may register with the device driver 109, which may enable the device driver 109 to monitor the communication device 103 for network access data.
Instead of applying a policy, the connection manager 112 may provide the user interface 115 with a message specifying that the third-party connection manager 118 may or should be manually deregistered from the device driver 109 by the user. The user interface 115 may display the message to the user.

Although radio frequency is one form of the communication 105, those skilled in the art will appreciate that a communicative connection may include any suitable type of connection, including but not limited to analog, digital, wireless, and wired communication channels. These communication channels may include, but are not limited to, copper wire, optical fiber, radio frequency, infrared, satellite, or other media.

FIG. 2 shows a block diagram representation of an exemplary computing environment 200. The communication devices 103 of the environment 200 may include, but are not limited to: personal computers, mainframe computers, servers, handheld or laptop devices, multi-processor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, distributed computing environments including any of the above systems or devices, and so on. However, it should be understood that the features and aspects of the exemplary embodiments of the present invention may be implemented by or in various systems and system structures, and any examples provided in this description are for illustrative purposes only.

FIG. 2 and the following discussion provide a general overview of a platform on which embodiments or portions of the invention can be integrated, implemented, and/or performed. Although reference is made to instructions in a software program being executed by a processing unit, those skilled in the art will understand that at least some of the functions performed by the software can also be implemented using hardware components, state machines, or any combination of these technologies.
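The policy-driven response described above (driver detects third-party network access data, notifies the connection manager, which deregisters itself, invalidates the third party, or defers to the user) can be sketched as follows. This is a minimal illustrative model only; the class and field names (`Policy`, `ConnectionManager`, etc.) are assumptions, not from the source.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    # What to do when a third-party manager touches the network interface.
    deregister_self: bool = False        # invalidate connection manager 112
    invalidate_third_party: bool = False
    notify_user: bool = True

@dataclass
class ConnectionManager:
    policy: Policy
    registered: bool = True
    notifications: list = field(default_factory=list)

    def on_third_party_access(self, third_party_name: str) -> str:
        """Handle notification data from the device driver."""
        if self.policy.deregister_self:
            self.registered = False      # driver then stops monitoring
            return "deregistered"
        if self.policy.invalidate_third_party:
            return f"invalidated {third_party_name}"
        if self.policy.notify_user:
            self.notifications.append(
                f"{third_party_name} attempted to configure the interface")
            return "notified user"
        return "no action"

cm = ConnectionManager(Policy(deregister_self=True))
print(cm.on_third_party_access("third-party CM"))  # prints "deregistered"
```

The point of the sketch is that the three outcomes in the narrative are mutually exclusive and ordered: self-deregistration takes precedence, then invalidation of the third party, then user notification.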
In addition, a software program implementing embodiments of the present invention can run as a stand-alone program, or as a software module, routine, or function call operating in conjunction with an operating system, another program, a system call, an interrupt routine, a library routine, and so on. The term "program module" is used here to refer to any collection of software programs, routines, functions, macros, data, data structures, or machine-readable instructions or object code that can be compiled into such a program module and consists of software instructions executed by the processing unit 212.

Those skilled in the art will understand that the computing environment shown in FIG. 2 may take many forms and may be directed to perform various functions. Generally speaking, the computing environment shown in FIG. 2 may be any system including a computer processor. Examples of these forms and functions may include, but are not limited to: personal computers, handheld devices such as personal digital assistants, notebook computers, laptop computers, mainframe computers, servers, and various other types of applications, each of which can function as an exemplary environment for embodiments of the present invention.

The exemplary computing device 210 (e.g., communication device 103) may include various components, including but not limited to: a processing unit 212, non-volatile memory 214, volatile memory 216, and a system bus 218 that couples the non-volatile memory 214 and the volatile memory 216 to the processing unit 212.
The non-volatile memory 214 may include various memory types, including but not limited to: read-only memory (ROM), electrically erasable read-only memory (EEROM), electrically erasable programmable read-only memory (EEPROM), electrically programmable read-only memory (EPROM), electrically alterable read-only memory (EAROM), flash memory, bubble memory, battery-backed random access memory (RAM), CD-ROM, digital versatile disk (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magneto-optical storage devices, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the required information. The non-volatile memory 214 may provide storage for power-on reset routines (boot routines) that are invoked upon powering on or resetting the computing device 210. In some architectures, the non-volatile memory 214 may provide basic input/output system (BIOS) routines to perform information transfer between the various components of the computing device 210.

The volatile memory 216 can include various memory types and devices, including but not limited to: random access memory (RAM), dynamic random access memory (DRAM), bubble memory, registers, and so on. The volatile memory 216 can provide temporary storage for routines, modules, functions, macros, data, and the like that the processing unit 212 is executing or can execute, or is accessing or modifying.

In another embodiment, the non-volatile memory 214 and/or the volatile memory 216 may include remote storage facilities accessible through wired and/or wireless network systems. Furthermore, the non-volatile memory 214 and/or the volatile memory 216 may include a memory system comprising a multi-level arrangement of primary and secondary memory devices as described above. The primary memory device can act as a cache for the secondary memory device, or the secondary memory device can function as a backup of the primary memory device.
In yet another embodiment, the non-volatile memory 214 and/or the volatile memory 216 may include a memory device configured as a simple database file or as a searchable relational database using a query language (e.g., SQL).

The computing device 210 can access one or more external display devices 230, such as a CRT monitor, LCD panel, LED panel, electroluminescent panel, or other display device, for the purpose of providing information or calculation results to the user. In some embodiments, the external display device 230 may actually be incorporated into the product itself. The processing unit 212 may be connected to each display device 230 through a video interface 220, which is coupled to the processing unit 212 through the system bus 218.

The computing device 210 may send output information to the display device 230 and to one or more output devices 236, such as speakers, modems, printers, plotters, fax machines, RF or infrared transmitters, computers, or any other of various devices that can be controlled by the computing device 210. The processing unit 212 may be connected to each output device 236 through an output interface 226 that is coupled to the processing unit 212 through the system bus 218.

The computing device 210 may receive input or commands from one or more input devices 234, such as, but not limited to, a keyboard, pointing device, mouse, modem, RF or infrared receiver, microphone, joystick, trackball, light pen, game pad, scanner, camera, or computer. The processing unit 212 may be connected to each input device 234 through an input interface 224 that is coupled to the processing unit 212 through the system bus 218.

It will be appreciated that program modules implementing various embodiments of the present invention may be stored in the non-volatile memory 214, the volatile memory 216, or in a remote memory storage device accessible through the output interface 226 and the input interface 224.
Program modules may include operating systems, application programs, other program modules, and program data. In response to various instructions contained in the program modules, and upon indication of an event that has occurred or been received through the input interface 224, the processing unit 212 can access various parts of the program modules.

The computing device 210 may provide data to and receive data from one or more other storage devices 232, which may provide non-volatile or volatile memory for storage and may be accessed by the computing device 210. The processing unit 212 may be connected to each storage device 232 via the system bus 218 through the storage interface 222.

The interfaces 220, 222, 224, 226, and 228 may include one or more of various interfaces, including but not limited to: cable modem, DSL, T1, V-series modem, RS-232 or other serial port interface, parallel port interface, universal serial bus (USB), general purpose interface bus (GPIB), optical interfaces such as infrared or IrDA, RF or wireless interfaces such as Bluetooth, or other interfaces.

FIGS. 3A to 3C show a flowchart representation of a method 300 for monitoring and managing connection manager 112 activity according to an exemplary embodiment of the present invention. After starting at 301, the connection manager 112 may register with the device driver 109 associated with the network interfaces 106, 228 at 303. At any given time, only one connection manager 112 can register with the device driver 109 of the network interfaces 106, 228. In general, the connection manager 112 may register with the device driver 109 when the operating system of the communication device 103 is initialized (for example, when the operating system is started).

At 306, the device driver 109 may monitor the communication device 103 for network access data from the third-party connection manager 118.
Then at 309, the device driver 109 may determine whether it has detected network access data. If the device driver 109 does not detect network access data, the device driver 109 may return to 306 described above. However, if the device driver 109 detects network access data at 309, the device driver 109 may examine the network access data and may determine at 312 whether the third-party connection manager 118 is trying to configure the network interfaces 106, 228 (e.g., the third-party connection manager 118 is using a settable NDIS wireless LAN object identifier). If at 312 the device driver 109 determines that the third-party connection manager 118 is not attempting to configure the network interfaces 106, 228, the device driver 109 may return to 306 described above. If at 312 the device driver 109 determines that the third-party connection manager 118 is attempting to configure the network interfaces 106, 228, the device driver 109 may proceed to 315, at which the device driver 109 may notify the connection manager 112 that the third-party connection manager 118 has attempted to configure the network interfaces 106, 228.

At 318, the connection manager 112 may apply one or more predetermined connection manager policies. Generally, a connection manager policy may be a set of rules or criteria that require a certain response (i.e., action or no action) when certain conditions as described above are met. The connection manager 112 may proceed to 321, where the connection manager 112 may determine whether one or more policies require invalidating the connection manager 112. If at 321 the one or more policies do not require invalidating the connection manager 112, the connection manager 112 may proceed to 324, where the connection manager 112 may notify the user interface 115 that the third-party connection manager 118 has attempted to configure the network interfaces 106, 228. Then, the connection manager 112 may proceed to 306 above.
However, if at 321 the connection manager 112 determines that one or more policies require invalidating the connection manager 112, the connection manager 112 may proceed to 327, where the connection manager 112 may deregister from the device driver 109. Next, at 330, the device driver 109 may stop monitoring the communication device 103 for network access data from the third-party connection manager 118. The connection manager 112 may terminate operation at 333.

FIG. 4 shows a flowchart representation of a method 400 of applying a policy after detecting network access data from a third-party connection manager 118. As described above with reference to FIG. 1, policies or rules may be defined in various configurations and may require the connection manager 112 to perform different functions under different conditions. Therefore, a policy may instruct the connection manager 112 to deregister under one set of conditions, and may instruct the connection manager 112 to invalidate the third-party connection manager 118 under a second set of conditions.

After starting at 401, the connection manager 112 may determine at 403 whether one or more policies require invalidating the connection manager 112. If one or more policies do require invalidating the connection manager 112, the connection manager 112 may proceed to 406 to deregister from the device driver 109. Then, at 409, the device driver 109 may stop monitoring the communication device 103 for network access data from the third-party connection manager 118. The connection manager 112 may then stop operation at 412. However, if at 403 the one or more policies do not require invalidating the connection manager 112, the connection manager 112 may proceed to 415, at which the connection manager 112 may determine whether it is capable of invalidating the third-party connection manager 118 that is attempting to configure the network interfaces 106, 228.
If at 415 the connection manager 112 determines that it can invalidate the third-party connection manager 118 attempting to configure the network interfaces 106, 228, then the connection manager 112 can invalidate the third-party connection manager 118 at 418 so that the third-party connection manager 118 can no longer access or configure the network interfaces 106, 228 through the device driver 109. Then, the connection manager 112 may proceed to 403 above. However, if at 415 the connection manager 112 determines that it cannot invalidate the third-party connection manager 118 attempting to configure the network interfaces 106, 228, then the connection manager 112 may notify the user interface 115 that the third-party connection manager 118 may or should be manually invalidated, and the user interface 115 can display this to the user. Then, the connection manager 112 may stop operation at 412.

FIGS. 5A-5B show a flowchart representation of a method 500 for monitoring and managing connection manager activity when a third-party connection manager 118 is registered with the device driver. Generally, the connection manager 112 may register with the device driver 109 when the operating system of the communication device 103 is initialized. However, if the third-party connection manager 118 has already registered with the device driver 109 before the connection manager 112 attempts to register with the device driver 109, the third-party connection manager 118 may be deregistered so that the connection manager 112 can successfully register with the device driver 109.

After starting at 501, the connection manager 112 may determine at 503 whether the third-party connection manager 118 has registered with the device driver 109. At 503, if the third-party connection manager 118 is not registered with the device driver 109, the connection manager 112 may register with the device driver 109 at 506.
Then, at 509, the device driver 109 may start monitoring the communication device 103 for network access data from the third-party connection manager 118. Then, at 512, the connection manager 112 may continue to operate.

However, if the third-party connection manager 118 is registered with the device driver 109 at 503, the connection manager 112 may proceed to 515 to determine whether the one or more policies allow the third-party connection manager 118 to be invalidated. If the policy does not allow the third-party connection manager 118 to be invalidated, then at 518 the connection manager 112 may inform the user interface 115 that the third-party connection manager 118 may or should be manually invalidated by the user. The user interface 115 may display the notification to the user. Next, at 527, the connection manager 112 may determine whether the user has invalidated the third-party connection manager 118. If the user has invalidated the third-party connection manager 118, the connection manager 112 may proceed to 506 above. However, if the user has not invalidated the third-party connection manager 118 at 527, the connection manager 112 may proceed to 530, where the connection manager 112 does not register with the device driver 109. The connection manager 112 may then halt operation at 533.

At 515, if the one or more policies do allow the third-party connection manager 118 to be invalidated, then at 521 the connection manager 112 can determine whether it can invalidate the third-party connection manager 118. If at 521 the connection manager 112 determines that it cannot invalidate the third-party connection manager 118, then the connection manager 112 can proceed to 518 above.
However, if at 521 the connection manager 112 determines that it can invalidate the third-party connection manager 118, then at 524 the connection manager 112 can invalidate the third-party connection manager 118 so that the third-party connection manager 118 cannot configure or access the network interfaces 106, 228 through the device driver 109. Then, the connection manager 112 may proceed to 506 above.

Although the embodiments of the present invention have been described in detail, it should be understood that various changes and modifications can be made within the spirit and scope of the present invention as described earlier in this document and defined in the appended claims. The corresponding structures, materials, acts, and equivalents of all means-plus-function elements in the appended claims, if any, are intended to include any structure, material, or act for performing the function in combination with the other claimed elements as specifically claimed.
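The decision tree of method 500 (steps 503 through 533) can be condensed into a small function. This is an illustrative sketch only; the function name, the dictionary used to stand in for the device driver's registration state, and the boolean parameters are all assumptions for illustration.

```python
def register_connection_manager(driver: dict,
                                policy_allows_invalidation: bool,
                                can_invalidate: bool,
                                user_invalidated: bool) -> str:
    """Return the outcome of a registration attempt (steps 503-533)."""
    if driver.get("registered") is None:             # 503: nobody registered
        driver["registered"] = "connection_manager"  # 506: register
        driver["monitoring"] = True                  # 509: start monitoring
        return "registered"
    # A third-party manager is already registered (503 -> 515).
    if policy_allows_invalidation and can_invalidate:   # 515 and 521
        driver["registered"] = "connection_manager"     # 524 then 506
        driver["monitoring"] = True
        return "invalidated third party and registered"
    # 518: ask the user to invalidate manually, then check at 527.
    if user_invalidated:
        driver["registered"] = "connection_manager"
        driver["monitoring"] = True
        return "registered after user action"
    return "not registered"                          # 530 and 533

driver = {"registered": "third_party"}
print(register_connection_manager(driver, True, True, False))
# prints "invalidated third party and registered"
```

Note how the happy path (no prior registration) short-circuits the policy checks entirely, mirroring the 503-to-506 branch of the flowchart.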
A method, apparatus, and system are disclosed. In one embodiment the method comprises determining whether a feature on a device is permitted to be enabled, determining whether a total number of enabled features on the device is less than or equal to a maximum number of allowable features on the device, and allowing the enabling of the device feature if the device feature is permitted to be enabled and the total number of enabled features on the device is less than or equal to the maximum number of allowable features on the device.
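The check described in the abstract reduces to a two-condition predicate. The sketch below is a minimal illustration of that logic; the function name and parameters are invented for clarity and are not part of the disclosure.

```python
def allow_enable(permitted: bool, total_enabled: int, max_features: int) -> bool:
    # Enable only if the feature is permitted AND the total number of
    # enabled features stays within the device's allowed maximum.
    return permitted and total_enabled <= max_features

print(allow_enable(True, 3, 4))   # True: permitted and within the limit
print(allow_enable(True, 5, 4))   # False: feature budget exceeded
print(allow_enable(False, 1, 4))  # False: feature not permitted
```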
1. A method comprising:
determining whether a feature on a device is permitted to be enabled;
determining whether the total number of features enabled on the device is less than or equal to the maximum number of features allowed on the device; and
allowing the device feature to be enabled where the device feature is permitted to be enabled and the total number of features enabled on the device is less than or equal to the maximum number of features allowed on the device.
2. The method of claim 1, further comprising programming the device with a permanent feature indicator value to allow subsequent determination of feature permission.
3. The method of claim 2, further comprising allowing the device feature to be enabled if the permanent feature indicator value permits the device feature to be enabled.
4. The method of claim 2, wherein programming the device with a permanent feature indicator value comprises storing a binary value in a register on the device, wherein the binary value indicates whether the device feature is allowed to be enabled.
5. The method of claim 1, further comprising programming a value for the maximum number of permanently allowed features into a register of the device.
6. The method of claim 1, further comprising:
associating each device feature with one or more credit values;
summing the associated credits of all enabled device features; and
disabling one or more device features in the event that the sum of the credits of all enabled device features is greater than the maximum number of allowed features.
7. The method of claim 6, wherein disabling one or more device features further comprises disabling all device features.
8. The method of claim 1, wherein the device further comprises an I/O controller hub.
9. The method of claim 1, wherein the device further comprises a memory controller hub.
10. The method of claim 1, wherein the device further comprises a central processing unit.
11. A method comprising:
specifying the maximum number of features allowed on a device; and
disabling one or more device features in the event that the total number of features enabled on the device exceeds the maximum number of features allowed on the device.
12. The method of claim 11, further comprising disabling all device features if the total number of features enabled on the device exceeds the maximum number of features allowed on the device.
13. The method of claim 11, further comprising storing information indicating the maximum number of allowed features in a register on the device.
14. The method of claim 11, wherein the device further comprises an I/O controller hub.
15. The method of claim 11, wherein the device further comprises a memory controller hub.
16. The method of claim 11, wherein the device further comprises a central processing unit.
17. A method comprising:
programming values in one or more feature permission registers on a device to specify whether one or more features on the device are permitted to be enabled, wherein each feature permission register value is associated with an individual device feature;
programming a value in a feature count register on the device to specify the maximum number of features allowed on the device;
enabling each corresponding device feature when the device is initialized, in the case where the corresponding feature permission register value permits the feature to be enabled;
counting the total number of enabled device features during device initialization; and
disabling all device features in the event that the total number of features enabled on the device exceeds the maximum number of features allowed on the device.
18. The method of claim 17, wherein device initialization further comprises a boot sequence of the device.
19. The method of claim 17, wherein the device further comprises an I/O controller hub.
20. The method of claim 17, wherein the device further comprises a memory controller hub.
21. The method of claim 17, wherein the device further comprises a central processing unit.
22. A device comprising:
circuitry that determines whether a feature on the device is permitted to be enabled;
circuitry that determines whether the total number of features enabled on the device is less than or equal to the maximum number of features allowed on the device; and
circuitry that allows the device feature to be enabled if the device feature is permitted to be enabled and the total number of features enabled on the device is less than or equal to the maximum number of features allowed on the device.
23. The device of claim 22, further operable to store a binary value in a register on the device, wherein the binary value indicates whether operation of the device feature is allowed to be enabled.
24. The device of claim 22, further operable to:
specify the maximum number of features allowed on the device; and
store information indicating the maximum number of features allowed on the device in a register.
25. The device of claim 24, further comprising circuitry to disable all device features if the total number of features enabled on the device exceeds the maximum number of features allowed on the device.
26. The device of claim 22, further comprising an I/O controller hub.
27. The device of claim 22, further comprising a memory controller hub.
28. The device of claim 22, further comprising a central processing unit.
29. A system comprising:
a bus;
a processor coupled to the bus; and
a chipset coupled to the bus, the chipset comprising:
circuitry that determines whether a feature on the device is permitted to be enabled;
circuitry that determines whether the total number of features enabled on the device is less than or equal to the maximum number of features allowed on the device; and
circuitry that allows the device feature to be enabled if the device feature is permitted to be enabled and the total number of features enabled on the device is less than or equal to the maximum number of features allowed on the device.
30. The system of claim 29, further comprising circuitry that stores a binary value in a register on the device, wherein the binary value indicates whether operation of the device feature is enabled.
31. The system of claim 29, further comprising circuitry that stores information indicating the maximum number of allowed features in a register on the device.
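The credit-based variant of claims 6 and 7 (each feature carries one or more credit values; if the summed credits of all enabled features exceed the allowed maximum, features are disabled, and in claim 7 all of them are) can be illustrated with a short sketch. The function name, the feature names, and the credit values below are invented for illustration.

```python
def check_credits(enabled: set, credits: dict, max_allowed: int) -> set:
    """Return the set of features that remain enabled after the credit check."""
    total = sum(credits[f] for f in enabled)
    if total > max_allowed:
        return set()          # claim 7 behavior: disable all device features
    return set(enabled)

# Hypothetical features and credit weights, not from the source.
credits = {"hd_audio": 2, "power_mgmt": 1, "raid": 3}
print(check_credits({"hd_audio", "power_mgmt"}, credits, 4))  # within budget
print(check_credits({"hd_audio", "raid"}, credits, 4))        # all disabled
```

Weighting features by credits rather than counting them lets a vendor make one expensive feature consume the same budget as several cheap ones, while keeping the same single comparison against a programmed maximum.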
Configurable Feature Selection Mechanism

Technical Field

The invention relates to programming and selecting features on a device.

Background

Inventory forecasting, inventory management, and stock keeping unit (SKU) management costs are a huge burden for large hardware technology companies. The ability of a hardware company to meet customer requirements for the individual features and combinations of features of each piece of hardware it manufactures is limited by manufacturing restrictions on the number of hardware SKUs that the company can support. For example, chipsets often have many possible feature combinations, and each combination currently requires a different hardware SKU. Customers must maintain multiple boards for each of the chip's unique hardware SKUs. This also forces customers to maintain unique motherboard wiring and manage inventory for each different SKU of the chipset. The additional hardware SKUs impose associated financial burdens and contribute to inventory management risks and complexity. Hardware companies are currently unable to support multiple alternative configurable features on a single physical hardware SKU. Therefore, it would be beneficial to have a single physical hardware SKU that can support multiple alternative configurable features. This would allow hardware companies to capture value from those features and combinations that cannot be supported using existing SKU methods due to cost or inventory complexity constraints.

Brief Description of the Drawings

The invention is illustrated by way of example, and the accompanying drawings do not limit the invention, wherein like reference numerals denote like parts:

FIG. 1 is a block diagram of one embodiment of a computer system.
FIG. 2 is a circuit diagram of an embodiment of a feature selection mechanism.
FIG. 3 is a circuit diagram of another embodiment of a feature selection mechanism.
FIG. 4 is an example of the results of a feature selection mechanism in one embodiment.
FIG. 5 is a flowchart of one embodiment of a process for enabling features on a device.
FIG. 6 is a flowchart of one embodiment of a process for initially configuring a device to allow feature activation.
FIG. 7 is a flowchart of one embodiment of a process for utilizing a feature selection register and a feature permission indicator to enable a feature on a device.
FIG. 8 is a flowchart of one embodiment of a process for determining whether a device feature count exceeds a maximum allowed feature count.

Detailed Description

Embodiments of an efficiently configurable feature selection mechanism are disclosed. In the following description, many specific details are set forth. It is understood, however, that these embodiments may be practiced without these specific details. In other instances, well-known components, specifications, and protocols have not been discussed in detail so as not to obscure the present invention.

FIG. 1 is a block diagram of one embodiment of a computer system. The computer system may include a processor 100, a memory controller hub (MCH) 102, and an I/O controller hub (ICH) 108. The MCH 102 and ICH 108 may comprise a chipset. The processor 100 may be coupled to the MCH 102 via a host bus. The MCH 102 may be coupled to system memory 104. In different embodiments, the system memory 104 may be synchronous dynamic random access memory (SDRAM), double data rate SDRAM (DDR-SDRAM), Rambus DRAM (RDRAM), or one of many other main system memory formats. The MCH 102 may also be coupled to a graphics module 106. In one embodiment, the graphics module may be an accelerated graphics port (AGP) graphics card. The ICH 108 may be coupled to an I/O bus 110, a hard disk drive 112, a keyboard controller 114, and a mouse controller 116.
In various embodiments, the ICH 108 may also be coupled to any number of I/O devices, buses, and/or controllers, such as a redundant array of independent disks (RAID) controller, a peripheral component interconnect (PCI) bus, or a universal serial bus (USB). In another embodiment, the ICH 108 may also have multiple internal features, such as internal high-definition audio capabilities and power management features used on mobile platforms to save battery life.

In one embodiment, the ICH 108 may have a programmable feature permission indicator (FPD) 118 for determining whether a feature is allowed to be enabled during system initialization. In one embodiment, the FPD may be a 1-bit value located in a register within the ICH 108. In one embodiment, the 1-bit FPD value in the register is programmable only once and is then permanently hard-wired to the programmed value. In this embodiment, the value can be hard-wired during programming by coupling a fuse to the register bit line (with its associated bit value) and leaving the fuse closed or blown open, according to the desired bit value, during initial programming.

In one embodiment, if the bit value associated with the FPD 118 is permanently programmed to be deselected (i.e., permission to enable the feature is denied), the feature associated with the FPD 118 is permanently disabled. Also in this embodiment, if the bit value associated with the FPD 118 is permanently programmed to be selected (i.e., permission to enable the feature is granted), the feature associated with the FPD 118 can be enabled during subsequent system initialization. In one embodiment, if the FPD 118 selects the feature, the feature is enabled when the system is booted after a power loss. In another embodiment, if the feature is selected, the feature is enabled only on an initialization that follows disabling of the battery backup that preserves the real-time clock function.
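The write-once behavior described for the FPD can be sketched in software; this is a minimal model of a fuse-backed, one-time-programmable bit (the class name and interface are illustrative assumptions, not from the patent):

```python
class OtpBit:
    """Model of a one-time-programmable (fuse-backed) register bit.

    The bit powers up unprogrammed; the first write "blows the fuse"
    and fixes the value permanently, mirroring how the FPD is
    hard-wired after its initial programming.
    """

    def __init__(self):
        self._value = None  # unprogrammed

    def program(self, value):
        if self._value is not None:
            raise PermissionError("FPD bit is already hard-wired")
        if value not in (0, 1):
            raise ValueError("bit value must be 0 or 1")
        self._value = value

    def read(self):
        # An unprogrammed bit is treated here as deselected (0).
        return self._value if self._value is not None else 0


fpd = OtpBit()
fpd.program(1)        # select the feature: permission granted
print(fpd.read())     # -> 1
try:
    fpd.program(0)    # a second write is rejected
except PermissionError:
    print("locked")
```

Once programmed, every subsequent read returns the same value, just as the hard-wired fuse value persists across system initializations.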
In various embodiments, the ICH 108 may have a feature enable register (FER) 120, which is programmed during system boot by the basic input/output system (BIOS), by software, or by another programming mechanism to enable each feature. In one embodiment, a hardware strap can permanently program the FER 120 with a value. A hardware strap is a bit signal transmitted on a pin of the device; it sets certain bits in the hardware based on the logic value of the bit signal at a certain time during initialization (i.e., during boot).

In one embodiment, each feature is associated with 1 bit in the FER 120. In another embodiment, certain features are associated with multiple bits within the FER 120 to allow multiple levels of functionality for each feature. In one embodiment, there are multiple features associated with the ICH 108. In this embodiment, each FPD 118 value is stored in the FPD 118 register, and the FER 120 has a corresponding bit (and associated value) for each FPD 118 value. The feature selection mechanism then compares the FPD 118 values with the FER 120 values to determine which features are selected and then enabled in the system. Thus, if the FPD 118 value of a specific feature is selected (that is, the feature is allowed to be enabled), the corresponding FER 120 value of the same feature can be modified during system initialization. Accordingly, in one embodiment, a feature is enabled if both its FPD 118 value and the corresponding FER 120 value are logic 1. The FPD 118 and FER 120 logic circuitry in the ICH 108 will be referred to as the software feature selector (SFS), because it can enable and disable each feature during system initialization using the BIOS, software, or another programming mechanism.

FIG. 2 is a circuit diagram of an embodiment of a feature selection mechanism. Initially, the original function (i.e., feature) disable value (input 202) is input to a logical OR gate 212 to perform an OR operation with the SFS output (output 210).
Therefore, if the function disable value (input 202) is a logic 1, the function is automatically disabled, because the logical OR gate 212 will output a 1. Once the initial FPD programming (described above) is completed, the specific FPD value (input 204) associated with the feature is input into the SFS 200. The FPD value (input 204) is input to a logical NAND gate 208 to perform a NAND operation with the corresponding FER value (input 206). The FER value (input 206) can be modified during system initialization to enable or disable the feature. The SFS output (output 210) is input to the logical OR gate 212. Finally, a feature selection mechanism output value 214 is output from the logical OR gate 212. Therefore, in this embodiment, the feature is enabled only if the FPD value (input 204) and the corresponding FER value (input 206) are both logic 1 and the function disable value (input 202) is logic 0. The results of the feature selection mechanism output value 214 are shown in Table 1.

Table 1. Feature selection mechanism results of the embodiment of FIG. 2

FIG. 3 is a circuit diagram of another embodiment of a feature selection mechanism. In this embodiment, the FPD value (input 304) input into the SFS 300 determines whether the SFS 300 is enabled or disabled. Initially, the original function (i.e., feature) disable value (input 302) is input to a logical OR gate 314 to perform an OR operation with the SFS output (output 312). Therefore, if the function disable value (input 302) is a logic 1, the function is automatically disabled, because the logical OR gate 314 will output a 1. Once the initial FPD programming (described above) is completed, the specific FPD value (input 304) associated with the feature is input into the SFS 300. The FPD value (input 304) is input to a logical AND gate 310 to perform an AND operation with the corresponding FER value (input 306), which is inverted by the inverter 308.
The FER value (input 306) can be modified during system initialization to enable or disable the feature. The SFS output (output 312) is input to the logical OR gate 314. Finally, a feature selection mechanism output value 316 is output from the logical OR gate 314. As described above, in this embodiment the SFS 300 is effectively enabled and disabled by the FPD value (input 304). Therefore, if an FPD value of logic 0 is input (input 304), the SFS 300 is disabled, and the function disable value (input 302) then controls whether the related feature is enabled or disabled. The results of the feature selection mechanism output value 316 are shown in Table 2.

Table 2. Feature selection mechanism results of the embodiment of FIG. 3

Returning to FIG. 1, in one embodiment, the ICH 108 may have a programmable feature count indicator (FCD) field 122. In this embodiment, the FCD 122 can be set to a value equal to the maximum number of features that the ICH 108 can simultaneously enable. In one embodiment, the FCD 122 may be represented by a value located in a register within the ICH 108. In one embodiment, the value in this register can only be programmed once, and is then permanently hard-wired to the programmed value. In one embodiment, the value can be hard-wired by coupling a fuse to each register bit line (each with an associated bit value) and leaving each fuse closed or blown open, according to the desired bit value, during the initial programming of the register. In one embodiment, the FCD 122 value may be a 3-bit value that can represent a feature count from 0 to 7. In other embodiments, the number of bits in the FCD 122 value will be equal to the number of bits required for the FCD value to count all features associated with the ICH 108.

In one embodiment, the FCD 122 value may be utilized to limit the number of selected features on the ICH 108.
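The gate-level behavior described for the circuits of FIGS. 2 and 3 above can be captured in a few lines; this is a sketch of the logic as described (with 0 at the output meaning the feature is enabled), not production code:

```python
def fig2_output(disable, fpd, fer):
    """FIG. 2: SFS output 210 = NAND(FPD 204, FER 206);
    final output 214 = OR(disable 202, SFS output).
    Output 0 = feature enabled, 1 = feature disabled."""
    sfs = 0 if (fpd and fer) else 1  # NAND gate 208
    return disable | sfs             # OR gate 212

def fig3_output(disable, fpd, fer):
    """FIG. 3: SFS output 312 = AND(FPD 304, NOT(FER 306));
    final output 316 = OR(disable 302, SFS output).
    With FPD = 0 the SFS is disabled and the output simply
    follows the function disable value."""
    sfs = fpd & (1 - fer)  # AND gate 310, inverter 308 on FER
    return disable | sfs   # OR gate 314

# Exhaustive truth tables (the results summarized in Tables 1 and 2):
for disable in (0, 1):
    for fpd in (0, 1):
        for fer in (0, 1):
            print(disable, fpd, fer, "->",
                  fig2_output(disable, fpd, fer),
                  fig3_output(disable, fpd, fer))
```

In the FIG. 2 variant the feature is enabled (output 0) only when the disable input is 0, the FPD is 1, and the FER is 1; in the FIG. 3 variant, an FPD of 0 hands control back to the function disable input.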
Therefore, in this embodiment, during system initialization the FCD 122 value is compared with the total number of selected features (that is, the number of FPD values at logic 1); if the total number of selected features is greater than the FCD 122 value, all features are disabled. In another embodiment, during system initialization the FCD 122 value is compared with the total number of enabled features (that is, the number of FER 120 bits at logic 1); if the total number of enabled features is greater than the FCD 122 value, all features are disabled. In this embodiment, the system can be initially programmed to allow all features (i.e., each FPD value is initially programmed to a logic 1 to select all features), and the number of features enabled at each subsequent system initialization is then limited. For example, if the ICH 108 has three allowed features (RAID, SCSI, and USB) and the FCD 122 is hard-wired to a value of 2, then two of these three features may be selected during system initialization (i.e., RAID and SCSI, RAID and USB, or SCSI and USB), but not all three. Therefore, in this embodiment, custom programming may be allowed during system initialization, while all features are still disabled if the allowed feature count is exceeded.

In another embodiment, during system initialization the FCD 122 value is compared with the total number of enabled features (that is, the number of FER 120 bits at logic 1); if the total number of enabled features is greater than the FCD 122 value, a certain number of features are disabled so that the total number of set FER 120 bits is less than or equal to the FCD 122 value. In one embodiment, the set of features in the ICH 108 is prioritized and features are disabled in order of priority.

In one embodiment, a feature is associated with more than one FER 120 bit value. In this embodiment, different features on the ICH 108 may be assigned values in a differentiated manner.
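The count-limit embodiments above can be sketched as follows; the feature names and the dict-based interface are illustrative assumptions, not from the patent:

```python
def enforce_feature_count(fer_bits, fcd_value, priority_order=None):
    """Sketch of the FCD 122 count limit.

    fer_bits: dict mapping feature name -> FER bit (1 = enabled).
    fcd_value: maximum number of simultaneously enabled features.
    priority_order: optional list, lowest-priority feature first.
        If omitted, exceeding the limit disables ALL features (the
        first embodiment); otherwise features are disabled in
        priority order until the count fits (the second embodiment).
    """
    result = dict(fer_bits)
    if sum(result.values()) <= fcd_value:
        return result
    if priority_order is None:
        # Exceeding the allowed count disables every feature.
        return {name: 0 for name in result}
    for name in priority_order:
        if sum(result.values()) <= fcd_value:
            break
        result[name] = 0  # disable lowest-priority features first
    return result

# Example from the text: RAID, SCSI, and USB with FCD hard-wired to 2.
print(enforce_feature_count({"RAID": 1, "SCSI": 1, "USB": 1}, 2))
# -> {'RAID': 0, 'SCSI': 0, 'USB': 0}  (3 enabled > 2, so all disabled)
```

With a priority order supplied instead, only enough low-priority features are disabled to bring the enabled count within the FCD value.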
For example, a RAID feature may be considered twice as valuable as a SCSI feature. Therefore, a SCSI feature may have one associated FER 120 bit, while a RAID feature may have two separate FER 120 bits associated with it. In this embodiment, the number of FER 120 bits associated with each feature gives the feature a certain feature credit value. Therefore, in this embodiment, during system initialization, the total number of credits associated with all enabled features is summed and the result is compared with the FCD 122 value. If the total number of credits is greater than the FCD 122 value, all features are disabled. In another embodiment, if the total number of credits is greater than the FCD 122 value, one or more features are disabled.

FIG. 4 is an example of the results of a feature selection mechanism in one embodiment. The FPD register 400 that stores the FPD bit values can be permanently programmed to select certain features that can subsequently be enabled by the FER 402. The enabled features have been both "selected" (i.e., a logic 1 value of the FPD bit) and "enabled" (i.e., a logic 1 value of the corresponding FER 402 bit). Thus, in this example, the enabled features are PCI, USB, and Serial Advanced Technology Attachment (SATA), which are represented by a logic 1 in the resulting arrow 404. In addition, the FCD 406 is represented by a 3-bit value. In this example, the 3-bit value is 1-0-1 in binary, or 5 in decimal. Therefore, for both FCD embodiments (described above), these features remain enabled, because neither the FPD register 400 nor the FER 402 has a number of logic 1 bits greater than five.

FIG. 5 is a flowchart of one embodiment of a process for enabling features on a device. This process is performed by processing logic, which may include hardware (circuits, dedicated logic, etc.), software (such as software running on a general-purpose computer system or a dedicated machine), or a combination of the two. Referring to FIG.
5, the process begins with processing logic determining whether a feature on a device is allowed to be enabled or disabled (processing block 500). Next, if the processing logic determines that the feature is not allowed to be enabled or disabled, the processing logic does not allow (i.e., prohibits) the device feature from being enabled or disabled (processing block 506). Otherwise, if the processing logic determines that the feature is allowed to be enabled or disabled, the processing logic determines whether the total number of features enabled on the device is less than the maximum number of features allowed on the device (processing block 502). If the processing logic determines that the total number of features enabled on the device is greater than the maximum number of features allowed on the device, the processing logic does not allow the device feature to be enabled or disabled (processing block 506). Otherwise, if the processing logic determines that the total number of features enabled on the device is less than the maximum number of features allowed on the device, the processing logic allows the device feature to be enabled or disabled (processing block 504).

FIG. 6 is a flowchart of one embodiment of a process for initially programming a device to allow feature enablement. This process is performed by processing logic, which may include hardware (circuits, special logic, etc.), software (such as software running on a general-purpose computer system or a special-purpose machine), or a combination of the two. Referring to FIG. 6, the process begins with processing logic programming the device's FPD (processing block 600). In one embodiment, each FPD is associated with a device feature. If the FPD value is a logic 1, processing logic allows the associated device feature to be enabled. Otherwise, if the FPD value is a logic 0, processing logic prohibits enabling the associated device feature.
In one embodiment, the FPD value can only be programmed once, and is then permanently hard-wired to the programmed value.

The process continues with processing logic programming the device's FCD (processing block 602). In one embodiment, the FCD can be set to a value equal to the maximum number of features that the device can enable simultaneously. In one embodiment, the value in this register can only be programmed once, and is then permanently hard-wired to the programmed value.

FIG. 7 is a flowchart of one embodiment of a process for enabling a feature on a device using a feature enable register (FER) and a feature permission indicator (FPD). This process is performed by processing logic, which may include hardware (circuits, special logic, etc.), software (such as software running on a general-purpose computer system or a special-purpose machine), or a combination of the two. Referring to FIG. 7, the process begins with processing logic determining whether the FER requests that the related device feature be enabled (processing block 700). In one embodiment, the processing logic determines whether the value at the bit position associated with the relevant device feature in the FER is a logic 1 or a logic 0. If the value is a logic 0, there is no request to enable the feature and the process is complete. Otherwise, if the value is a logic 1, processing logic determines whether the FPD allows the device feature to be enabled (processing block 702). In one embodiment, the processing logic determines whether the value of the FPD associated with the relevant device feature is a logic 1 or a logic 0. If the value is a logic 1, processing logic enables the device feature (processing block 704). Otherwise, if the value is a logic 0, processing logic does not enable the device feature (processing block 706).

FIG. 8 is a flowchart of an embodiment of a process for determining whether a device feature count exceeds a maximum allowed feature count.
This process is performed by processing logic, which may include hardware (circuits, special logic, etc.), software (such as software running on a general-purpose computer system or a special-purpose machine), or a combination of the two. Referring to FIG. 8, the process begins with processing logic determining the total number of features enabled on a device (processing block 800). Next, the process continues with processing logic determining the value of the FCD (processing block 802). Processing logic then compares the total number of enabled features with the FCD value (processing block 804). If the total number of enabled features does not exceed the FCD value, processing logic allows these features to remain enabled (processing block 806). Otherwise, if the total number of enabled features exceeds the FCD value, processing logic disables all of these features (processing block 808).

Many of the embodiments cited above use the ICH as an example of the device in question. Nonetheless, the device cited in the above embodiments can be any type of device with modifiable features, such as an MCH, a processor, or any other type of integrated circuit device. Furthermore, in some embodiments, the FPD, FER, and FCD values are not stored on the device with the modifiable features; instead, they are stored in a second device or in non-volatile memory within the system in which the related device is located.

Thus, embodiments of an efficiently programmable feature selection mechanism are disclosed. These embodiments have been described with reference to specific exemplary embodiments herein. However, it will be apparent to those skilled in the art that various modifications and changes can be made to these embodiments without departing from the spirit and scope of the embodiments described herein. Accordingly, the description and drawings are to be regarded as illustrative rather than restrictive.
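The overall decision flow of FIG. 5 above can be summarized as a small predicate; this is a sketch, and the function name and argument names are illustrative:

```python
def feature_change_allowed(permission_granted, enabled_count, max_features):
    """FIG. 5 decision flow: a device feature may be enabled or
    disabled only if permission is granted (processing block 500)
    and the total number of enabled features is less than the
    maximum allowed on the device (processing block 502);
    otherwise the change is prohibited (processing block 506)."""
    if not permission_granted:           # block 500 -> block 506
        return False
    return enabled_count < max_features  # block 502 -> 504 or 506

print(feature_change_allowed(True, 2, 5))   # True: change allowed
print(feature_change_allowed(True, 5, 5))   # False: count limit reached
print(feature_change_allowed(False, 0, 5))  # False: permission denied
```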
In described examples, a leadframe (100) includes a frame (101) of sheet metal in a first planar level, where the frame (101) has metallic leads (110) and a first metallic pad (120) extending inward from the frame (101), and the first metallic pad (120) is tied to the frame (101) by first metallic straps (120a). The leadframe (100) further has a second metallic pad (130) in a second planar level parallel to and spaced from the first planar level, where the second metallic pad (130) is tied by second metallic straps (132) to the frame (101). Also, the leadframe (100) has a third metallic pad (140) in a third planar level parallel to and spaced from the second planar level and additively from the first planar level, where the third metallic pad (140) is tied by third metallic straps (131) to the second metallic pad (130).
CLAIMS
What is claimed is:
1. A leadframe comprising: a frame of sheet metal in a first planar level, the frame having metallic leads and a first metallic pad extending inward from the frame, the first pad tied to the frame by first metallic straps; a second metallic pad in a second planar level parallel to and spaced from the first level, the second pad tied by second metallic straps to the frame; and a third metallic pad in a third planar level parallel to and spaced from the second level and additively from the first level, the third pad tied by third metallic straps to the second pad.
2. The leadframe of Claim 1 wherein the second pad is further tied by second straps to one or more leads.
3. The leadframe of Claim 1 wherein the third pad is further tied by third straps to the frame.
4. The leadframe of Claim 1 wherein the third pad is further tied by third straps to one or more leads.
5. The leadframe of Claim 1 further including configurations of the first and second straps suitable to accommodate bending and stretching beyond the limit of simple elongation based upon inherent metal characteristics.
6. The leadframe of Claim 5 wherein the configurations are selected from a group including bent geometry, curved geometry, and toroidal geometry.
7. The leadframe of Claim 1 wherein one or more pads in the first, second, and third levels are suitable to serve as mount pads for semiconductor chips or passive electronic components.
8. The leadframe of Claim 1 wherein the third pad surface facing away from the first pad is solderable.
9.
A semiconductor device comprising:a leadframe including:a frame of sheet metal in a first planar level, the frame having metallic leads and a first metallic pad extending inward from the frame, the first pad tied to the frame by first metallic straps;a second metallic pad in a second planar level parallel to and spaced from the first level, the second pad tied by second metallic straps to the frame; anda third metallic pad in a third planar level parallel to and spaced from the second level and additively from the first level, the third pad tied by third metallic straps to the second pad, the third pad surface facing away from the first pad being solderable;at least one semiconductor chip attached to at least one of the pads and connected to adjacent leads; anda package encapsulating the at least one chip, the leads, the first and second pad, and portions of the third pad, while leaving the solderable third pad surface un-encapsulated and exposed to the ambient.
SEMICONDUCTOR PACKAGE HAVING A LEADFRAME WITH MULTI-LEVEL ASSEMBLY PADS

[0001] This relates generally to semiconductor devices and processes, and more particularly to a structure and fabrication method of leadframes with assembly pads situated at more than one level.

BACKGROUND
[0002] A metallic leadframe for semiconductor devices provides an assembly pad as stable support for firmly positioning the semiconductor chip, and further offers a multitude of leads for bringing electrical conductors into close proximity of the chip. The remaining gaps between the tips of the leads and the chip terminals are typically bridged by thin wires (commonly copper or gold, about 25 μm diameter).

[0003] For reasons of easy and cost-effective manufacturing, single-piece leadframes are commonly manufactured from flat thin sheets of metal, such as copper (typical thickness range 120 to 250 μm). The desired shape of the leadframe is etched or stamped from the original flat sheet. For most purposes, the length of a typical lead is considerably longer than its width.

[0004] For technical reasons of wire bonding, it is often desirable to position the chip mount pad in a horizontal plane about 10 to 20 μm downset from the starting plane of the leads. In some devices, the height difference may be greater. Consequently, those straps which connect the chip mount pad with the frame have to be bent to overcome the required height difference between the two parallel planes.

[0005] Semiconductor devices which dissipate high power or are used in high-frequency telecommunications often need to be packaged so that the package allows the leadframe to expose the chip assembly pad at the bottom surface of the package in order to facilitate direct attachment of the pad to external heat sinks. In these devices, the distance between the horizontal plane of the chip mount pad and the horizontal plane of the leads (measured along a line at right angles with the planes) increases significantly.
In packages with a final thickness of about 1.0 mm, the distance may be between 400 and 500 μm. This challenge can usually be met by elongation while staying within the limits of material characteristics (such as, for copper, less than about 8%), if the distance is bridged by the strap at an inclination angle of 30° or less.

SUMMARY
[0006] In described examples, a leadframe includes a frame and multiple leads in a first horizontal plane, a first chip mount pad in a second horizontal plane, a second chip mount pad in a third horizontal plane, and multiple straps connecting the chip mount pads and the frame. The straps have a geometry designed so that the straps can accommodate bending and stretching in the forming process beyond the limit of simple elongation based upon inherent material characteristics. At least one of the chip mount pads extends to and through the encapsulating plastic package.

BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 shows a perspective top view of a leadframe according to an embodiment, with semiconductor chips attached to pads at different planar levels.
[0008] FIG. 2 illustrates a perspective bottom view of the leadframe of FIG. 1, with semiconductor chips attached to pads at different planar levels.
[0009] FIG. 3 displays a perspective top view of a leadframe according to another embodiment, with semiconductor chips attached to pads at different planar levels.
[0010] FIG. 4 depicts a perspective bottom view of the leadframe of FIG. 3, with semiconductor chips attached to pads at different planar levels.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0011] For many device families with chips encapsulated in standard-thickness packages (> 1.0 mm), the market in electronics equipment and applications calls for devices where packages expose the chip assembly pad for effective heat dissipation, even for large chip areas and sometimes multi-chip assembly. Also, the packages should have a small footprint.
To expose chip mount pads in packages of more than about 1.0 mm thickness, the direct distance between the horizontal plane of the chip mount pad and the horizontal plane of the leads increases up to 260% over the respective distance in "thin" packages (into the 1100 to 1200 μm range). As a consequence, for standard-thickness packages a copper strap elongation of more than 8% would be required, which is beyond the elastic limit of copper leadframe materials and would result in segment cracking and breaking.

[0012] Similar difficulties arise in packages when the direct distance between the planes of the chip pad and the leads has to be bridged at angles steeper than 30°, such as 45°. Often, this steep angle is a consequence of the desire to shrink the outline of a package, i.e., the area it consumes when mounted on a printed wiring board, or to accommodate an extra-large chip pad in a fixed package. Here again, a copper strap elongation of more than 8% would be required, which is beyond the elastic limit of copper leadframe materials.

[0013] To solve the footprint problem, example embodiments use a methodology to distribute the assembly pads over more than one level and thus widen the concept of three-dimensional leadframes.

[0014] FIG. 1 illustrates in top view an example embodiment, a leadframe generally designated 100. The same embodiment is shown in FIG. 2 in bottom view. A leadframe 300 as another example embodiment is illustrated in FIG. 3 in top view and in FIG. 4 in bottom view. Leadframes 100 and 300 serve several needs of semiconductor devices and their operation simultaneously.

[0015] Leadframe 100 comprises several portions; one portion is a frame 101, which is made of flat sheet metal. The planar level, or plane, in which frame 101 is situated is referred to herein as the first planar level; frame 101 operates in two dimensions. Respectively, leadframe 300 comprises several portions; one portion is a frame 301 in a first planar level.
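The elongation limits discussed in paragraphs [0011] and [0012] above can be explored numerically. The following is an idealized sketch, not the patent's analysis, and the strap length used is an assumption for illustration: if a flat strap of total length L has a middle segment bent to inclination angle θ so that it bridges a vertical downset d, the formed path is longer than the flat one by d·tan(θ/2), giving a fractional elongation of d·tan(θ/2)/L.

```python
import math

def strap_elongation(strap_length_um, downset_um, angle_deg):
    """Idealized strap-forming model (an assumption, not the patent's
    analysis): a flat strap of total length L is formed so that an
    inclined segment at the given angle bridges a vertical downset d.
    The formed path exceeds the flat length by d*tan(angle/2), so the
    fractional elongation is d*tan(angle/2)/L."""
    theta = math.radians(angle_deg)
    stretch_um = downset_um * math.tan(theta / 2.0)
    return stretch_um / strap_length_um

# Hypothetical numbers: a 2.0 mm strap bridging a 450 um downset.
print(f"30 deg: {strap_elongation(2000, 450, 30):.1%}")  # ~6.0%, within the ~8% copper limit
print(f"45 deg: {strap_elongation(2000, 450, 45):.1%}")  # ~9.3%, beyond the elastic limit
```

Under this simple model, a 30° inclination stays within the roughly 8% elastic limit of copper while a 45° inclination exceeds it, consistent with the difficulty described for steeper bridging angles.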
For manufacturing leadframes in mass production, the complete pattern of frame, pads, leads and support structures is first stamped or etched out of the original flat thin sheet of metal; example thicknesses are between about 0.25 and 0.15 mm. The first planar level is the plane of the starting sheet of metal. Starting materials include, but are not limited to, copper, copper alloys, aluminum, iron-nickel alloys, and Kovar™.

[0016] Referring to FIGS. 1 and 2, frame 101 has a plurality of leads 110 and a first assembly pad 120 extending inward from the frame; leads 110 and pad 120 are in the same first planar level, or plane, as frame 101. First pad 120 is attached to frame 101 by first strap 120a. Based on the fabrication process, leads 110 and first pad 120 are made of the same metal as frame 101. First pad 120 may be suitable for assembling a semiconductor chip 121 or a passive electronic component. However, other embodiments may have more than one assembly pad. Other devices may have no assembly pad 120 in the first planar level. A function of assembly pads 120 is to provide stable support for firmly positioning one or more semiconductor chips or passive electronic components. Because the leadframe including the pad is made of electrically conducting material, the pad may be biased, when needed, to any electrical potential required by the network involving the semiconductor device, especially the ground potential.

[0017] Referring to FIGS. 3 and 4, frame 301 analogously has a plurality of leads 310 and a first assembly pad 320 extending inward from the frame; leads 310 and pad 320 are in the same first planar level, or plane, as frame 301, and are made of the same metal as frame 301. First pad 320 is attached to frame 301 by first strap 320a. First pad 320 may be operable to assemble and thereafter support a semiconductor chip 321 or a passive electronic component. However, other embodiments may have more than one assembly pad.
Other devices may have no assembly pad 320 in the first planar level.

[0018] A function of the plurality of conductive leads 110 and 310 is to bring various electrical lines into close proximity of the chip. The remaining gaps between the tips of the leads and the terminals of the chips are usually bridged by thin wires, individually bonded to the chip terminals and the leads 110 and 310. In FIG. 1, a few of the bonding wires are shown as ball and stitch bond connections and are designated 150.

[0019] As FIGS. 1 and 2 indicate, example embodiment 100 further includes a second metallic pad 130 in a second planar level, which is parallel to the first level yet spaced from it by a distance. Similarly, in FIGS. 3 and 4, embodiment 300 includes a second metallic pad 330 in a second planar level, which is parallel to the first level yet spaced from it by a distance. It should be mentioned that herein the distance between the plane of the first level and the plane of the second level is to be considered along an axis vertical to both planes. In FIG. 1, second pad 130 is sized to offer support for a chip 131 and is connected to a lead 111 of leadframe 100 by second strap 132 in order to enable access to a discrete input/output bias for attached chip 131 or passive component, as provided by lead 111. With the help of strap 131, this discrete bias can further be transmitted to third pad 140.

[0020] In contrast, in FIG. 3 second pad 330 is designed solely as a support pad for strap 332 at the second planar level; strap 332 is attached to input/output lead 311. Pad 330 in turn is connected to third pad 340 by strap 331; consequently, third pad 340 can be biased at the potential of lead 311. The advantage of introducing interim support level 330 is that without level 330, strap 332 would have to be designed overly long for connecting third pad 340 to lead 311. Overly long straps are difficult to handle in the manufacturing processes.
Alternatively, straps like strap 332 can be designed in a configuration suitable to accommodate bending and stretching beyond the limit of simple elongation based upon inherent material characteristics. Such configurations may be selected from a group including bent geometry, curved geometry, and toroidal geometry.[0021] As shown by the embodiment in FIGS. 1 and 2, inside frame 101 is a third metallic pad 140 at a third planar level parallel to and spaced from the second level. Because the distances between levels are additive, the third level is even further distant from the first level than the second level. Preferably, third pad 140 is so far removed from the first level of frame 101 that the bottom surface 140a is exposed from a future device package 160 and can thus be used, when having a solderable surface metallurgy, to be solder-attached directly to a board or a heat sink. Third pad 140 may be sized to offer support for one or more semiconductor chips or passive components. In the example embodiment of FIGS. 1 and 2, a vertical stack of two chips 141 and 142 is attached on third pad 140, taking advantage of the deep downset of pad 140 relative to the original first level of the frame.[0022] Analogously, the embodiment depicted in FIGS. 3 and 4 displays a third metallic pad 340 at a third planar level parallel to and spaced from the second level, which accommodates pad 320. Because the distances between levels are additive, the third level is even further distant from the first level than the second level. Preferably, third pad 340 is so far removed from the first level of frame 301 that the bottom surface 340a is exposed from a future device package and can thus be used, when having a solderable surface metallurgy, to be solder-attached directly to a board or heat sink. Third pad 340 may be sized to offer support for one or more semiconductor chips or passive components. In the example embodiment of FIG.
3, a vertical stack of two chips 341 and 342 is attached on third pad 340, taking advantage of the deep downset of pad 340 relative to the original first level of the frame. Also, pad 340 has an addition 343, which expands the area of the third pad available for assembling a chip or a passive component 344.[0023] For manufacturing leadframes like 100 and 300 in mass production, the complete pattern of chip pads, leads and support structures is first stamped or etched from the original flat thin sheet of metal. The thicknesses of the starting sheet metal are preferably between about 0.25 and 0.15 mm. Starting materials include, but are not limited to, copper, copper alloys, aluminum, iron-nickel alloys, and Kovar™. In the stamping or etching process, an individual lead and strap of the leadframe takes the form of a thin metallic strip with its particular geometric shape determined by the design. For most purposes, the length of an example lead and strap is considerably longer than its width. [0024] Then, major parts of the leadframe are clamped in one horizontal plane, while an outside force is applied to the chip pads in order to press them into their new horizontal planes. The straps supporting the chip pads have to absorb this force by stretching; they are "pressed" into their final geometrical shape.[0025] An outside force, applied along the length of the strap, can stretch the strap in the direction of the length, while the dimension of the width is only slightly reduced, so that the new shape appears elongated. For elongations small compared to the length, and up to a limit, called the elastic limit given by the material characteristics, the amount of elongation is linearly proportional to the force. Beyond that elastic limit, the strap suffers irreversible changes to its inner strength.[0026] As the perspective views in FIGS. 
2 and 4 illustrate, the lengths of straps such as 131, 132, and 332 are within the quoted elastic range of elongation (approximately 7 to 8% of original strap length). If more elongation than this elastic limit is required, the needed elongation may be obtained by linearizing a designed-in bending. The contribution of linearizing can be obtained when a topologically long body is first designed and stamped out so that it contains curves, bendings, meanderings, or similar non-linearities. Examples are configurations selected from a group including bent geometry, curved geometry, and toroidal geometry. By applying force, at least part of the non-linearity is stretched or straightened so that afterwards the body is elongated.[0027] An example of the linearizing of designed-in bending is indicated in FIG. 4 by the strap designated 335. Strap 335 originally had a curved shape indicated by the dashed contours 335a.[0028] Another embodiment is a semiconductor device, such as illustrated in FIGS. 1 and 2. The device includes leadframe 100, semiconductor chips 121, 131, 141, and 142, and a package 160. As discussed above, leadframe 100 comprises a frame 101 of sheet metal in a first planar level, wherein the frame has metallic leads 110 and a first metallic pad 120 extending inward from the frame; the first pad is tied to the frame by first metallic straps 120a. Further, the leadframe includes a second metallic pad 130 in a second planar level parallel to and spaced from the first level; second pad 130 may be tied by second metallic straps to the frame. A third metallic pad 140 is in a third planar level parallel to and spaced from the second level and additively from the first level; third pad 140 is tied by third metallic straps 131 to the second pad. Preferably, the third pad surface 140a facing away from the first pad is solderable.
For example, when pad 140 is made of copper, surface 140a may include a layer of tin or may have a sequence of thin layers made of nickel, palladium, and - optionally - gold.[0029] The terminals of the semiconductor chips are connected to respective leads before the assembly is encapsulated in a package 160. For clarity purposes, FIGS. 1 and 2 show the package as being made of transparent material and in dashed outlines. Preferably the package is made of an epoxy-based compound, which is opaque and encapsulates the chips and bonding wires, the leads, the first and second pads, and portions of the third pad, while it leaves the solderable third pad surface (140a) un-encapsulated and thus exposed to the ambient.[0030] After completing the encapsulation process, the packaged unit undergoes the trimming and forming process steps. In the trimming process, frame 101 is removed so that the individual leads 110 are freed up. In the forming process, the discrete leads 110 may be bent or otherwise formed to obtain the desired outline so that the completed packaged device can be inserted into or attached to a board.[0031] Example embodiments apply to products using any type of semiconductor chip, discrete or integrated circuit. The material of the semiconductor chip may comprise silicon, silicon germanium, gallium arsenide, gallium nitride, or any other semiconductor or compound material used in integrated circuit manufacturing.[0032] As another example, example embodiments apply to devices with one or more semiconductor chips assembled on the leadframe by attachment and electrical connection.[0033] As yet another example, example embodiments apply to leadframes with pad planar levels used to various degrees for accommodating chips. In some devices, the pads of all levels may be populated by chips. In other devices, the pad of only one level (or a few levels) may be so populated. In some devices, a pad may have more than one chip assembled.
In other devices, one or more pads may be unpopulated.[0034] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
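The linear force-elongation relationship and the elastic limit invoked for the straps in paragraphs [0025] and [0026] are the standard Hooke's-law relations; written in conventional notation (the symbols below are not used in the source):

```latex
% Hooke's law for a strap of original length L_0, cross-section A,
% and elastic modulus E under an axial force F:
\Delta L = \frac{F\,L_{0}}{A\,E},
\qquad
\varepsilon = \frac{\Delta L}{L_{0}},
% The deformation remains reversible only while the strain stays
% below the elastic limit quoted in the text:
\quad
\varepsilon \le \varepsilon_{\mathrm{el}} \approx 0.07\ \text{to}\ 0.08 .
```

Beyond $\varepsilon_{\mathrm{el}}$, the strap deforms plastically, which is why the text resorts to linearizing designed-in bends rather than simple stretching when larger downsets are needed.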
Methods, systems, and devices for fabrication of memory cells are described. An electrode layer may have an initial thickness variation after being formed. The electrode layer may be smoothened prior to forming additional layers of a memory cell, thus decreasing the thickness variation. The subsequent layer fabricated may have a thickness variation that may be dependent on the thickness variation of the electrode layer. By decreasing the thickness variation of the electrode layer prior to forming the subsequent layer, the subsequent layer may also have a decreased thickness variation. The decreased thickness variation of the subsequent layer may impact the electrical behavior of memory cells formed from the subsequent layer. In some cases, the decreased thickness variation of the subsequent layer may allow for more predictable voltage thresholds for such memory cells, thus increasing the read windows for the memory cells.
1. A device comprising: a first access line for a memory cell; a first electrode for the memory cell, the first electrode disposed above the first access line and comprising carbon oxide; and an active component for the memory cell, the active component in contact with the first electrode and comprising a chalcogenide.
2. The device of claim 1, wherein the carbon oxide is oxidized based at least in part on a chemical mechanical planarization (CMP) process associated with the first electrode.
3. The device of claim 2, wherein the carbon oxide is oxidized based at least in part on a breach of a vacuum seal associated with the CMP process.
4. The device of claim 1, wherein the active component for the memory cell comprises a selection component, a storage component, or a combination thereof for the memory cell.
5. The device of claim 1, further comprising: a second electrode for the memory cell; and a second active component for the memory cell, the second active component in contact with the second electrode and comprising a chalcogenide.
6. The device of claim 5, wherein: the first electrode comprises a first surface in contact with the active component, the first surface having a first roughness; and the second electrode comprises a second surface in contact with the second active component, the second surface having a second roughness greater than the first roughness.
7. The device of claim 5, wherein: the active component comprises a first chalcogenide material; and the second active component comprises a second chalcogenide material different from the first chalcogenide material.
8. The device of claim 5, wherein the active component and the second active component comprise the same chalcogenide material.
9. The device of claim 5, wherein the second electrode comprises carbon oxide.
10. The device of claim 5, further comprising: a third electrode for the memory cell, the third electrode in contact with the second active component; and a second access line for the memory cell.
11. The device of claim 10, wherein the third electrode comprises carbon oxide.
12. The device of claim 1, wherein the first electrode comprises two sublayers, and the sublayer in contact with the active component comprises carbon.
13. A method comprising: forming a metal layer for an access line; forming an electrode layer for a memory cell over the metal layer, wherein a surface of the electrode layer has an initial surface roughness; polishing the surface of the electrode layer to change the surface from having the initial surface roughness to having a subsequent surface roughness that is less than the initial surface roughness; and after the polishing, forming an active layer in contact with the surface of the electrode layer, wherein a thickness uniformity of the active layer is based at least in part on the subsequent surface roughness.
14. The method of claim 13, wherein polishing the surface of the electrode layer comprises: applying a chemical mechanical planarization (CMP) process to the surface of the electrode layer.
15. The method of claim 13, wherein: forming the electrode layer comprises depositing an electrode material by a deposition process; and polishing the surface of the electrode layer comprises breaking a vacuum seal associated with the deposition process.
16. The method of claim 13, further comprising: forming a second electrode layer for the memory cell over the active layer; and forming a second active layer on the second electrode layer.
17. The method of claim 16, further comprising: polishing a surface of the second electrode layer before forming the second active layer to change the surface of the second electrode layer from having a second initial surface roughness to having a second subsequent surface roughness that is less than the second initial surface roughness.
18. The method of claim 16, further comprising: polishing a surface of the active layer before forming the second electrode layer; or polishing a surface of the second active layer.
19. The method of claim 16, wherein a memory component for the memory cell comprises at least a portion of the second active layer.
20. The method of claim 16, wherein: the active layer comprises a first chalcogenide material; the second active layer comprises a second chalcogenide material different from the first chalcogenide material; and the electrode layer and the second electrode layer each comprise carbon.
21. The method of claim 16, further comprising: forming a third electrode layer for the memory cell over the second active layer; forming a second metal layer for a second access line of the memory cell, the second metal layer above the third electrode layer; and polishing a surface of the third electrode layer before forming the second metal layer.
22. A method comprising: forming a metal layer for an access line; forming a first electrode layer comprising carbon for a memory cell over the metal layer; reducing a surface roughness of an upper surface of the first electrode layer by applying a chemical mechanical planarization (CMP) process to the upper surface of the first electrode layer; after applying the CMP process, forming a chalcogenide layer in contact with the upper surface of the first electrode layer; and forming a second electrode layer comprising carbon for the memory cell over the chalcogenide layer.
23. The method of claim 22, wherein: forming the first electrode layer comprises depositing an electrode material by a deposition process; and applying the CMP process to the upper surface of the first electrode layer comprises breaking a vacuum seal associated with the deposition process.
24. The method of claim 22, further comprising: reducing a surface roughness of an upper surface of the second electrode layer by applying a second CMP process to the upper surface of the second electrode layer; and forming a second chalcogenide layer in contact with the upper surface of the second electrode layer, wherein a thickness of the second chalcogenide layer is based at least in part on the reduced surface roughness of the upper surface of the second electrode layer.
25. The method of claim 22, further comprising: forming a second chalcogenide layer in contact with the upper surface of the second electrode layer, wherein a thickness of the second chalcogenide layer is based at least in part on an initial surface roughness of the upper surface of the second electrode layer.
Fabrication of Electrodes for Memory Cells

Cross Reference
This patent application claims priority to U.S. Patent Application No. 16/001,795, entitled "Fabrication of Electrodes for Memory Cells," filed on June 6, 2018 by Zheng et al., which is assigned to the present assignee and is incorporated herein by reference in its entirety.

Background
The following relates generally to the fabrication of memory cells, and more specifically to the fabrication of electrodes for memory cells. Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, and digital displays. Information is stored by programming different states of a memory device. For example, a binary device has two states, often denoted as a logic "1" or a logic "0". In other systems, more than two states may be stored. To access the stored information, a component of the electronic device may read, or sense, the stored state of at least one of the memory devices. To store information, a component of the electronic device may write, or program, a state in the memory device. Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. For example, non-volatile memory such as FeRAM may maintain its stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices such as DRAM may lose their stored state over time unless they are periodically refreshed by an external power source.
FeRAM may use a device architecture similar to that of volatile memory but may have non-volatile properties due to the use of a ferroelectric capacitor as a storage device. FeRAM devices may therefore have improved performance compared to other non-volatile and volatile memory devices. In some memory devices, the electrical performance of a memory cell (e.g., one or more threshold voltages of the memory cell) may depend at least in part on the physical dimensions of the memory cell. There may be a need for solutions that reduce variations in physical dimensions, and therefore variations in the electrical performance, of the memory cells associated with a memory device.

Brief Description of the Drawings
FIGS. 1A through 1C illustrate examples of fabrication techniques in accordance with embodiments of the present disclosure. FIGS. 2A and 2B illustrate examples of fabrication techniques in accordance with embodiments of the present disclosure. FIGS. 3 through 5 illustrate methods for fabricating a memory cell in accordance with embodiments of the present disclosure.

Detailed Description
Some memory devices may be formed at least in part by forming a stack of various materials (e.g., a stack of materials may be formed, and additional process steps may be applied to the stack). In some cases, the layers of the stack may be formed sequentially, so forming the stack may involve forming a second layer of the stack on, or on top of, a first, previously formed layer of the stack. The method of forming the first layer may result in that layer having a rough surface and an associated thickness variation. If the second layer of the stack is formed in contact with the uneven first layer, the thickness variation of the first layer may propagate upward to the second layer, thereby also causing a thickness variation in the second layer. Thickness variations may affect the performance of either layer, both layers, and/or components formed from them.
For example, material properties (e.g., the threshold voltage of the material or layer) may depend on the thickness of the layer when the material in a given layer is exposed to various voltages. Therefore, it may be desirable to minimize the thickness variation of a previous layer in order to maximize the thickness uniformity of a subsequent layer. According to the teachings herein, fabricating a memory cell may include smoothing (e.g., polishing) a previous layer before forming the next layer. For example, a first electrode layer may be fabricated using a technique that results in thickness variation across the layer. In some cases, polishing the electrode layer before forming the active layer may reduce the thickness variation in the electrode layer, thereby reducing the thickness variation in the active layer. Because the electrode layer is polished before the active layer is formed, the resulting active layer exhibits less thickness variation than an active layer formed without an intervening polishing step. The active layer may therefore have more predictable and uniform properties. For example, the active layer may exhibit similar behavior across multiple memory cells when each memory cell is exposed to the same voltage (for example, memory cells formed from the active layer may have more uniform threshold voltages). Accordingly, these and other fabrication techniques described herein may improve the reliability and performance of memory cells. The features of the present disclosure introduced above are further described below in the context of the example fabrication techniques of FIGS. 1A, 1B, 1C and FIGS. 2A, 2B. These and other features of the present disclosure are further illustrated and described with reference to the flowcharts of FIGS.
3 through 5, which relate to the fabrication of electrodes for memory cells. FIGS. 1A-1C are schematic diagrams of intermediate memory array structures, illustrating a method of fabricating a memory cell stack with a smoothed electrode layer at various stages of fabrication. Referring to the intermediate array structure 100-a of FIG. 1A, according to some examples, region 105-a may include aspects of an array structure for a first memory cell stack, and region 105-b may include aspects of an array structure for a second memory cell stack. In some cases, the first memory cell stack and the second memory cell stack may ultimately be configured as (e.g., fabricated as) two different memory cells, and the data stored in the first memory cell may be independent of the data stored in the second memory cell. Although only two regions 105-a and 105-b are shown, those of ordinary skill in the art will understand that many such regions may in practice be formed. In some cases, fabricating a memory cell stack may include forming a conductive material 110 over a substrate (not shown). The conductive material 110 may be used to form one or more access lines, for example, word lines or bit lines of memory cells corresponding to the region 105-a and/or the region 105-b. The method may additionally include forming an electrode material 115 over the conductive material 110. The electrode material 115 may be used to form one or more electrodes (e.g., to couple an access line to the active components of a memory cell), such as electrodes corresponding to the regions 105-a and 105-b, respectively. The electrode material 115 may include carbon. In some cases, the electrode material 115 may be composed of two sublayers (not shown), and may therefore be referred to as a bilayer electrode, where the first sublayer is in contact with the conductive material 110 and the second sublayer is formed above the first sublayer.
In this case, the second (upper) sublayer may contain carbon and may be referred to as a carbon-based material. The electrode material 115 may be formed, for example, by a deposition technique such as physical vapor deposition (PVD), chemical vapor deposition (CVD), or atomic layer deposition (ALD), among other deposition techniques. Each layer may initially be formed as a blanket layer covering the entire surface area of the die or substrate (e.g., wafer). In some instances, the deposition technique (e.g., a PVD, CVD, or ALD technique) used to form the electrode material 115 may cause the top (e.g., exposed) surface of the electrode material 115 to become undesirably rough due to, for example, sputtering or other aspects of the related deposition technique. The roughness of the top surface of the electrode material 115 may cause some portions of the electrode material 115 to have a thickness different from other portions. For example, the thickness T1 of the electrode material 115 may be greater than the thickness T2, the thickness T2 may be greater than the thickness T3, and the thickness T3 may be greater than the thickness T4. The electrode material thicknesses T1-T4 may therefore vary within a single memory stack region 105 or between different memory stack regions 105-a and 105-b. That is, in some cases, the thickness of the electrode material 115 in one part of the region 105-a may be greater than the thickness of the electrode material 115 in another part of the region 105-a (i.e., T1 > T2). In other cases, the thickness of the electrode material 115 in one region 105-a may be greater than the thickness of the electrode material 115 in a different region 105-b (i.e., T1, T2 > T3, T4). Referring now to the intermediate array structure 100-b of FIG. 1B, according to some examples, the method may include smoothing the electrode material 115.
The smoothing process may smooth the upper surface of the electrode material 115, thereby reducing the thickness variation within the electrode material 115 (and thus also increasing its thickness uniformity). The smoothing process may therefore reduce the thickness variation of the electrode material 115 within a single memory stack region 105. For example, the thickness of the electrode material 115 may be the same or substantially the same as the thickness T5 across the entire region 105-c, whereas the electrode material thickness of the region 105-a may have varied more before smoothing (i.e., thickness T1 > thickness T2). The smoothing process may also reduce the variation of the electrode material thickness between regions 105. For example, the thickness T5 of the electrode material in the region 105-c may be the same or substantially the same as the thickness T6 of the electrode material in the region 105-d, whereas before smoothing the thickness of the electrode material 115 in the region 105-a was greater than the thickness of the electrode material 115 in the region 105-b (i.e., T1, T2 > T3, T4). The smoothing process may involve polishing the electrode material 115 using, for example, chemical mechanical planarization (CMP). In some cases, the intermediate array structure 100-a may undergo a CMP process to form the intermediate array structure 100-b. For example, CMP may be used to polish the top surface of the electrode material 115 to form the electrode material layer 115 of the intermediate array structure 100-b. The polishing process may not change the bulk properties of the electrode material layer 115. For example, the relevant properties of the electrode material layer 115 may remain unchanged by the polishing process. That is, after the CMP process, the electrode material layer 115 may exhibit behavior, when exposed to different voltages and currents, similar to the behavior it would exhibit without the CMP process.
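As a rough numerical illustration (not part of the patent), the effect of the CMP step on thickness variation can be sketched by treating polishing as truncation of the deposited surface at a target plane; all thickness values and the roughness magnitude below are hypothetical:

```python
import random

random.seed(0)

def deposit(n_sites, nominal, roughness):
    """Model a deposited electrode layer whose local thickness varies randomly."""
    return [nominal + random.uniform(-roughness, roughness) for _ in range(n_sites)]

def cmp_polish(thicknesses, target):
    """Model CMP as planarization: material above the target plane is removed."""
    return [min(t, target) for t in thicknesses]

def spread(ts):
    """Peak-to-valley thickness variation (analogous to T1-T4 differing)."""
    return max(ts) - min(ts)

electrode = deposit(n_sites=1000, nominal=20.0, roughness=2.0)  # nm, hypothetical
polished = cmp_polish(electrode, target=18.5)                   # nm, hypothetical

print(spread(polished) < spread(electrode))  # True: variation shrinks after CMP
```

This simple truncation model only captures the peak-removal aspect of CMP; a real process also involves slurry chemistry and pad dynamics that are outside the scope of this sketch.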
In some examples, performing CMP may involve breaking a vacuum seal that may be associated with the fabrication process (e.g., the PVD, CVD, or ALD process) used to form the electrode material layer 115, and breaking the vacuum seal may expose the top (e.g., exposed) surface of the electrode material 115 to oxygen for at least a period of time. The absence of the vacuum seal may therefore cause oxidation to occur at the electrode material layer 115 of the intermediate array structure 100-b. Additionally or alternatively, the CMP process itself may cause oxidation to occur at the electrode material layer 115 of the intermediate array structure 100-b. Therefore, in some cases, the electrode material layer 115 may ultimately include carbon oxide. Referring to the intermediate array structure 100-c of FIG. 1C, according to some examples, fabricating the memory cell stack may additionally include forming an active component layer 120 over the polished electrode material 115. In some examples, the active component layer 120 may be used to form one or more selector components (e.g., selector diodes) or memory components. In some cases, the oxidation of the electrode material layer 115 may be localized, or more extensive, at or near the surface of the electrode material layer 115 that is closest to (e.g., in contact with) the active component layer 120. In some cases, the thickness uniformity of the active component layer 120 may be improved due to the polishing of the electrode material 115. That is, any variation in the thickness of the electrode material 115 may cause an inverse variation in the thickness of the active component layer 120. For example, if the electrode material 115 in the region 105-e is thicker than the electrode material 115 in the region 105-f, the active component layer 120 in the region 105-e may be thinner than the active component layer 120 in the region 105-f. The active component layer 120 may be formed of a chalcogenide material.
Where the chalcogenide material of the active component layer 120 is used to form one or more selector components, the chalcogenide material of the active component layer 120 may remain amorphous; it may be in a high-resistance state (e.g., an insulating state) when the voltage difference across the chalcogenide material is below a threshold magnitude, and in a low-resistance state (e.g., a conductive state) when the voltage difference across the chalcogenide material is at or above the threshold magnitude. In this case, the threshold magnitude may comprise the switching threshold voltage of the chalcogenide material of the active component layer 120. Where the chalcogenide material of the active component layer 120 is used to form one or more memory components, the chalcogenide material of the active component layer 120 may be converted between an amorphous state and a crystalline state. In some cases, there may be a large resistance contrast between the crystalline state and the amorphous state of the active component layer 120. A material in a crystalline state may have atoms arranged in a periodic structure, which may result in a relatively low resistance (e.g., a set state). In contrast, a material in an amorphous state may have no, or relatively few, periodic atomic structures, which may result in a relatively high resistance (e.g., a reset state).
The difference in resistance between the amorphous and crystalline states of a material may be large; for example, the resistance of a material in an amorphous state may be one or more orders of magnitude greater than the resistance of the material in its crystalline state. In some cases where the chalcogenide material of the active component layer 120 is used to form one or more memory components, in order to set a region 105 of the active component layer 120 to a low-resistance state, the region 105 may be heated by passing a current through it. Heating the region 105 of the active component layer 120 to an elevated temperature (but below its melting temperature) may crystallize the region 105 of the active component layer 120 and form the low-resistance state. The current may be generated by applying a voltage to the region 105, where the applied voltage is based on a first threshold voltage of the region 105. For example, if the region 105 is in the reset state, current may not flow through the region 105 unless the applied voltage is greater than the first threshold voltage. In some other cases where the chalcogenide material of the active component layer 120 is used to form one or more memory components, in order to set a region 105 of the active component layer 120 to a high-resistance state, the region 105 may be heated above its melting temperature.
By setting the voltage across the region 105 of the active component layer 120 (and therefore the current flowing through the region 105 of the active component layer 120) to a second threshold voltage (which may raise the temperature of the chalcogenide material beyond its melting temperature), and then removing the voltage/current abruptly enough (for example, applying the voltage/current for only a relatively short period of time so that crystallization does not occur), the region 105 of the active component layer 120 may be switched from the crystalline state to the amorphous state. The switching threshold voltage of the active component layer 120, when used to form one or more selector components, and the first and second threshold voltages corresponding to the set and reset states of the material of the active component layer 120, when used to form one or more memory components, may depend on the thickness of the active component layer 120. In other words, a larger thickness may correspond to a larger threshold voltage, and a change in the thickness of the active component layer 120 may change the threshold voltage accordingly. In some cases, it may be desirable for the entire active component layer 120 to have a consistent threshold voltage. For example, it may be desirable for the threshold voltage within the region 105-e to remain uniform throughout the region 105-e, and for the threshold voltage in the region 105-e to be similar to the threshold voltage in another region 105-f. That is, it may be desirable for the standard deviation of the threshold voltage of the active component layer 120 to be small. Where the chalcogenide material of the active component layer 120 is used to form one or more selector components, a threshold voltage with a small standard deviation may provide benefits such as improved reliability and improved design tolerances for memory devices.
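The link between thickness spread and threshold-voltage spread noted above can be sketched numerically; the linear model and all values below are illustrative assumptions, not taken from the patent:

```python
import statistics

# Assumed linear model: threshold voltage grows with chalcogenide thickness.
K_V_PER_NM = 0.08   # hypothetical slope, volts per nm
V_OFFSET = 1.0      # hypothetical offset, volts

def vth(thickness_nm):
    """Threshold voltage under the assumed linear thickness dependence."""
    return V_OFFSET + K_V_PER_NM * thickness_nm

# Hypothetical active-layer thicknesses over unpolished vs. polished electrodes.
rough_nm = [20.0, 21.5, 18.7, 22.1, 19.2]
smooth_nm = [20.0, 20.2, 19.9, 20.1, 19.8]

sigma_rough = statistics.stdev(vth(t) for t in rough_nm)
sigma_smooth = statistics.stdev(vth(t) for t in smooth_nm)

print(sigma_smooth < sigma_rough)  # True: polished stack gives a tighter Vth spread
```

Under any monotone thickness-to-voltage mapping, shrinking the thickness distribution shrinks the threshold-voltage distribution, which is the mechanism behind the larger read/write window described in the text.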
In the case where the chalcogenide material of the active device layer 120 is used to form one or more memory devices, a threshold voltage with a small standard deviation can likewise provide benefits such as improved reliability and improved design tolerances for memory devices, including a larger or more reliable window between the first threshold voltage and the second threshold voltage (which may, for example, correspond to the read or write window of the memory cell containing the region 105). FIGS. 2A-2B are schematic diagrams of intermediate memory array structures, showing a method of manufacturing a memory cell stack with a smooth electrode layer depicted at various manufacturing stages. The memory array structures shown in FIGS. 2A-2B may correspond to the memory array structures described with reference to FIGS. 1A-1C after subsequent processing through additional manufacturing steps. For example, the conductive material 110 of FIGS. 1A-1C may correspond to the conductive material 210 of FIGS. 2A and 2B. In addition, the electrode material 115 of FIGS. 1A-1C may correspond to the electrode material 215 of FIGS. 2A and 2B. Referring to the intermediate array structure 200-a of FIG. 2A, according to some examples, manufacturing the memory cell stack may additionally include forming a second electrode material 225 over the first active component layer 220. In some cases, the second electrode material 225 may be a carbon-based material. The second electrode material 225 may be formed using techniques similar to those used for the first electrode material 215 (for example, PVD, CVD, or ALD). The formation technique of the second electrode material 225 may or may not produce a thickness variation similar to the thickness variation of the electrode material 115 seen in the intermediate array structure 100-a of FIG. 1A. 
That is, in some cases, the thickness of the second electrode material 225 as initially formed may vary within a single region 105 or between regions, for example, between the regions 105-g and 105-h (which may respectively correspond to, for example, the variation between the regions 105-a and 105-b described with reference to FIGS. 1A-1C). Fabricating the intermediate array structure 200-a may include an additional step of polishing the electrode material 225, for example using CMP, to achieve a more uniform thickness. In this case, the electrode material 225 may include oxidized carbon, because polishing the intermediate array structure 200-a outside a vacuum environment may expose the top of the second electrode material 225 to oxygen, and/or the polishing process itself may introduce oxidation. In some other cases, manufacturing the memory cell stack may not include polishing the second electrode material 225. In this case, the second electrode material 225 may not include carbon oxide. Referring to the intermediate array structure 200-b of FIG. 2B, according to some examples, manufacturing the memory cell stack may additionally include forming a second active component layer 230 over the second electrode material 225. The thickness of the second active component layer 230 may vary based on the thickness of the second electrode material 225. For example, if the electrode material in the region 105-i is thicker than the electrode material in the region 105-j, the second active component layer 230 in the region 105-i may be thinner and the second active component layer 230 in the region 105-j may be thicker. 
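One way to picture the inverse relationship just described (a thicker electrode in one region leaving a thinner active layer there) is to assume, purely for illustration, that the deposited stack planarizes toward a roughly fixed top height, so the active-layer thickness in each region compensates for the electrode thickness beneath it. The heights below are hypothetical.

```python
# Hypothetical sketch: if the stack planarizes toward a fixed top height,
# the active-layer thickness in each region compensates for the electrode
# thickness beneath it (thicker electrode -> thinner active layer).
TARGET_TOP_NM = 50.0  # assumed combined height of electrode + active layer

def active_layer_thickness(electrode_thickness_nm):
    return TARGET_TOP_NM - electrode_thickness_nm

region_i = active_layer_thickness(32.0)  # thicker electrode: thinner active layer
region_j = active_layer_thickness(28.0)  # thinner electrode: thicker active layer
assert region_i < region_j
```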
Alternatively, if the thickness of the second electrode material 225 is uniform across the regions 105, the thickness of the second active component layer 230 may also be uniform across the regions 105. In some examples, the second active component layer 230 may include cell materials to form, for example, one or more memory components or selector components for memory cells. The second active component layer 230 may be formed of a chalcogenide material. In some cases, the second active component layer 230 may include the same chalcogenide material as the active component layer 220 shown in FIG. 2A. In some other examples, the second active component layer 230 may include a chalcogenide material different from that of the active component layer 220 (for example, with a different stoichiometry). Still referring to FIG. 2B, according to some examples, fabricating the memory cell stack may additionally include forming a third electrode material 235 over the second active component layer 230. The third electrode material 235 may be formed using techniques similar to those used to form the electrode materials 215 and 225 (for example, PVD, CVD, or ALD). In some cases, the thickness variation and surface roughness produced by the formation technique of the electrode material 235 may be similar to those of the electrode material 115 in FIG. 1A. Manufacturing the intermediate array structure 200-b may optionally include polishing the third electrode material 235 to reduce its thickness variation, thereby reducing the surface roughness of the third electrode material 235. In the case where the third electrode material 235 is polished, the third electrode material 235 may contain carbon oxide, which may result from polishing the intermediate array structure 200-b in a non-vacuum environment, from oxygen exposure associated with breaking the vacuum seal, or from the polishing process itself. 
In some other cases, manufacturing the memory cell stack may not include polishing the third electrode material 235. In this case, the third electrode material 235 may not include carbon oxide. Therefore, memory devices manufactured in accordance with the techniques described herein may include layers that include carbon (e.g., carbon electrode layers), and all or any subset of such carbon-based layers may exhibit oxidation. In addition, such oxidation can be localized at or near the polished surface or can be more extensive; the polished surface may also be a surface exposed to oxygen in connection with polishing or other smoothing processes. Referring again to FIG. 2B, manufacturing the intermediate array structure 200-b may include forming a second conductive material 240 over the third electrode material 235. The second conductive material 240 may be used to form one or more access lines, such as bit lines or word lines for memory cells corresponding to the region 105-g and/or the region 105-h. In some cases, the forming method may optionally include etching the space between the regions 105-i and 105-j in the layers 220, 225, 230, and 235. This can form distinct memory cells in the regions 105-i and 105-j. Even when the space between the regions 105-i and 105-j is not etched, the two regions 105 can still form distinct memory cells. For example, a voltage applied to the active component 230 in the region 105-i may not propagate through the material of the active component 230 sufficiently to disturb (e.g., corrupt) the logic state stored in the region 105-j. In addition, in some examples, the second electrode layer (including the second electrode material 225) and the second active component layer 230 may be omitted, and the active component layer 120 may be configured as the storage element for a self-selecting memory cell. In some cases, the conductive material 110 or 210 may be smoothed before additional layers (e.g., the electrode material 115 or 215) are manufactured thereon. 
Smoothing the conductive material 110 and/or 210 can reduce the thickness variation of the conductive material, thereby reducing the thickness variation of any subsequent layer (for example, a layer including the electrode material 115 or 215) formed thereon. In addition, in some other cases, one or more of the active component layer 120 or the active component layer 230 may be smoothed before additional layers are manufactured thereon (for example, before the second electrode layer 225 is formed and/or before the third electrode layer 235 is formed). This additional smoothing of a surface of the active component layer 120 and/or the active component layer 230 (for example, the upper surface; the lower surface is smooth by virtue of the smoothing of the immediately underlying layer) can further reduce the thickness variation of the active component layer within or across the regions 105, and thus can further reduce the variation in one or more threshold voltages (for example, for set or reset operations) of the active component layer within or across the regions 105. Smoothing of the surface of the active component layer 120 or the active component layer 230 may include application of a CMP process. 
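The benefit of smoothing lower layers before depositing upper ones can be pictured with a toy roughness model. The add-in-quadrature combination rule and all numeric values below are assumptions made for this sketch only; the point is simply that smoothing at each level bounds the roughness carried into subsequent layers, whereas without smoothing the roughness accumulates.

```python
# Hypothetical sketch: smoothing (e.g., CMP) at each level limits the
# roughness carried into subsequent layers. The quadrature rule and the
# numbers are illustrative assumptions, not process data.
import math

DEPOSITION_ROUGHNESS_NM = 2.0   # roughness added by each deposition (assumed)
CMP_ROUGHNESS_NM = 0.3          # roughness remaining after CMP (assumed)

def grow_stack(num_layers, polish_each_layer):
    roughness = 0.0
    for _ in range(num_layers):
        # Assume uncorrelated roughness contributions add in quadrature.
        roughness = math.hypot(roughness, DEPOSITION_ROUGHNESS_NM)
        if polish_each_layer:
            roughness = min(roughness, CMP_ROUGHNESS_NM)
    return roughness

assert grow_stack(4, polish_each_layer=True) < grow_stack(4, polish_each_layer=False)
```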
Depending on the details of the CMP process, contamination of the active component layer (for example, chemical contamination) may occur, which may present a trade-off against the slight improvement in thickness uniformity. Although not shown for clarity and ease of description, it will be understood that the array structures shown may be formed on or under other layers (for example, on a substrate), and the other layers may include, among other things, various peripheral and supporting circuitry, for example, complementary metal oxide semiconductor (CMOS) transistors that form part of the column and row driver circuitry and sense amplifier circuitry, as well as sockets and wiring that connect such circuitry to the memory array through the aforementioned columns and rows. In addition, other layers may contain one or more memory arrays or array "levels": the structures shown in the examples of FIGS. 1A, 1B, 1C and 2A, 2B may correspond to one level of a memory array, and may be above or below any number of additional levels of memory arrays. FIG. 3 shows a flowchart illustrating a method 300 of manufacturing an electrode for a memory cell according to an embodiment of the present disclosure. The operations of the method 300 may be implemented according to various manufacturing techniques as described herein. For example, the operations of the method 300 may be implemented by the manufacturing techniques discussed with reference to FIGS. 1 and 2. At 305, a metal layer for an access line can be formed. The operation of 305 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 305. At 310, an electrode layer for a memory cell may be formed over the metal layer. In some examples, the surface of the electrode layer has an initial surface roughness. 
In some examples, the electrode layer may be formed by depositing an electrode material through a deposition process. The operation of 310 may be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 310. At 315, the surface of the electrode layer may be polished. In some examples, polishing can change the surface from having the initial surface roughness to having a subsequent surface roughness that is less than the initial surface roughness. In some instances, polishing can be accomplished by applying a CMP process to the surface of the electrode layer. In some cases, polishing the surface of the electrode layer may include breaking a vacuum seal associated with the deposition process. The operation of 315 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 315. At 320, an active layer may be formed after the polishing. In some examples, the active layer may be in contact with the surface of the electrode layer. The uniformity of the thickness of the active layer may be based on the subsequent surface roughness. The operation of 320 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 320. In some instances, an apparatus may use general-purpose or special-purpose hardware to perform the aspects of manufacturing described above. The apparatus may include features, means, or instructions for forming the metal layer for the access line. 
The apparatus may further include features, means, or instructions for forming an electrode layer for the memory cell over the metal layer, wherein the surface of the electrode layer has an initial surface roughness. The apparatus may also include features, means, or instructions for polishing the surface of the electrode layer to change the surface from having the initial surface roughness to having a subsequent surface roughness less than the initial surface roughness. The apparatus may additionally include features, means, or instructions for forming an active layer in contact with the surface of the electrode layer after the polishing, wherein the uniformity of the thickness of the active layer is based on the subsequent surface roughness. In some examples of the above method and apparatus, polishing the surface of the electrode layer may include applying a CMP process to the surface of the electrode layer. In some examples of the method and apparatus, forming the electrode layer may include depositing an electrode material via a deposition process. In some cases, polishing the surface of the electrode layer may include breaking a vacuum seal associated with the deposition process. Some examples of the methods and apparatuses described above may further include processes, features, means, or instructions for forming a second electrode layer for the memory cell over the active layer. Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for forming a second active layer over the second electrode layer. 
Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for polishing the surface of the second electrode layer before forming the second active layer, to change the surface of the second electrode layer from having a second initial surface roughness to having a second subsequent surface roughness that is less than the second initial surface roughness. Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for polishing the surface of the active layer before forming the second electrode layer. Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for polishing the surface of the second active layer. In some examples of the above methods and apparatuses, the storage component for the memory cell includes at least a part of the second active layer. In some examples of the above methods and apparatuses, the active layer may include a first chalcogenide material. In some examples, the second active layer may include a second chalcogenide material, the second chalcogenide material being different from the first chalcogenide material. In some examples of the above methods and apparatuses, both the electrode layer and the second electrode layer include carbon. Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for forming a third electrode layer for the memory cell over the second active layer. Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for forming a second metal layer for a second access line of the memory cell, the second metal layer being above the third electrode layer. 
Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for polishing the surface of the third electrode layer before forming the second metal layer. The flowchart shown in FIG. 4 illustrates a method 400 of manufacturing an electrode for a memory cell according to an embodiment of the present disclosure. The operations of the method 400 may be implemented according to various manufacturing techniques as described herein. For example, the operations of the method 400 may be implemented by the manufacturing techniques discussed with reference to FIGS. 1 and 2. At 405, a metal layer for an access line can be formed. The operation of 405 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1A, 1B, 1C and 2A, 2B may be used to perform aspects of the operation of 405. At 410, an electrode layer for a memory cell may be formed over the metal layer. In some examples, the surface of the electrode layer has an initial surface roughness. The operation of 410 may be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1A, 1B, 1C and 2A, 2B may be used to perform aspects of the operation of 410. At 415, the surface of the electrode layer can be polished. In some examples, polishing can change the surface from having the initial surface roughness to having a subsequent surface roughness that is less than the initial surface roughness. In some instances, polishing can be accomplished by applying a CMP process to the surface of the electrode layer. The operation of 415 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 415. At 420, an active layer may be formed after the polishing. 
In some examples, the active layer may be in contact with the surface of the electrode layer. The uniformity of the thickness of the active layer may be based on the subsequent surface roughness. The operation of 420 may be performed according to the methods described herein. In some instances, aspects of the operation of 420 may be performed using the manufacturing techniques discussed with reference to FIGS. 1 and 2. At 425, a second electrode layer for the memory cell may be formed over the active layer. The operation of 425 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 425. At 430, the surface of the second electrode layer may be polished before forming the second active layer. In some examples, polishing the surface of the second electrode layer can change that surface from having a second initial surface roughness to having a second subsequent surface roughness that is less than the second initial surface roughness. The operation of 430 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 430. At 435, a second active layer may be formed over the second electrode layer. The operation of 435 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1 and 2 may be used to perform aspects of the operation of 435. The flowchart shown in FIG. 5 illustrates a method 500 of manufacturing an electrode for a memory cell according to an embodiment of the present disclosure. The operations of the method 500 may be implemented according to various manufacturing techniques as described herein. 
For example, the operations of the method 500 may be implemented by the manufacturing techniques discussed with reference to FIGS. 1A, 1B, 1C and 2A, 2B. At 505, a metal layer for an access line can be formed. The operation of 505 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1A, 1B, 1C and 2A, 2B may be used to perform aspects of the operation of 505. At 510, a first electrode layer including carbon may be formed over the metal layer. In some cases, the first electrode layer may be used for a memory cell. In some examples, forming the first electrode layer may include depositing an electrode material through a deposition process. The operation of 510 may be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1A, 1B, 1C and 2A, 2B may be used to perform aspects of the operation of 510. At 515, the surface roughness of the upper surface of the first electrode layer can be reduced. In some examples, the upper surface roughness can be reduced by applying a CMP process to the upper surface of the first electrode layer. In some other examples, applying the CMP process to the upper surface of the first electrode layer may include breaking a vacuum seal associated with the deposition process. The operation of 515 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1A, 1B, 1C and 2A, 2B may be used to perform aspects of the operation of 515. At 520, a chalcogenide layer in contact with the upper surface of the first electrode layer may be formed after applying the CMP process. The operation of 520 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 
1A, 1B, 1C and 2A, 2B may be used to perform aspects of the operation of 520. At 525, a second electrode layer including carbon may be formed over the chalcogenide layer. In some examples, the second electrode layer can be used for the memory cell. The operation of 525 can be performed according to the methods described herein. In some instances, the manufacturing techniques discussed with reference to FIGS. 1A, 1B, 1C and 2A, 2B may be used to perform aspects of the operation of 525. In some instances, an apparatus may use general-purpose or special-purpose hardware to perform the described aspects of manufacturing. The apparatus may include features, means, or instructions for forming a metal layer for an access line and forming a first electrode layer including carbon for a memory cell over the metal layer. The apparatus may include features, means, or instructions for reducing the surface roughness of the upper surface of the first electrode layer by applying a CMP process to the upper surface of the first electrode layer. The apparatus may include features, means, or instructions for forming a chalcogenide layer in contact with the upper surface of the first electrode layer after applying the CMP process, and for forming a second electrode layer including carbon for the memory cell over the chalcogenide layer. Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for reducing the surface roughness of the upper surface of the second electrode layer by applying a second CMP process to the upper surface of the second electrode layer. Some examples of the above methods and apparatuses may further include processes, features, means, or instructions for forming a second chalcogenide layer in contact with the upper surface of the second electrode layer, wherein the thickness of the second chalcogenide layer may be based on the reduced surface roughness of the upper surface of the second electrode layer. 
Some examples of the above-mentioned methods and apparatuses may further include processes, features, means, or instructions for forming a second chalcogenide layer in contact with the upper surface of the second electrode layer, wherein the thickness of the second chalcogenide layer may be based on the initial surface roughness of the upper surface of the second electrode layer. It should be noted that the methods described above describe possible implementations; the operations and steps can be rearranged or otherwise modified, and other implementations are possible. In addition, embodiments from two or more of the methods can be combined. In some cases, a device, system, or apparatus manufactured according to various manufacturing techniques as described herein may include a first access line for a memory cell; a first electrode for the memory cell, the first electrode disposed above the first access line and including carbon oxide; and an active component for the memory cell, the active component being in contact with the first electrode and including a chalcogenide. In some examples of the aforementioned device, system, or apparatus, the carbon may be oxidized based at least in part on a CMP process associated with the first electrode. In some cases, the carbon may be oxidized based at least in part on the breaking of a vacuum seal associated with the CMP process or based at least in part on the CMP process itself. In some examples of the above-mentioned device, system, or apparatus, the active component for the memory cell may include a selector component of the memory cell, a storage component, or a combination thereof. In some examples, the device, system, or apparatus may further include a second electrode for the memory cell. 
The device, system, or apparatus may also include a second active component for the memory cell, where the second active component may be in contact with the second electrode and may include a chalcogenide. In some examples, the first electrode may have a first surface in contact with the active component, the first surface having a first roughness. In addition, the second electrode may include a second surface in contact with the second active component, wherein the second surface has a second roughness that may be greater than the first roughness. In some cases of the aforementioned devices, systems, or apparatuses, the active component may include a first chalcogenide material. In some examples, the second active component can include a second chalcogenide material, where the second chalcogenide material can be different from the first chalcogenide material. In some other examples, the active component and the second active component may include the same chalcogenide material. In some examples, the second electrode may include carbon oxide. In some cases, the first electrode includes two sublayers, where the sublayer in contact with the active component may include carbon. In some cases, the device, system, or apparatus described above may include a third electrode for the memory cell, the third electrode being in contact with the second active component. The device, system, or apparatus may further include a second access line for the memory cell. In some examples, the third electrode may include carbon oxide. The term "coupled" refers to a relationship between components that supports the flow of electrons between the components. This may include a direct connection between components or may include intermediate components. 
Components that are in electronic communication with or coupled to each other may actively exchange electrons or signals (for example, in an energized circuit), or may not actively exchange electrons or signals (for example, in a de-energized circuit), but may be configured and operable to exchange electrons or signals when the circuit is energized. As an example, two components that are physically connected via a switch (e.g., a transistor) are coupled regardless of the state of the switch (i.e., open or closed). The term "layer" as used herein refers to a stratum or sheet of a geometric structure. Each layer may have three dimensions (e.g., height, width, and depth) and may cover some or all of a surface. For example, a layer may be a three-dimensional structure in which two dimensions are greater than the third, such as a thin film. Layers can contain different elements, components, and/or materials. In some cases, a layer may consist of two or more sublayers. In some drawings, two dimensions of a three-dimensional layer are depicted for illustrative purposes. However, those skilled in the art will recognize that the layers are three-dimensional in nature. As used herein, the term "substantially" means that the modified feature (e.g., a verb or adjective modified by the term "substantially") need not be absolute but close enough to obtain the advantage of the feature. As used herein, the term "electrode" can refer to an electrical conductor, and in some cases, can be used as an electrical contact to a memory cell or another component of a memory array. An electrode may include a trace, wire, conductive line, conductive layer, or the like that provides a conductive path between elements or components of the memory array. A chalcogenide material may be a material or alloy containing at least one of the elements S, Se, and Te. The phase change materials discussed herein may be chalcogenide materials. 
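The definition above (a material or alloy containing at least one of S, Se, and Te) and the hyphenated composition notation used in this description can be expressed as a small illustrative check; this snippet is only a sketch of the convention, not part of the disclosure.

```python
# Illustrative sketch of the chalcogenide definition and the hyphenated
# composition notation (e.g., "Ge-Sb-Te" names the elements present
# without fixing stoichiometry).
CHALCOGENS = {"S", "Se", "Te"}

def elements(composition):
    # "Ge-Sb-Te" -> ["Ge", "Sb", "Te"]
    return composition.split("-")

def contains_chalcogen(composition):
    # Per the definition: at least one of the elements S, Se, and Te.
    return any(e in CHALCOGENS for e in elements(composition))

assert elements("Ge-Sb-Te") == ["Ge", "Sb", "Te"]
assert contains_chalcogen("In-Se")
assert contains_chalcogen("Ge-Te")
```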
Chalcogenide materials may include alloys of S, Se, Te, Ge, As, Al, Sb, Au, indium (In), gallium (Ga), tin (Sn), bismuth (Bi), palladium (Pd), cobalt (Co), oxygen (O), silver (Ag), nickel (Ni), or platinum (Pt). Example chalcogenide materials and alloys may include (but are not limited to) Ge-Te, In-Se, Sb-Te, Ga-Sb, In-Sb, As-Te, Al-Te, Ge-Sb-Te, Te-Ge-As, In-Sb-Te, Te-Sn-Se, Ge-Se-Ga, Bi-Se-Sb, Ga-Se-Te, Sn-Sb-Te, In-Sb-Ge, Te-Ge-Sb-S, Te-Ge-Sn-O, Te-Ge-Sn-Au, Pd-Te-Ge-Sn, In-Se-Ti-Co, Ge-Sb-Te-Pd, Ge-Sb-Te-Co, Sb-Te-Bi-Se, Ag-In-Sb-Te, Ge-Sb-Se-Te, Ge-Sn-Sb-Te, Ge-Te-Sn-Ni, Ge-Te-Sn-Pd, or Ge-Te-Sn-Pt. The hyphenated chemical composition notation, as used herein, indicates the elements contained in a particular compound or alloy and is intended to represent all stoichiometries involving the indicated elements. For example, Ge-Te may include GexTey, where x and y can be any positive integers. Other examples of variable resistance materials may include binary metal oxide materials or mixed valence oxides including two or more metals, such as transition metals, alkaline earth metals, and/or rare earth metals. Embodiments are not limited to one or more specific variable resistance materials associated with the memory elements of the memory cells. For example, other examples of variable resistance materials can be used to form memory elements and may include chalcogenide materials, colossal magnetoresistive materials, or polymer-based materials, among others. The devices discussed herein can be formed on a semiconductor substrate, such as silicon, germanium, a silicon-germanium alloy, gallium arsenide, gallium nitride, and the like. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or an epitaxial layer of semiconductor material on another substrate. 
The conductivity of the substrate, or of sub-regions of the substrate, can be controlled through doping using various chemical species including, but not limited to, phosphorus, boron, or arsenic. Doping can be performed during the initial formation or growth of the substrate, by ion implantation, or by any other doping method. The description set forth herein, in conjunction with the drawings, describes example configurations and does not represent all examples that can be implemented or that fall within the scope of the claims. The detailed description contains specific details for the purpose of providing an understanding of the described techniques. These techniques, however, can be practiced without these specific details. In some cases, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. In the drawings, similar components or features may have the same reference label. In addition, various components of the same type can be distinguished by following the reference label with a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label, irrespective of the second reference label. Also, as used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). In addition, as used herein, the phrase "based on" should not be interpreted as a reference to a closed set of conditions. For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. 
In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”

The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
An apparatus is disclosed. The apparatus comprises a primary power supply (PPS) configured to supply primary power, a PPS sensor configured to measure the power supplied by the PPS and provide a PPS measurement signal indicating an amount of the power supplied by the PPS, a backup power supply (BPS) configured to be provided in an emergency data system and further configured to supply backup power to a modem, and an integrated circuit configured to maintain a clock using the power supplied by the PPS. The integrated circuit is configured to receive the PPS measurement signal from the PPS sensor, determine whether the PPS measurement signal falls below a threshold, and maintain the clock using the power supplied by the BPS in response to a determination that the PPS measurement signal has fallen below the threshold.
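As a rough, non-limiting sketch of the fallback behavior summarized above, the following Python fragment models an integrated circuit that switches the clock's power source when the PPS measurement drops below a threshold. All names (`PowerManager`, `THRESHOLD_VOLTS`, the sensor callable) and the 9.0 V cutoff are assumptions of this sketch; the disclosure leaves the threshold and implementation unspecified.

```python
# Hypothetical sketch of the threshold-based fallback described in the abstract.
THRESHOLD_VOLTS = 9.0  # assumed cutoff; the disclosure does not specify a value

class PowerManager:
    def __init__(self, read_pps_volts):
        self.read_pps_volts = read_pps_volts  # stands in for the PPS sensor's measurement signal
        self.clock_source = "PPS"

    def poll(self):
        """Select the supply used to maintain the clock."""
        if self.read_pps_volts() < THRESHOLD_VOLTS:
            self.clock_source = "BPS"   # fall back to the backup power supply
        else:
            self.clock_source = "PPS"   # primary power supply is adequate
        return self.clock_source

# Usage with a simulated sagging primary supply:
pm = PowerManager(read_pps_volts=lambda: 4.2)
assert pm.poll() == "BPS"
```

Restoring the clock source when the PPS recovers falls out of the same rule, since `poll()` re-evaluates the measurement on every call.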
CLAIMS

WHAT IS CLAIMED IS:

1. An apparatus, comprising:
a primary power supply (PPS) configured to supply primary power;
a PPS sensor configured to measure the power supplied by the PPS and provide a PPS measurement signal indicating an amount of the power supplied by the PPS;
a backup power supply (BPS) configured to be provided in an emergency data system and further configured to supply backup power to a modem; and
an integrated circuit configured to maintain a clock using the power supplied by the PPS, wherein the integrated circuit is configured to:
receive the PPS measurement signal from the PPS sensor;
determine whether the PPS measurement signal falls below a threshold; and
maintain the clock using the power supplied by the BPS in response to a determination that the PPS measurement signal has fallen below the threshold.

2. The apparatus of claim 1, further comprising a secure processor, wherein the BPS is further configured to supply the backup power to the secure processor.

3. The apparatus of claim 2, wherein the secure processor is inaccessible to a user of the apparatus and configured to perform secure transactions and/or manage digital rights based on a time value obtained from the clock.

4. The apparatus of claim 2, wherein the integrated circuit comprises the secure processor.

5. The apparatus of claim 1, wherein the PPS is further configured to recharge the BPS in response to a determination that the BPS is not fully charged.

6. The apparatus of claim 1, further comprising the emergency data system, wherein the emergency data system is configured to provide an automatic crash notification and comprises:
the BPS;
the modem;
a resilient housing, wherein the BPS and the modem are included within the resilient housing; and
an antenna.

7.
A method, comprising:
maintaining, with an integrated circuit, a clock using power supplied by a primary power supply (PPS);
measuring, with a PPS sensor, the power supplied by the PPS;
providing, with the PPS sensor, a PPS measurement signal indicating an amount of the power supplied by the PPS;
determining, with the integrated circuit, whether the PPS measurement signal falls below a threshold; and
maintaining the clock using power supplied by a backup power supply (BPS) in response to a determination that the PPS measurement signal has fallen below the threshold, wherein the BPS is configured to be provided in an emergency data system and further configured to supply backup power to a modem.

8. The method of claim 7, wherein the BPS is further configured to supply the backup power to a secure processor.

9. The method of claim 8, wherein the secure processor is inaccessible to a user of an apparatus comprising the secure processor and configured to perform secure transactions and/or manage digital rights based on a time value obtained from the clock.

10. The method of claim 8, wherein the integrated circuit comprises the secure processor.

11. The method of claim 7, further comprising: recharging the BPS in response to a determination that the BPS is not fully charged.

12. The method of claim 7, wherein the emergency data system is configured to provide an automatic crash notification and comprises:
the BPS;
the modem;
a resilient housing, wherein the BPS and the modem are included within the resilient housing; and
an antenna.

13.
An apparatus, comprising:
a primary power supply (PPS) configured to supply a PPS signal and a PPS profile signal, wherein the PPS signal is configured to supply power and the PPS profile signal indicates one or more characteristics of the PPS; and
an integrated circuit configured to:
receive the PPS signal and the PPS profile signal from the PPS;
operate using the power supplied by the PPS signal;
determine the one or more characteristics of the PPS based on the PPS profile signal; and
manage power based on the determined one or more characteristics.

14. The apparatus of claim 13, wherein the PPS is further configured to:
superimpose the PPS profile signal onto the PPS signal;
encode the PPS profile signal within the PPS signal; or
any combination thereof.

15. The apparatus of claim 13, wherein the one or more characteristics of the PPS include PPS identifier information comprising:
a manufacturer identifier of the PPS;
a model identifier of the PPS;
a part number of the PPS;
a serial number of the PPS; or
any combination thereof.

16. The apparatus of claim 13, wherein the one or more characteristics of the PPS include PPS usage information comprising:
a length of time since an installation of the PPS;
an amount of power provided;
an amount of recharge power received; or
any combination thereof.

17. The apparatus of claim 13, wherein:
the one or more characteristics of the PPS include a PPS charging profile of the PPS; and
to manage power, the integrated circuit is further configured to determine, based on the PPS charging profile, an optimal charging signal for charging the PPS.

18.
The apparatus of claim 17, wherein to determine the optimal charging signal, the integrated circuit is further configured to:
determine a maximum recommended voltage of the PPS;
determine a minimum recommended operating voltage of the PPS;
determine one or more preferred charging currents for charging the PPS;
determine one or more triggering events that trigger a change in the preferred one or more charging currents; or
any combination thereof.

19. The apparatus of claim 18, further comprising an engine configured to recharge the PPS, wherein the integrated circuit is further configured to:
determine that a voltage of the PPS is below the minimum recommended operating voltage;
cause the engine to provide a bulk charging current to the PPS during a bulk charging period;
determine that the voltage of the PPS has reached the minimum recommended operating voltage;
cause the engine to reduce a charging current to the PPS over the course of an absorption charging period; and
cause the engine to provide a float charge current to the PPS during a float charging period.

20. The apparatus of claim 19, further comprising one or more sensors configured to sense a condition and provide a sensor signal to the integrated circuit, wherein the integrated circuit is further configured to determine the optimal charging signal based on the sensor signal.

21.
The apparatus of claim 20, wherein the one or more sensors are further configured to sense a temperature of the PPS, a temperature of the environment external to the PPS, or any combination thereof, and provide a sensed temperature signal to the integrated circuit, wherein the integrated circuit is further configured to determine, as a function of the sensed temperature signal, one or more of:
the minimum recommended operating voltage;
the bulk charging current;
the minimum recommended operating voltage;
a duration of the absorption charging period or a charge reduction rate associated with the absorption charging period;
the float charge current; or
any combination thereof.

22. A method, comprising:
supplying, with a primary power supply (PPS), a PPS signal and a PPS profile signal, wherein the PPS signal is configured to supply power and the PPS profile signal indicates one or more characteristics of the PPS;
receiving, with an integrated circuit, the PPS signal and the PPS profile signal from the PPS;
operating, with the integrated circuit, using the power supplied by the PPS signal;
determining, with the integrated circuit, the one or more characteristics of the PPS based on the PPS profile signal; and
managing, with the integrated circuit, power based on the determined one or more characteristics.

23. The method of claim 22, further comprising:
superimposing, by the PPS, the PPS profile signal onto the PPS signal;
encoding, by the PPS, the PPS profile signal within the PPS signal; or
any combination thereof.

24. The method of claim 22, wherein the one or more characteristics of the PPS include PPS identifier information comprising:
a manufacturer identifier of the PPS;
a model identifier of the PPS;
a part number of the PPS;
a serial number of the PPS; or
any combination thereof.

25.
The method of claim 22, wherein the one or more characteristics of the PPS include PPS usage information comprising:
a length of time since an installation of the PPS;
an amount of power provided;
an amount of recharge power received; or
any combination thereof.

26. The method of claim 22, wherein:
the one or more characteristics of the PPS include a PPS charging profile of the PPS; and
the managing of the power comprises determining, based on the PPS charging profile, an optimal charging signal for charging the PPS.

27. The method of claim 26, wherein determining the optimal charging signal comprises:
determining a maximum recommended voltage of the PPS;
determining a minimum recommended operating voltage of the PPS;
determining one or more preferred charging currents for charging the PPS;
determining one or more triggering events that trigger a change in the preferred one or more charging currents; or
any combination thereof.

28. The method of claim 27, further comprising recharging, with an engine, the PPS;
wherein the managing of the power further comprises:
determining that a voltage of the PPS is below the minimum recommended operating voltage;
causing the engine to provide a bulk charging current to the PPS during a bulk charging period;
determining that the voltage of the PPS has reached the minimum recommended operating voltage;
causing the engine to reduce a charging current to the PPS over the course of an absorption charging period; and
causing the engine to provide a float charge current to the PPS during a float charging period.

29. The method of claim 28, further comprising:
sensing, with one or more sensors, a condition;
providing, with the one or more sensors, a sensor signal to the integrated circuit;
wherein the managing power further comprises determining the optimal charging signal based on the sensor signal.

30.
The method of claim 29, wherein:
sensing the condition comprises sensing a temperature of the PPS, sensing a temperature of the environment external to the PPS, or any combination thereof;
providing the sensor signal comprises providing a sensed temperature signal to the integrated circuit; and
the managing of the power further comprises determining, as a function of the sensed temperature signal, one or more of:
the minimum recommended operating voltage;
the bulk charging current;
the minimum recommended operating voltage;
a duration of the absorption charging period or a charge reduction rate associated with the absorption charging period;
the float charge current; or
any combination thereof.
POWER MANAGEMENT IN AN AUTOMOTIVE VEHICLE

CROSS-REFERENCE TO RELATED APPLICATION

[0001] The present Application for Patent claims the benefit of U.S. Provisional Patent Application No. 62/597,915, entitled “POWER MANAGEMENT IN AN AUTOMOTIVE VEHICLE,” filed December 12, 2017, pending, and assigned to the assignee hereof and hereby expressly incorporated herein by reference in its entirety.

INTRODUCTION

[0002] Aspects of this disclosure relate generally to automotive vehicles, and more particularly to automotive vehicle power management.

[0003] A conventional automotive vehicle includes a primary power source, for example, a battery. Typically, the primary power source is depleted when used to start the vehicle (or to perform any other electrical function) and then recharged by the engine once the vehicle is running. Many cabin features - dashboard indicators, the infotainment system, power-assist for windows, locks, etc. - also rely on the primary battery for power.

[0004] If the primary power source runs down or is disconnected, vehicle features lose power and cease to function. For example, while the primary power source is disconnected for replacement, clock maintenance functionality cannot be sustained. Once power is restored to the vehicle (for example, by recharging or replacing a battery), the clock must be reset. Solutions are needed for maintaining clock maintenance functionality in the absence of a power supply from the primary battery.

SUMMARY

[0005] The following summary is an overview provided solely to aid in the description of various aspects of the disclosure and is provided solely for illustration of the aspects and not limitation thereof.

[0006] In accordance with aspects of the disclosure, an apparatus is disclosed.
The apparatus may comprise, for example, a primary power supply (PPS) configured to supply primary power, a PPS sensor configured to measure the power supplied by the PPS and provide a PPS measurement signal indicating an amount of the power supplied by the PPS, a backup power supply (BPS) configured to be provided in an emergency data system and further configured to supply backup power to a modem, and an integrated circuit configured to maintain a clock using the power supplied by the PPS. The integrated circuit is configured to receive the PPS measurement signal from the PPS sensor, determine whether the PPS measurement signal falls below a threshold, and maintain the clock using the power supplied by the BPS in response to a determination that the PPS measurement signal has fallen below the threshold.

[0007] In accordance with other aspects of the disclosure, a method is disclosed. The method may comprise, for example, maintaining, with an integrated circuit, a clock using power supplied by a primary power supply (PPS), measuring, with a PPS sensor, the power supplied by the PPS, providing, with the PPS sensor, a PPS measurement signal indicating an amount of the power supplied by the PPS, determining, with the integrated circuit, whether the PPS measurement signal falls below a threshold, and maintaining the clock using power supplied by a backup power supply (BPS) in response to a determination that the PPS measurement signal has fallen below the threshold, wherein the BPS is configured to be provided in an emergency data system and further configured to supply backup power to a modem.

[0008] In accordance with aspects of the disclosure, another apparatus is disclosed. The apparatus may comprise, for example, a primary power supply (PPS) and an integrated circuit.
The PPS may be configured to supply a PPS signal and a PPS profile signal, wherein the PPS signal is configured to supply power and the PPS profile signal indicates one or more characteristics of the PPS. The integrated circuit may be configured to receive the PPS signal and the PPS profile signal from the PPS, operate using the supply power associated with the PPS signal, determine the one or more characteristics of the PPS based on the PPS profile signal, and manage power based on the determined one or more characteristics.

[0009] In accordance with other aspects of the disclosure, another method is disclosed. The method may comprise, for example, supplying, with a primary power supply (PPS), a PPS signal and a PPS profile signal, wherein the PPS signal is configured to supply power and the PPS profile signal indicates one or more characteristics of the PPS, receiving, with an integrated circuit, the PPS signal and the PPS profile signal from the PPS, operating, with the integrated circuit, using the supply power associated with the PPS signal, determining, with the integrated circuit, the one or more characteristics of the PPS based on the PPS profile signal, and managing, with the integrated circuit, power based on the determined one or more characteristics.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The accompanying drawings are presented to aid in the description of various aspects of the disclosure and are provided solely for illustration of the aspects and not limitation thereof.

[0011] FIG. 1 generally illustrates a vehicle in accordance with aspects of the disclosure.

[0012] FIG. 2 generally illustrates a method of maintaining a clock.

[0013] FIG. 3 generally illustrates another vehicle in accordance with aspects of the disclosure.

[0014] FIG. 4 generally illustrates a method of controlling power management.

[0015] FIG. 5 generally illustrates a charging profile for a PPS.

DETAILED DESCRIPTION

[0016] FIG.
1 generally illustrates a vehicle 100 in accordance with aspects of the disclosure.

[0017] The vehicle 100 may include a PPS 110 (where “PPS” is an abbreviation of “primary power supply”), a PPS sensor 120, an IC 130 (where “IC” is an abbreviation of “integrated circuit”), an emergency data system 140, an antenna 150, and an engine 160.

[0018] The PPS 110 may be, for example, a battery. The PPS 110 may supply the power used to light the dashboard and headlamps, operate power windows and power locks, start the engine 160 of the vehicle 100, etc.

[0019] The PPS sensor 120 may be configured to measure the power supplied by the PPS 110 and provide a PPS measurement signal indicating an amount of the power supplied by the PPS 110. In some implementations, the PPS sensor 120 may be a voltmeter. The PPS measurement signal may be generated by the PPS sensor 120 and provided to, for example, the IC 130.

[0020] The IC 130 may be, for example, an integrated circuit, a power management IC (PMIC), a system-on-chip (SoC), and/or any component thereof. The IC 130 may be configured to receive the PPS measurement signal from the PPS sensor 120, as will be described in greater detail below. The IC 130 may further include a clock 132. The clock 132 may be used to track the time of day, which may be displayed to an operator of the vehicle 100 (on, for example, the dashboard), or used for any other application that requires a timing measurement. In some implementations, the IC 130 may comprise a secure processor. In other implementations, a secure processor may be provided elsewhere in the vehicle 100. In either case, repeated loss of the power supplied to the secure processor (and/or repeated loss of accurate timing information from the clock 132) may cause a malfunction of the secure processor.
The secure processor may typically be inaccessible to the end user, and may be configured to support financial transactions, digital rights management (DRM) for premium content, or any other suitable secure executions. For example, if an end user purchases the rights to view a movie for one week, the secure processor may be configured to perform the purchase transaction. Moreover, at the expiration of one week, the secure processor would deactivate viewing in order to ensure that digital rights are respected. If the secure processor were accessible to the end user, then the end user might be able to manipulate the clock that the secure processor relies on to perform many of its functions. The secure processor may have non-volatile memory (NVM) configured to record a finite series of power failures, in which the last known time could be stored. At power restoration, the new time could be compared to the stored time to ensure that time did not go backwards. However, the clock time maintenance described in the present application may be preferable relative to, for example, enlargement of an on-die NVM memory.

[0021] If the power supplied by the PPS 110 is interrupted, the IC 130 may not be able to maintain the clock 132. An accurate clock may be a prerequisite for any number of important functions, including security purposes. In conventional implementations, the operator of the vehicle 100 must manually set the clock 132 after every power interruption (which may be due to maintenance, overuse, etc.). However, as will be discussed in greater detail below, the method 200 depicted in FIG. 2 may enable the IC 130 to maintain the clock 132 even in the event that the power supplied by the PPS 110 is interrupted.

[0022] The emergency data system 140 may include a modem 142 and a BPS 144 (where “BPS” is an abbreviation of “backup power supply”).
In some implementations, the emergency data system 140 may include a resilient housing that is designed to protect the modem 142 (and other critical components of the emergency data system 140) from damage, and remain in operation during an emergency (for example, when the car has been in an accident). The modem 142 may be coupled to the antenna 150, and if the vehicle 100 is in an accident, the modem 142 and antenna 150 may be used to contact emergency services and relay relevant data thereto.

[0023] In some implementations, the engine 160 may be, for example, an internal combustion engine. Power supplied by the PPS 110 may be used to start the engine 160. After the engine 160 is running, the power generated by the engine 160 may be used to re-charge the PPS 110 and/or the BPS 144.

[0024] In other implementations, the engine 160 may be replaced and/or augmented with other power sources. For example, if the vehicle 100 is a hybrid vehicle, then the vehicle 100 may include a plurality of power sources (for example, batteries). If the vehicle 100 is an electric vehicle, then the engine 160 may be omitted and replaced by a plurality of power sources (for example, batteries). In either case, the vehicle 100 has a PPS 110 that supplies power to the IC 130, emergency data system 140, and other components of the vehicle 100.

[0025] FIG. 2 generally illustrates a method of maintaining a clock, for example, the clock 132 depicted in FIG. 1.

[0026] At 210, the method 200 maintains the clock 132 and/or a secure processor using power supplied by the PPS 110. The maintaining at 210 may be performed by, for example, the IC 130 depicted in FIG. 1.

[0027] At 220, the method 200 measures the power supplied by the PPS 110. The measuring at 220 may be performed by, for example, the PPS sensor 120 depicted in FIG. 1.

[0028] At 230, the method 200 provides a PPS measurement signal indicating an amount of power supplied by the PPS 110.
The providing at 230 may be performed by, for example, a PPS sensor analogous to the PPS sensor 120 depicted in FIG. 1. The PPS measurement signal may be provided to, for example, the IC 130. As noted above, in some implementations, the PPS sensor 120 may include a voltmeter. The voltmeter may be used to monitor the voltage of the PPS 110.

[0029] At 240, the method 200 determines whether the PPS measurement signal falls below a threshold. If the PPS measurement signal does not fall below the threshold (‘no’ at 240), then the method 200 returns to the maintaining at 210 and the method 200 continues to maintain the clock 132 using power supplied by the PPS 110. If the PPS measurement signal falls below the threshold (‘yes’ at 240), then the method 200 proceeds to 250.

[0030] At 250, the method 200 maintains the clock 132 and/or the secure processor using power supplied by the BPS 144. The transition from the PPS 110 to the BPS 144 may be performed using a switch. The switch may open the electrical path to the PPS 110 and/or close the electrical path to the BPS 144. The switch may do the reverse if the PPS sensor 120 determines that the power supplied by the PPS 110 has been restored.

[0031] In the event that the power supplied by the PPS 110 is interrupted, the method 200 may be performed in order to maintain functioning of the secure processor and/or avoid the necessity of manually resetting the clock 132 upon restoration of the primary power supply.

[0032] FIG. 3 generally illustrates a vehicle 300 in accordance with aspects of the disclosure.

[0033] The vehicle 300 may include a PPS 310, a PPS sensor 320, an IC 330, an emergency data system 340, an antenna 350, and an engine 360.

[0034] The PPS 310 may be analogous to the PPS 110.
Accordingly, the PPS 310 may supply the power used to light the dashboard and headlamps, operate power windows and power locks, start the engine 360 of the vehicle 300, etc.

[0035] The PPS sensor 320 may be analogous in some respects to the PPS sensor 120. For example, the PPS sensor 320 may be configured to measure a voltage of the PPS 310. However, the PPS sensor 320 may be further configured to sense PPS profile data, as will be discussed in greater detail below with reference to FIG. 4.

[0036] The IC 330 may be analogous in some respects to the IC 130. The IC 330 may be configured to receive the PPS profile data from the PPS sensor 320. The IC 330 may be coupled to a database 334 and/or one or more sensors 336.

[0037] The database 334 may be used by the IC 330 to store the PPS profile data provided by the PPS sensor 320. In some conventional implementations, the power supply signal is a relatively static direct current (“DC”) voltage. However, in accordance with aspects of the disclosure, the PPS profile data may be superimposed, encoded, etc., on the direct current generated by the PPS 310 and used to power the vehicle 300. Additionally or alternatively, the PPS profile data may be communicated from the PPS 310 to the IC 330 by any suitable method, for example, Bluetooth, Bluetooth Low-Energy, WiFi, radio frequency identification (“RFID”), etc.

[0038] The one or more sensors 336 may provide sensor data. The database 334 may be used by the IC 330 to store the sensor data received from the one or more sensors 336. The one or more sensors 336 may include one or more temperature sensors, one or more accelerometers, one or more accident detectors (for example, airbag deployment sensors), and/or any other suitable sensors.

[0039] The emergency data system 340 may be analogous to the emergency data system 140 and may include a modem 342 analogous to the modem 142.
Accordingly, the emergency data system 340 may include a resilient housing that is designed to protect the modem 342 (and other critical elements of the emergency data system 340) from damage, and remain in operation during an emergency (for example, when the car has been in an accident). The modem 342 may be coupled to the antenna 350, and if the vehicle 300 is in an accident, the modem 342 and antenna 350 may be used to contact emergency services and relay relevant data thereto.

[0040] In some implementations, PPS profile data, sensor data, or any other data may be received by the antenna 350 and provided to the modem 342. In accordance with the present disclosure, the PPS profile data and/or sensor data (or portions thereof) may be obtained from external databases (for example, a remote server), external sensors (i.e., not associated with the vehicle 300), etc.

[0041] The engine 360 may be, for example, an internal combustion engine. Power supplied by the PPS 310 may be used to start the engine 360. After the engine 360 is running, the power generated by the engine 360 may be used to re-charge the PPS 310. A regulator 362 may regulate the power used to re-charge the PPS 310. As will be discussed in greater detail below, the regulator 362 may be controlled by, for example, the IC 330.

[0042] In another implementation, the engine 360 may be omitted or augmented with other power sources. For example, if the vehicle 300 is a hybrid vehicle, then the vehicle 300 may include a plurality of power sources (for example, batteries). If the vehicle 300 is an electric vehicle, then the engine 360 may be omitted and replaced by a plurality of power sources (for example, batteries). In either case, the vehicle 300 has a PPS 310 that supplies power to the IC 330, emergency data system 340, and other components of the vehicle 300.

[0043] FIG. 4 generally illustrates a method 400 of controlling power management.
[0044] At 410, the method 400 receives from the PPS 310 a PPS signal configured to supply power and a PPS profile signal configured to indicate one or more characteristics of the PPS 310. The receiving may be performed by, for example, the IC 330. The PPS profile signal may be superimposed on the PPS signal, encoded within the PPS signal, or otherwise included in the PPS signal.

[0045] The PPS profile signal may include PPS profile information that includes, for example, identifiers for the manufacturer, the model, the part number, and/or the serial number of the PPS 310. Additionally or alternatively, the PPS profile information may include, for example, PPS usage information, for example, a length of time since the battery was installed, an amount of power provided to the vehicle, an amount of recharge power received, etc. Additionally or alternatively, the PPS profile information may include, for example, one or more charging profiles associated with the PPS 310. The charging profiles may specify any combination of conditions and/or charge characteristics. For example, the charging profiles may identify a voltage, rate of voltage change, charging current, or rate of current change that is optimal for the PPS 310. In some implementations, the charging profile may vary with the temperature and/or the voltage of the PPS 310. For example, a first charging voltage may be optimal when it is cold, and a second charging voltage (different from the first charging voltage) may be optimal when it is hot. In this scenario, the charging profile may be modeled as an algebraic expression with temperature as a variable.

[0046] At 420, the method 400 operates using the power supplied by the PPS signal.
The operating at 420 may be performed by, for example, the IC 330.

[0047] At 430, the method 400 determines one or more characteristics of the PPS 310 based on the PPS profile signal.

[0048] At 435, the method 400 optionally receives sensor data provided by the one or more sensors 336. To return to an earlier example, the sensor data may include a current temperature. The receiving at 435 may be performed by, for example, the IC 330.

[0049] At 440, the method 400 manages power based on the one or more characteristics (determined at 430) and/or the sensor data (received at 435). To return to an earlier example, the managing at 440 may comprise determining an optimal charging signal (for example, an optimal charging current or other suitable characteristic) and causing the engine 360 to provide the optimal charging current to the PPS 310 (for example, by providing an indication of the optimal values to the regulator 362). The managing may be performed by, for example, the IC 330. The regulator 362 may receive the indication of the optimal values, and may control the charging current (or other characteristic) of the power supplied to the PPS 310 by the engine 360.

[0050] FIG. 5 generally illustrates a charging profile 500 for a PPS, for example, the PPS 310. As noted above, the charging profile 500 may be included in the PPS profile data. The charging profile 500 may indicate a maximum recommended voltage of the PPS 310 (for example, 15V), a preferred charging current (for example, 1.3A), and one or more trigger voltages (for example, 14V) that trigger a change in the preferred charging current (for example, to 0.1A). It will be appreciated that the charging profile will vary based on the type and capacity of the battery and may additionally be influenced by other environmental and battery-specific conditions.

[0051] The charging profile 500 shows a charging current 501, indicated by a thick line, and a PPS voltage 502, indicated by a thick dashed line.
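Using the example values from paragraph [0050] (15 V maximum recommended voltage, 14 V trigger voltage, 1.3 A bulk current, 0.1 A float current), a simplified version of the profile and the current-selection rule can be sketched as follows. The dataclass, the function, and the collapsing of the gradual absorption ramp into a single step are assumptions of this sketch, not the disclosure's implementation.

```python
from dataclasses import dataclass

@dataclass
class ChargingProfile:
    max_volts: float = 15.0      # maximum recommended voltage
    trigger_volts: float = 14.0  # minimum recommended operating voltage
    bulk_amps: float = 1.3       # bulk charging current
    float_amps: float = 0.1      # float charge current

def charge_current(profile, pps_volts):
    """Pick a charging current for the measured PPS voltage.

    Note: the real absorption period (530) ramps current down gradually;
    this sketch collapses it into a single step to the float current.
    """
    if pps_volts < profile.trigger_volts:
        return profile.bulk_amps    # bulk charging period 520
    if pps_volts < profile.max_volts:
        return profile.float_amps   # absorption/float periods 530/540
    return 0.0                      # fully charged: cut off the current

p = ChargingProfile()
assert charge_current(p, 4.0) == 1.3   # initial state 510: bulk charge
```

In the described system, the IC 330 would evaluate a rule like this against the PPS sensor 320 reading and forward the chosen current to the regulator 362.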
In an initial state 510, the PPS 310 has a voltage of 4V and is not receiving a charge current. The voltage may be higher or lower than 4V, and it will be understood that the particular values identified in the description of FIG. 5 may, in some implementations, be any suitable value. The IC 330 may be configured to detect an error if, for example, the voltage of the PPS 310 is below a threshold voltage, a resistance of the PPS 310 is above a threshold resistance, or a temperature of the PPS 310 or its environment is below a minimum temperature or above a maximum temperature. The IC 330 may be further configured to provide error notifications based on the detected errors.

[0052] The IC 330 may determine that the PPS voltage 502 of the PPS 310 (for example, the voltage in the initial state 510) is below a minimum voltage threshold (for example, 14V). The determination may be based on, for example, a signal from the PPS sensor 320. Accordingly, the IC 330 may perform power management by commanding the regulator 362 to provide a charge current to the PPS 310. In some implementations, the IC 330 may perform power management only if there is no error detected by the IC 330.

[0053] The regulator 362 may respond by increasing a level of the charging current 501 to a bulk charging current. The charging current 501 may be provided to the PPS 310 during a bulk charging period 520. During the bulk charging period 520, the PPS voltage 502 may rise in response to the increase in the charging current 501. The bulk charging period 520 may end when the PPS voltage 502 reaches a trigger voltage, for example, a minimum recommended operating voltage. The bulk charging period 520 may also end if an error is detected.
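The error checks attributed to the IC 330 above (undervoltage, high resistance, out-of-range temperature) can be sketched as a single gating function. The limit values and error names below are invented for illustration; the disclosure does not specify them:

```python
# Hypothetical sketch of the IC 330 error checks: an error is flagged if
# the PPS voltage, resistance, or temperature falls outside configured
# limits. All limit values and error names are assumptions.

def detect_errors(voltage, resistance, temperature,
                  min_voltage=2.0, max_resistance=0.5,
                  min_temp=-20.0, max_temp=60.0):
    """Return a list of error names for out-of-range PPS readings."""
    errors = []
    if voltage < min_voltage:
        errors.append("undervoltage")
    if resistance > max_resistance:
        errors.append("high_resistance")
    if not (min_temp <= temperature <= max_temp):
        errors.append("temperature_out_of_range")
    return errors

# Per paragraph [0052], power management (charging) may proceed only
# when no error is detected.
ok_to_charge = not detect_errors(4.0, 0.1, 25.0)
```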
The IC 330 may be configured to detect an error if, for example, a temperature of the PPS 310 or its environment is below a minimum temperature or above a maximum temperature, or a bulk charging period timer has expired.

[0054] Following the bulk charging period 520 is an absorption charging period 530. During the absorption charging period 530, the amount of charging current 501 is reduced while the voltage is generally held constant. In the example of FIG. 5, the charging current 501 drops from 1.3 A to 0.1 A over the course of the absorption charging period 530. As noted above, the transition from the bulk charging period 520 to the absorption charging period 530 may be triggered when the PPS voltage 502 reaches a trigger voltage (for example, 14V). For example, the PPS sensor 320 may sense the PPS voltage 502 and provide the results to the IC 330. The IC 330 may determine if the trigger voltage has been reached. In particular, when the PPS voltage 502 reaches (for example) 14V, the IC 330 may command the regulator 362 to reduce the charging current 501. In the example of FIG. 5, the charging current 501 drops to 0.1 A during the absorption charging period 530.

[0055] Following the absorption charging period 530 is a float charge period 540. During the float charge period 540, the IC 330 commands the regulator 362 to maintain the charging current 501 at a float charge current (0.1 A in the example of FIG. 5). In response to the float charge current, the PPS voltage 502 rises to a maximum recommended voltage (for example, 15V).

[0056] Although not shown, it will be understood that once the maximum recommended voltage is reached, the charging current 501 may be cut off (i.e., reduced to 0.0 A). It will be further understood that the process described above in relation to FIG.
5 may be repeated if the PPS voltage 502 ultimately falls back below the minimum voltage threshold.

[0057] As noted above, the charging profile 500 may be specific to a particular make or model of PPS 310. In addition, the charging profile 500 may be a dynamic charging profile wherein the particular values of the minimum voltage threshold, the bulk charging current, the trigger voltage, the float charge current, etc., vary in response to sensed conditions. For example, if the one or more sensors 336 include a thermometer, then temperature (for example, battery temperature, ambient temperature, etc.) may be the sensed condition. As the temperature varies, the particular values of one or more of the minimum voltage threshold, the bulk charging current, the trigger voltage, the float charge current, etc., may also vary. For example, if the temperature falls below ten degrees Celsius, then the optimal bulk charging current may increase or decrease. Accordingly, the charging profile 500 provides instructions for how the specified values should be modified, and what should be commanded in response to a particular sensed condition.

[0058] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit any embodiments disclosed herein. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes” and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
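The temperature-dependent adjustment of the bulk charging current in the dynamic charging profile of paragraph [0057] might be sketched as follows. Note that the disclosure says only that the current "may increase or decrease" below ten degrees Celsius; the halving rule and the direction of the adjustment here are assumptions made purely for illustration:

```python
# Hypothetical sketch of a dynamic charging profile (paragraph [0057]):
# the bulk charging current is adjusted when the sensed temperature
# crosses a threshold. Whether the current increases or decreases, and
# by how much, is an assumption made here for illustration only.

def dynamic_bulk_current(base_current_a: float, temp_c: float) -> float:
    """Return the bulk charging current (A), reduced at low temperature."""
    if temp_c < 10.0:  # the ten-degrees-Celsius example from [0057]
        return round(base_current_a * 0.5, 3)
    return base_current_a

cold_current = dynamic_bulk_current(1.3, 5.0)   # reduced current when cold
warm_current = dynamic_bulk_current(1.3, 25.0)  # unchanged when warm
```

In the same way, the minimum voltage threshold, trigger voltage, and float charge current could each be made functions of the sensed condition rather than constants.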
Similarly, the phrase “based on” as used herein does not necessarily preclude influence of other factors and should be interpreted in all cases as “based at least in part on” rather than, for example, “based solely on”.

[0059] It will be understood that terms such as “top” and “bottom”, “left” and “right”, “vertical” and “horizontal”, etc., are relative terms used strictly in relation to one another, and do not express or imply any relation with respect to gravity, a manufacturing device used to manufacture the components described herein, or to some other device to which the components described herein are coupled, mounted, etc.

[0060] It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not imply that there are only two elements and further does not imply that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements may comprise one or more elements. In addition, terminology of the form “at least one of A, B, or C” or “one or more of A, B, or C” or “at least one of the group consisting of A, B, and C” used in the description or the claims means “A or B or C or any combination of these elements.”

[0061] In view of the descriptions and explanations above, one skilled in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0062] Accordingly, it will be appreciated, for example, that an apparatus or any component of an apparatus may be configured to (or made operable to or adapted to) provide functionality as taught herein. This may be achieved, for example: by manufacturing (e.g., fabricating) the apparatus or component so that it will provide the functionality; by programming the apparatus or component so that it will provide the functionality; or through the use of some other suitable implementation technique. As one example, an integrated circuit may be fabricated to provide the requisite functionality. As another example, an integrated circuit may be fabricated to support the requisite functionality and then configured (e.g., via programming) to provide the requisite functionality. As yet another example, a processor circuit may execute code to provide the requisite functionality.

[0063] Moreover, the methods, sequences, and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in Random-Access Memory (RAM), flash memory, Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory storage medium known in the art. As used herein, the term “non-transitory” does not exclude any physical storage medium or memory and particularly does not exclude dynamic memory (e.g., RAM), but rather excludes only the interpretation that the medium can be construed as a transitory propagating signal. An example storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor (e.g., cache memory).

[0064] While the foregoing disclosure shows various illustrative aspects, it should be noted that various changes and modifications may be made to the illustrated examples without departing from the scope defined by the appended claims. The present disclosure is not intended to be limited to the specifically illustrated examples alone. For example, unless otherwise noted, the functions, steps, and/or actions of the method claims in accordance with the aspects of the disclosure described herein need not be performed in any particular order. Furthermore, although certain aspects may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
A system and method for forming a magnetic tunnel junction (MTJ) storage element utilizes a composite free layer structure (214, 216, 218). The MTJ element includes a stack comprising a pinned layer (206, 208, 210), a barrier layer (212), and a composite free layer. The composite free layer includes a first free layer (214), a superparamagnetic layer (218) and a nonmagnetic spacer layer (216) interspersed between the first free layer and the superparamagnetic layer. A thickness of the spacer layer controls a manner of magnetic coupling between the first free layer and the superparamagnetic layer.
CLAIMS

WHAT IS CLAIMED IS:

1. A magnetic tunnel junction (MTJ) storage element comprising: a stack comprising a pinned layer and a barrier layer; and a composite free layer formed on the barrier layer, comprising a first free layer, a nonmagnetic spacer layer and a superparamagnetic layer, such that the spacer layer is interspersed between the first free layer and the superparamagnetic layer.

2. The MTJ storage element of claim 1, further comprising an interlayer exchange coupling between the first free layer and the superparamagnetic layer.

3. The MTJ storage element of claim 2, wherein a magnetic polarization of the first free layer is aligned parallel to a magnetic polarization of the superparamagnetic layer.

4. The MTJ storage element of claim 2, wherein a magnetic polarization of the first free layer is aligned anti-parallel to a magnetic polarization of the superparamagnetic layer.

5. The MTJ storage element of claim 1, further comprising an interlayer fringe coupling between the first free layer and the superparamagnetic layer.

6. The MTJ storage element of claim 5, wherein a magnetic polarization of the first free layer is aligned anti-parallel to a magnetic polarization of the superparamagnetic layer.

7. The MTJ storage element of claim 1, wherein the superparamagnetic layer is formed from a ferromagnetic layer of reduced thickness.

8. The MTJ storage element of claim 1, wherein the superparamagnetic layer is formed from an antiferromagnetic layer of reduced thickness.

9. The MTJ storage element of claim 1, wherein the superparamagnetic layer is formed from a nonmagnetic material doped with ferromagnetic elements.

10. The MTJ storage element of claim 1, wherein the superparamagnetic layer is formed from a ferromagnetic material doped with nonmagnetic elements.

11.
The MTJ storage element of claim 1, wherein the superparamagnetic layer is formed from a laminated structure comprising one or more layers of ferromagnetic elements, interspersed with one or more layers of nonmagnetic elements.

12. The MTJ storage element of claim 1, further comprising an antiferromagnetic material in contact with the pinned layer, formed below the pinned layer.

13. The MTJ storage element of claim 1, wherein the pinned layer is a pinned layer stack comprising two or more layers.

14. The MTJ storage element according to claim 1, wherein the storage element is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the MTJ storage element is integrated.

15. The MTJ storage element according to claim 1, wherein the storage element is integrated in a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM).

16. The STT-MRAM device according to claim 15, wherein the STT-MRAM device is integrated in at least one semiconductor die.

17. A method of forming a magnetic tunnel junction (MTJ) storage element, the method comprising: forming a stack comprising a pinned layer and a barrier layer; and forming a composite free layer on top of the barrier layer comprising a first free layer, a nonmagnetic spacer layer and a superparamagnetic layer, such that the spacer layer is interspersed between the first free layer and the superparamagnetic layer.

18. The method of claim 17, further comprising coupling the first free layer and the superparamagnetic layer via interlayer exchange coupling.

19. The method of claim 18, wherein a magnetic polarization of the first free layer is aligned parallel to a magnetic polarization of the superparamagnetic layer.

20.
The method of claim 18, wherein a magnetic polarization of the first free layer is aligned anti-parallel to a magnetic polarization of the superparamagnetic layer.

21. The method of claim 17, further comprising: coupling the first free layer and the superparamagnetic layer via interlayer fringe coupling.

22. The method of claim 21, wherein a magnetic polarization of the first free layer is aligned anti-parallel to a magnetic polarization of the superparamagnetic layer.

23. The method of claim 17, wherein the superparamagnetic layer is formed by reducing the thickness of a ferromagnetic or antiferromagnetic layer.

24. The method of claim 17, wherein the superparamagnetic layer is formed by doping a nonmagnetic material with ferromagnetic elements.

25. The method of claim 17, wherein the superparamagnetic layer is formed by doping a ferromagnetic material with nonmagnetic elements.

26. The method of claim 17, wherein the superparamagnetic layer is formed by interspersing one or more layers of nonmagnetic elements with one or more layers of ferromagnetic elements.

27. The method according to claim 17, wherein the MTJ storage element is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the MTJ storage element is integrated.

28. The method according to claim 17, wherein the MTJ storage element is integrated in a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM).

29.
A magnetic tunnel junction (MTJ) storage element comprising: a first magnetic means for holding a first polarization; a composite magnetic means for holding a second polarization comprising ferromagnetic means; superparamagnetic means; and nonmagnetic means interspersed between the ferromagnetic means and the superparamagnetic means, wherein a thickness of the nonmagnetic means controls a manner of coupling between the ferromagnetic means and the superparamagnetic means; and insulating means interspersed between the first magnetic means and composite magnetic means to enable a flow of tunneling current between the first magnetic means and the composite magnetic means.

30. The MTJ storage element of claim 29, wherein the manner of coupling between the ferromagnetic means and the superparamagnetic means is interlayer exchange coupling.

31. The MTJ storage element of claim 30, wherein a magnetic polarization of the ferromagnetic means is aligned parallel to a magnetic polarization of the superparamagnetic means.

32. The MTJ storage element of claim 30, wherein a magnetic polarization of the ferromagnetic means is aligned anti-parallel to a magnetic polarization of the superparamagnetic means.

33. The MTJ storage element of claim 29, wherein the manner of coupling between the ferromagnetic means and the superparamagnetic means is interlayer fringe coupling.

34. The MTJ storage element of claim 33, wherein a magnetic polarization of the ferromagnetic means is aligned anti-parallel to a magnetic polarization of the superparamagnetic means.

35. The MTJ storage element of claim 29, wherein the superparamagnetic means is formed from a ferromagnetic or anti-ferromagnetic material of reduced thickness.

36. The MTJ storage element of claim 29, wherein the superparamagnetic means is formed from a nonmagnetic material doped with ferromagnetic elements.

37.
The MTJ storage element of claim 29, wherein the superparamagnetic means is formed from a ferromagnetic material doped with nonmagnetic elements.

38. The MTJ storage element of claim 29, wherein the superparamagnetic means is formed from a laminated structure comprising one or more layers of ferromagnetic elements, interspersed with one or more layers of nonmagnetic elements.

39. The MTJ storage element according to claim 29, wherein the MTJ storage element is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the MTJ storage element is integrated.

40. The MTJ storage element according to claim 29, wherein the MTJ storage element is integrated in a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM).

41. A method of forming a magnetic tunnel junction (MTJ) storage element, the method comprising: step for forming a stack comprising a pinned layer and a barrier layer; and step for forming a composite free layer on top of the barrier layer comprising a first free layer, a nonmagnetic spacer layer and a superparamagnetic layer, such that the spacer layer is interspersed between the first free layer and the superparamagnetic layer.

42. The method of claim 41, further comprising coupling the first free layer and the superparamagnetic layer via interlayer exchange coupling.

43. The method of claim 42, wherein a magnetic polarization of the first free layer is aligned parallel to a magnetic polarization of the superparamagnetic layer.

44. The method of claim 42, wherein a magnetic polarization of the first free layer is aligned anti-parallel to a magnetic polarization of the superparamagnetic layer.

45. The method of claim 41, further comprising coupling the first free layer and the superparamagnetic layer via interlayer fringe coupling.

46.
The method of claim 45, wherein a magnetic polarization of the first free layer is aligned anti-parallel to a magnetic polarization of the superparamagnetic layer.

47. The method of claim 41, wherein the superparamagnetic layer is formed by reducing the thickness of a ferromagnetic or antiferromagnetic layer.

48. The method of claim 41, wherein the superparamagnetic layer is formed by doping a nonmagnetic material with ferromagnetic elements.

49. The method of claim 41, wherein the superparamagnetic layer is formed by doping a ferromagnetic material with nonmagnetic elements.

50. The method of claim 41, wherein the superparamagnetic layer is formed by interspersing one or more layers of nonmagnetic elements with one or more layers of ferromagnetic elements.

51. The method according to claim 41, wherein the MTJ storage element is applied in an electronic device, selected from the group consisting of a set top box, music player, video player, entertainment unit, navigation device, communications device, personal digital assistant (PDA), fixed location data unit, and a computer, into which the MTJ storage element is integrated.

52. The method according to claim 41, wherein the MTJ storage element is integrated in a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM).
SPIN-TRANSFER SWITCHING MAGNETIC ELEMENT UTILIZING A COMPOSITE FREE LAYER COMPRISING A SUPERPARAMAGNETIC LAYER

Field of Disclosure

[0001] Disclosed embodiments are related to employing a composite free layer comprising a superparamagnetic layer in a Magnetic Tunnel Junction (MTJ) storage element usable in a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) cell.

Background

[0002] Magnetoresistive Random Access Memory (MRAM) is a non-volatile memory technology that uses magnetic elements. For example, Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) uses electrons that become spin-polarized as the electrons pass through a thin film (spin filter). STT-MRAM is also known as Spin Transfer Torque RAM (STT-RAM), Spin Torque Transfer Magnetization Switching RAM (Spin-RAM), and Spin Momentum Transfer (SMT-RAM).

[0003] FIG. 1 illustrates a conventional STT-MRAM bit cell 100. The STT-MRAM bit cell 100 includes a magnetic tunnel junction (MTJ) storage element 105, a transistor 101, a bit line 102 and a word line 103. The MTJ storage element is formed, for example, from at least two ferromagnetic layers (a pinned layer and a free layer), each of which can hold a magnetic field or polarization, separated by a thin non-magnetic insulating layer (tunneling barrier). Electrons from the two ferromagnetic layers can penetrate through the tunneling barrier due to a tunneling effect under a bias voltage applied to the ferromagnetic layers. The magnetic polarization of the free layer can be reversed so that the polarity of the pinned layer and the free layer are either substantially aligned (parallel) or opposite (anti-parallel). The resistance of the electrical path through the MTJ will vary depending on the alignment of the polarizations of the pinned and free layers. This variance in resistance can be used to program and read the bit cell 100.
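The resistance-based read operation described in paragraph [0003] can be sketched as a comparison of the measured MTJ resistance against a midpoint reference, roughly what a sense amplifier does. The resistance values and the reference scheme below are illustrative assumptions, not figures from the disclosure:

```python
# Minimal sketch of how the parallel/anti-parallel resistance difference
# of an MTJ encodes a bit. The resistance values and the midpoint
# reference are assumptions made for illustration only.

R_PARALLEL = 1000.0       # ohms, low-resistance (parallel) state (assumed)
R_ANTI_PARALLEL = 2000.0  # ohms, high-resistance (anti-parallel) state (assumed)

def read_bit(measured_resistance: float) -> int:
    """Compare the measured MTJ resistance against a midpoint reference:
    parallel alignment reads as '1', anti-parallel as '0' (the mapping
    used in the description's programming example)."""
    reference = (R_PARALLEL + R_ANTI_PARALLEL) / 2.0
    return 1 if measured_resistance < reference else 0
```

A real sense amplifier compares analog voltages derived from these resistances rather than resistance values directly; the sketch only captures the decision logic.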
The STT-MRAM bit cell 100 also includes a source line 104, a sense amplifier 108, read/write circuitry 106 and a bit line reference 107. Those skilled in the art will appreciate the operation and construction of the memory cell 100.

[0004] For example, the bit cell 100 may be programmed such that a binary value "1" is associated with an operational state wherein the polarity of the free layer is parallel to the polarity of the pinned layer. Correspondingly, a binary value "0" may be associated with an anti-parallel orientation between the two ferromagnetic layers. A binary value may thus be written to the bit cell by changing the polarization of the free layer. A sufficient current density (typically measured in Amperes/centimeter²) generated by the electrons flowing across the tunneling barrier is required to change the polarization of the free layer. The minimum current density required to switch the polarization of the free layer is also called the switching current density. Decreasing the value of the switching current density beneficially lowers the power consumption of the MTJ cells. Additionally, a lower switching current density enables smaller device dimensions and a correspondingly higher density of MTJ cells in an STT-MRAM integrated circuit.

[0005] Existing techniques to reduce the switching current density may adversely affect the thermal stability of the MTJ cell. Accordingly, there is a need for decreasing the switching current density without impacting the thermal stability of the device.

SUMMARY

[0006] Exemplary embodiments of the invention are directed to systems and methods for employing a composite free layer comprising a superparamagnetic layer in a Magnetic Tunnel Junction (MTJ) storage element usable in a Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) cell.
[0007] For example, an exemplary embodiment is directed to an MTJ storage element comprising a stack comprising a pinned layer and a barrier layer; and a composite free layer formed on the barrier layer, comprising a first free layer, a nonmagnetic spacer layer and a superparamagnetic layer, such that the spacer layer is interspersed between the first free layer and the superparamagnetic layer.

[0008] Another exemplary embodiment is directed to a method of forming an MTJ storage element, the method comprising forming a stack comprising a pinned layer and a barrier layer; and forming a composite free layer on top of the barrier layer comprising a first free layer, a nonmagnetic spacer layer and a superparamagnetic layer, such that the spacer layer is interspersed between the first free layer and the superparamagnetic layer.

[0009] Yet another exemplary embodiment is directed to an MTJ storage element comprising a first magnetic means for holding a first polarization; a composite magnetic means for holding a second polarization comprising ferromagnetic means; superparamagnetic means; and nonmagnetic means interspersed between the ferromagnetic means and the superparamagnetic means, wherein a thickness of the nonmagnetic means controls a manner of coupling between the ferromagnetic means and the superparamagnetic means; and insulating means interspersed between the first magnetic means and composite magnetic means to enable a flow of tunneling current between the first magnetic means and the composite magnetic means.

[0010] Another exemplary embodiment is directed to a method of forming an MTJ storage element, the method comprising a step for forming a stack comprising a pinned layer and a barrier layer; and a step for forming a composite free layer on top of the barrier layer comprising a first free layer, a nonmagnetic spacer layer and a superparamagnetic layer, such that the spacer layer is interspersed between the first free layer and the superparamagnetic layer.
BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof.

[0012] FIG. 1 illustrates a conventional Spin Transfer Torque Magnetoresistive Random Access Memory (STT-MRAM) cell array.

[0013] FIG. 2 illustrates an exemplary MTJ storage element utilizing a composite free layer structure comprising a superparamagnetic layer.

[0014] FIG. 3 illustrates different coupling effects in exemplary embodiments.

[0015] FIG. 4 illustrates interlayer fringe coupling effects in exemplary embodiments.

[0016] FIG. 5 illustrates interlayer exchange coupling effects in exemplary embodiments.

[0017] FIG. 6 illustrates different techniques to form an exemplary superparamagnetic layer.

[0018] FIG. 7 illustrates a flowchart for forming a memory device.

DETAILED DESCRIPTION

[0019] Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.

[0020] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiments of the invention" does not require that all embodiments of the invention include the discussed feature, advantage or mode of operation. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention.
[0021] As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0022] The disclosed embodiments recognize that, with conventional methods, it may be difficult to decrease the switching current density of MTJ devices while maintaining their thermal stability. The physical characteristics of the ferromagnetic materials used in MTJ cells include a large internal magnetic field at room temperature. Reversing the polarization of ferromagnetic layers requires a relatively large current density, unless accompanied by factors such as an increased thermal energy.

[0023] Existing techniques to reduce the switching current density include the use of "spin diffusion layers", as in Huai et al., "Current-Switched Spin-Transfer Magnetic Devices with Reduced Spin-Transfer Switching Current Density", United States Patent Application Publication, Pub. No. US 2007/0171694 A1. The spin diffusion layers diffuse the electron spins outside the MTJ. As a result, the spin dependent current flowing through the MTJ may be diminished in the layers outside the free layer so that most of the spin dependent current may be confined in the magnetically active part of the MTJ stack. This may lead to a reduction of the switching current density.

[0024] Prior art techniques also include the use of low saturation magnetization materials for forming the free layer. For example, Nguyen et al., "Spin Transfer Magnetic Element Having Low Saturation Magnetization Free Layers", United States Patent Application Publication, Pub. No.
US 2007/0159734 A1, which is incorporated in its entirety herein, describes techniques wherein the free layer includes ferromagnetic materials diluted with nonmagnetic materials and/or ferrimagnetically doped to provide low saturation magnetizations. Lowering the saturation magnetization of the free layer may reduce the switching current density.

[0025] Exemplary embodiments recognize that, in contrast to ferromagnetic materials, the magnetization of superparamagnetic materials is significantly low at room temperature. Accordingly, a very low current density is required to reverse the polarization of a composite free layer with a superparamagnetic material at room temperature. While existing techniques include the limitations of free layers formed of ferromagnetic materials, disclosed embodiments provide techniques wherein the free layer may advantageously include superparamagnetic materials. Exemplary embodiments detail the use of superparamagnetic materials in lowering the current density while enhancing the thermal stability of the MTJ.

[0026] FIG. 2 illustrates the MTJ cell 105 according to an exemplary embodiment. An antiferromagnetic (AFM) layer 204 is first formed on a bottom electrode 202, and then a first ferromagnetic layer is formed on top of the AFM layer. The first ferromagnetic layer is "pinned" with a fixed magnetic polarization to form a pinned layer. The pinned layer may include one or more layers, such as a bottom pinned layer 206, a coupling layer 208 typically formed of a non-magnetic metal such as ruthenium, and a top pinned layer 210. Layers 206, 208 and 210 may be collectively referred to as a pinned layer stack. A tunneling barrier layer 212 is formed of an insulator such as a metal oxide on top of the pinned layer. A free layer with variable magnetic polarization is formed on top of the barrier layer. The free layer may include a first free layer 214, a non-magnetic spacer layer 216 and a superparamagnetic layer 218 as shown in FIG. 2.
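The layer ordering of the exemplary stack in FIG. 2 can be captured as a simple bottom-to-top list using the reference numerals from the description. This is purely a bookkeeping sketch of what paragraph [0026] describes; no new structure is introduced:

```python
# Sketch of the layer ordering in the exemplary MTJ stack of FIG. 2,
# from bottom electrode to composite free layer, using the reference
# numerals given in the description.

mtj_stack = [
    (202, "bottom electrode"),
    (204, "antiferromagnetic (AFM) layer"),
    (206, "bottom pinned layer"),
    (208, "coupling layer (e.g., ruthenium)"),
    (210, "top pinned layer"),
    (212, "tunneling barrier layer"),
    (214, "first free layer"),
    (216, "nonmagnetic spacer layer"),
    (218, "superparamagnetic layer"),
]

# The last three layers together form the composite ("synthetic") free layer.
composite_free_layer = [name for num, name in mtj_stack if num in (214, 216, 218)]
```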
Such a multilayered free layer structure is called a composite free layer or "synthetic" free layer. It will be appreciated that the formation of MTJ devices with synthetic free layers is well known. A top electrode (not shown) is formed on top of the free layer. [0027] The electrons tunneling through the barrier layer 212 from the pinned layers enter the first free layer 214, influencing the magnetic polarization of the first free layer 214. If the switching current density is reached, then depending on the direction of spin of the majority of electrons tunneling through the barrier 212, the magnetic polarization of the first free layer may become substantially aligned (parallel) with the magnetic polarization of the pinned layers, or substantially aligned opposite (anti-parallel) to the magnetic polarization of the pinned layers. The directional arrows illustrated within the layers are merely an illustrative aid to depict an exemplary direction of polarization of the layer, and the embodiments are not limited in any manner by these illustrations. The spacer layer 216 is non-magnetic. The magnetization of the superparamagnetic layer 218 may be influenced by one of three methods. [0028] The first method of coupling the superparamagnetic layer 218 is illustrated in FIG. 3A. If the nonmagnetic spacer layer 216 is made sufficiently thin, the superparamagnetic layer 218 may become magnetically "coupled" to the first free layer 214. The magnetic polarization of the superparamagnetic layer 218 is derived from, and aligned with, the polarization of the first free layer 214 due to an exchange of energy between the two layers. This manner of coupling is called "direct interlayer coupling". The coupling strength is controlled by factors such as the thickness of the spacer layer 216. [0029] The spacer layer 216 may be formed from a nonmagnetic material such as Ru, which leads to a second coupling mechanism known as "RKKY coupling" or "indirect interlayer coupling".
In this manner of coupling, the thickness of the nonmagnetic material determines whether the coupling between the first free layer 214 and superparamagnetic layer 218 is parallel or anti-parallel. [0030] FIG. 3B illustrates a third method of coupling between the free layers 214 and 218. If the thickness of the spacer layer 216 is increased, the interlayer coupling effect is diminished. However, the effect of fringe fields between the sidewalls of the two free layers 214 and 218 may lead to a magnetic coupling between them. This manner of coupling is usually referred to as "interlayer fringe coupling". The polarization of the superparamagnetic layer 218 is usually aligned anti-parallel to the polarization of the first free layer 214 under the effect of fringe coupling. [0031] In the case of interlayer exchange coupling (the first and second methods described above, as shown in FIG. 3A), it is possible to achieve a strong coupling between free layers 214 and 218. The coupling strength may depend on factors which include the thickness and material of the spacer layer 216. If the spacer layer 216 is sufficiently thin, a strong coupling effect may be formed. In a strongly coupled synthetic free layer structure, the two free layers 214 and 218 may behave as though they were one single free layer. This facilitates a "coherent" switching of the free layer, i.e., when a switching current causes the first free layer 214 to switch polarization, the coupling effect causes an instantaneous switching effect on the superparamagnetic layer 218. [0032] The maximum polarization which can be induced in a magnetic material is called the saturation magnetization. It will be understood that achieving a lower saturation magnetization will lead to a lower switching current density.
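The dependence of switching current density on saturation magnetization stated above can be made concrete with the standard Slonczewski spin-transfer-torque expression for an in-plane magnetized free layer. This is textbook background, not a formula appearing in this document:

```latex
J_{c0} \;=\; \frac{2e}{\hbar}\,
\frac{\alpha\, M_s\, t_F \left(H_K + 2\pi M_s\right)}{\eta}
```

Here $\alpha$ is the Gilbert damping constant, $M_s$ the saturation magnetization, $t_F$ the free-layer thickness, $H_K$ the anisotropy field, and $\eta$ the spin-transfer efficiency. Because $M_s$ enters both directly and through the $2\pi M_s$ demagnetization term, the critical current density $J_{c0}$ falls roughly quadratically as the net $M_s$ of the composite free layer is reduced, which is why the low-moment superparamagnetic layer lowers the switching current.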
In the composite free layer structure with strong exchange coupling, as explained above, only a current density sufficient to switch the first free layer 214 is required, but the net effect is equivalent to switching both free layers 214 and 218. Moreover, a composite free layer can have a lower switching current density than a single free layer, such as 214. As described previously, existing composite free layer structures are formed from ferromagnetic materials. Hence the advantages of using a composite free layer to achieve a lower switching current density are limited by the inherent magnetic properties of ferromagnetic materials. However, according to an exemplary embodiment, utilizing a superparamagnetic layer 218 in a strongly coupled synthetic free layer leads to a significantly lower switching current density. [0033] Superparamagnetic materials are composed of small ferromagnetic clusters. However, these clusters are of such small dimensions that their polarizations may flip randomly under thermal fluctuations. As a result, the net polarization of a superparamagnetic material averages out to zero in the absence of an external magnetic field. However, when an external magnetic field is applied, the superparamagnetic material becomes easily polarized, even at room temperature. On the other hand, ferromagnetic materials have an inherent non-zero polarization at room temperature. Accordingly, reversing the polarization of a ferromagnetic material at room temperature requires significantly greater magnetic energy than polarizing a superparamagnetic material. [0034] Using a superparamagnetic material to form the superparamagnetic layer 218 in the case of strongly coupled free layers which exhibit coherent switching has various beneficial effects. For example, the coupling fields required to polarize the superparamagnetic free layer 218 are lower, which in turn leads to a stable domain structure resulting in better uniformity of switching behavior.
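The random flipping of small ferromagnetic clusters described above is conventionally captured by the Néel–Arrhenius relaxation time. Again, this is standard superparamagnetism background rather than material from this document:

```latex
\tau \;=\; \tau_0 \exp\!\left(\frac{K V}{k_B T}\right)
```

with attempt time $\tau_0 \sim 10^{-9}\,\mathrm{s}$, anisotropy energy density $K$, cluster volume $V$, and thermal energy $k_B T$. For the very small cluster volumes of a superparamagnetic layer, $\tau$ at room temperature is far shorter than any measurement time, so the time-averaged polarization is zero unless an external field, or the coupling field from the first free layer, biases the clusters.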
Exemplary embodiments use a superparamagnetic material to form the superparamagnetic layer 218 in a strongly coupled composite free layer structure, which results in lower saturation magnetization of the composite free layer and an enhancement of the spin-torque efficiency in the STT-MRAM. [0035] Exemplary embodiments also include the use of superparamagnetic materials in the case of composite free layers which exhibit fringe coupling behavior. In contrast to the coherent switching characteristics exhibited by exchange coupling, the coupling effect in fringe coupling is weaker, and the switching behavior is more stochastic. Sometimes the fringe coupling may be so weak that it may be the equivalent of no magnetic coupling at all. In such scenarios, the composite free layer exhibits a "non-coherent" switching behavior, i.e., the first free layer 214 undergoes switching at a first point in time, and due to the extremely weak coupling, the superparamagnetic layer 218 switches subsequently at a second point in time. It will be appreciated that such non-coherent switching leads to an increased spin-torque efficiency, and hence a reduced switching current density. [0036] FIG. 4 illustrates exemplary embodiments using a superparamagnetic layer 402. FIG. 4A illustrates randomly aligned magnetic clusters in the superparamagnetic layer 402 taken in isolation. In the absence of an external magnetic field, the polarizations of the magnetic clusters cancel out, resulting in a net magnetic moment of zero. Depending on the type of coupling behavior (i.e., exchange coupling or fringe coupling), and the thickness and material of the nonmagnetic spacer layer, the superparamagnetic layer 402 may be polarized parallel or anti-parallel with respect to the first free layer 214.
The first free layer 214 may itself become polarized parallel (P) or anti-parallel (AP) with respect to the top pinned layer, depending on the spin direction of the majority of electrons tunneling through the barrier layer 212. [0037] FIG. 4B represents an exemplary embodiment wherein the first free layer 214 is polarized anti-parallel (AP) with respect to the top pinned layer 210. Fringe coupling effects in this embodiment cause the superparamagnetic layer 402 to be polarized anti-parallel to the first free layer 214. This embodiment is referred to as an "AP state" of MTJ 105. FIG. 4C represents a "P state" of MTJ 105. [0038] When the magnetic moment between a superparamagnetic layer and a first free layer is anti-parallel, the switching current density required for "P to AP" polarization of the superparamagnetic layer 402 is effectively the same as the switching current density required for "AP to P" polarization. The use of superparamagnetic materials to form the superparamagnetic layer 402 can result in uniform switching behavior by promoting a stable domain structure. [0039] FIG. 5 illustrates yet another exemplary embodiment wherein the coupling mechanism between the two free layers 214 and 218 is interlayer exchange coupling. As explained previously, the superparamagnetic layer 402 may be aligned either parallel to the first free layer 214 (as illustrated in FIGS. 5A-B) or anti-parallel to the first free layer 214 (as illustrated in FIGS. 5C-D), based on the thickness and material of the nonmagnetic spacer layer 216. [0040] Both parallel and anti-parallel alignments between the first free layer 214 and superparamagnetic layer 402 advantageously reduce the switching current density of the MTJ cell by enhancing the spin-torque efficiency and lowering the saturation magnetization of the free layer.
For the parallel alignment between the first free layer 214 and superparamagnetic layer 402, the increase of the spin-torque efficiency is due to the reflection of majority-spin electrons by the superparamagnetic free layer during switching from AP to P. Spin current that comes from the barrier layer 212 enters the first free layer 214, which acts as a spin filter, changing the magnitude and direction of the spin current. This spin current is reflected back at the interface between the first free layer 214 and superparamagnetic layer 402. The torque exerted by the reflected current assists the torque of the spin current entering the free layer 214, causing the free layer 214 to switch at a lower switching current density. In other words, the enhancement of the spin-torque efficiency is due to the phase difference between the first free layer 214 and superparamagnetic layer 402. The reference, Yen et al., "Reduction in critical current density for spin torque transfer switching with composite free layer", Applied Physics Letters 93, 092504 (2008), provides further details on the relationship between phase difference in composite free layer structures and the associated reflection of spin current, contributing to lower switching current density. [0041] For the anti-parallel alignment between the first free layer 214 and superparamagnetic layer 402, the increase of the spin-torque efficiency is due to the enhancement (polarization) of minority-spin electrons by the superparamagnetic free layer during switching from P to AP. In the case of anti-parallel alignment as illustrated in FIG. 5D, the minority electron spin direction contributes significantly to balancing the switching current density between the "P to AP" and "AP to P" transitions. [0042] Methods for forming the superparamagnetic layer 402 are well known to one of ordinary skill in the art and will not be described in detail herein. FIG. 6 illustrates a few conventional techniques for forming a superparamagnetic layer 402. FIG.
6A demonstrates a common technique wherein the thickness of a layer formed from a ferromagnetic or antiferromagnetic material is reduced in dimension. As the thickness is reduced, the ferromagnetic or antiferromagnetic material starts losing its inherent magnetic field, resulting in randomly aligned magnetic clusters. When the thickness is reduced below a certain value (typically less than 15 Å for a ferromagnetic material, and less than 50 Å for an antiferromagnetic material) the net magnetic moment of the randomly polarized magnetic clusters becomes zero, and the material is said to be superparamagnetic. [0043] A nonmagnetic material doped with ferromagnetic materials, as shown in FIG. 6B, may also give rise to a net magnetic moment of zero, but under the influence of an external magnetic field, the ferromagnetic clusters can be easily polarized to align with the external field, at room temperature. A superparamagnetic layer may be formed using the technique illustrated in FIG. 6B. FIG. 6C shows a similar technique to form a superparamagnetic layer, by doping a ferromagnetic layer with nonmagnetic materials. FIG. 6D represents yet another method for forming a superparamagnetic layer, by using a laminated ferromagnetic/nonmagnetic multilayer. The superparamagnetic layer 402 in exemplary embodiments may be advantageously formed using techniques which include the techniques of FIGS. 6A-D. [0044] Accordingly, disclosed embodiments with a composite free layer comprising a superparamagnetic layer advantageously exhibit reduced switching current density due to low saturation magnetization and enhancement of spin-torque efficiency. Further, as compared to composite free layers which only include ferromagnetic materials, the disclosed composite free layer structures with a superparamagnetic layer contribute to enhanced thermal stability due to a stable domain structure.
Moreover, the larger volume of the disclosed composite free layer structures, as compared to conventional single-layer free layer structures, also improves the thermal stability of the MTJ device. [0045] It will be appreciated from the foregoing disclosure that embodiments can include various methods, including those used to form the memory devices described herein. Accordingly, as illustrated in FIG. 7, an embodiment can include a method for forming a memory device having a magnetic tunnel junction (MTJ) storage element. A stack is formed (step 702) having an antiferromagnetic layer, a pinned layer stack and a barrier layer. A composite free layer is formed on top of the barrier layer (step 704). The composite free layer includes a first free layer, a nonmagnetic spacer layer and a superparamagnetic layer, such that the spacer layer is interspersed between the first free layer and the superparamagnetic layer. It will be appreciated that FIG. 7 and the foregoing description are not intended to limit the embodiments to the illustrated and expressly discussed features. Embodiments can further include any of the additional steps/functionalities described herein. [0046] It will be appreciated that memory devices including the MTJ storage elements described herein may be included within a mobile phone, portable computer, hand-held personal communication system (PCS) unit, portable data units such as personal data assistants (PDAs), GPS-enabled devices, navigation devices, set-top boxes, music players, video players, entertainment units, fixed-location data units such as meter reading equipment, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Accordingly, embodiments of the disclosure may be suitably employed in any device which includes active integrated circuitry including memory having MTJ storage elements as disclosed herein.
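Purely as an illustrative summary of the device structure of FIG. 2 and the fabrication method of FIG. 7 (the layer names and reference numerals follow the figures; the code itself is not part of the patent), the bottom-to-top stack assembled by steps 702-704 can be written out and sanity-checked:

```python
# Bottom-to-top layer order of the MTJ cell of FIG. 2, with the composite
# ("synthetic") free layer formed per steps 702-704 of FIG. 7.
MTJ_STACK = [
    "bottom electrode (202)",
    "antiferromagnetic layer (204)",
    "bottom pinned layer (206)",
    "coupling layer, e.g. Ru (208)",
    "top pinned layer (210)",
    "tunneling barrier, metal oxide (212)",
    "first free layer (214)",
    "nonmagnetic spacer (216)",
    "superparamagnetic layer (218)",
    "top electrode",
]

def spacer_is_interspersed(stack):
    """Check step 704's requirement: the spacer layer lies between the
    first free layer and the superparamagnetic layer."""
    i = next(k for k, name in enumerate(stack) if "spacer" in name)
    return ("first free layer" in stack[i - 1]
            and "superparamagnetic" in stack[i + 1])
```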
[0047] The foregoing disclosed devices and methods can be designed and configured into GDSII and GERBER computer files stored on computer-readable media. These files are in turn provided to fabrication handlers who fabricate devices based on these files. The resulting products are semiconductor wafers that are then cut into semiconductor die and packaged into semiconductor chips. The chips are then employed in the devices described above. [0048] Accordingly, embodiments can include machine-readable media or computer-readable media embodying instructions which, when executed by a processor, transform the processor and any other cooperating elements into a machine for performing the functionalities described herein as provided for by the instructions. [0049] While the foregoing disclosure shows illustrative embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments described herein need not be performed in any particular order. Furthermore, although elements of the embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
PROBLEM TO BE SOLVED: To provide techniques of performing ranging with body motion capture.

SOLUTION: An apparatus comprises: a receiver configured to receive ranging information generated by ranging performed among one or more pairs of apparatuses in a body area network (BAN) mounted on at least one body; a first circuit configured to estimate motion of the at least one body on the basis of the ranging information; and a second circuit configured to schedule the ranging between the pairs of apparatuses according to a scheduling priority of each of the pairs.
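The abstract above describes scheduling ranging between node pairs according to a per-pair scheduling priority. A minimal sketch of such a scheduler follows; the node names and the priority values are invented for illustration and do not come from this document:

```python
import heapq

def schedule_rangings(pairs, priority, budget):
    """Select up to `budget` node pairs for the next ranging cycle,
    highest scheduling priority first."""
    # heapq is a min-heap, so negate priorities to pop the largest first.
    heap = [(-priority(pair), pair) for pair in pairs]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(budget, len(heap)))]

# Hypothetical priorities: e.g. higher for pairs whose last measurement
# is stale and whose line of sight is unlikely to be occluded by the body.
PRIORITY = {("wrist", "ankle"): 5.0,
            ("wrist", "hip"): 1.0,
            ("hip", "ankle"): 3.0}

# With two ranging slots available, the two highest-priority pairs win.
chosen = schedule_rangings(list(PRIORITY), PRIORITY.get, budget=2)
```

A real scheduler would recompute the priorities each cycle from the inputs the claims enumerate (elapsed time, estimated error, pose, occlusion probability, power), but the selection step reduces to this heap pop.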
1. A device for estimating body movement, the device comprising: a transmitter configured to transmit a first radio frequency (RF) signal to a first body-worn node of a body area network (BAN) worn on the body; a receiver configured to receive, from the first body-worn node, a second RF signal transmitted in response to the first RF signal transmitted by the device, and first ranging information indicating a distance between the first body-worn node and a second body-worn node of the BAN worn on the body; and a processor configured to determine second ranging information indicating a distance between the device and the first body-worn node based on round-trip timing of the first and second RF signals, and to estimate the motion of the body based at least on the first and second ranging information.
2. The apparatus of claim 1, wherein the first signal, the second signal, and the first ranging information are communicated using ultra-wideband (UWB) technology.
3. The apparatus of claim 1, wherein the receiver is further configured to receive data from a sensor associated with the body, and the processor is further configured to estimate the movement of the body using the data received from the sensor.
4. The apparatus of claim 3, wherein the processor is further configured to correct a drift component of the sensor based on the first or second ranging information.
5. The apparatus of claim 3, wherein the sensor comprises at least one of an inertial sensor, a magnetic detector, a proximity device, a microphone, or a camera to provide the data.
6. The apparatus of claim 3, wherein the sensor is associated with the first body-worn node or the second body-worn node of the BAN, and the data is received via transmission from the first or second body-worn node of the BAN.
7. The apparatus of claim 1, wherein the receiver is further configured to receive, from a third body-worn node of the BAN, third ranging information indicating a distance between the third body-worn node and a fourth body-worn node of the BAN, and the processor is configured to schedule ranging operations to be performed between a first pair comprising the first body-worn node and the second body-worn node and a second pair comprising the third body-worn node and the fourth body-worn node.
8. The apparatus of claim 7, wherein the processor is further configured to determine, based on a model of the body of the BAN, whether the first pair or the second pair of body-worn nodes is occluded by the body.
9. The apparatus of claim 8, wherein the processor is further configured to turn off the ranging operation of a pair of body-worn nodes in response to determining that the body occludes that pair.
10. The apparatus of claim 8, wherein the processor is further configured to increase a rate of ranging operations of a non-occluded pair of body-worn nodes in response to determining that the body occludes one of the pairs.
11. The apparatus of claim 8, wherein the processor is further configured, in response to determining that the body occludes the first pair of body-worn nodes, to schedule a first ranging operation between the first body-worn node and the third body-worn node and a second ranging operation between the second body-worn node and the third body-worn node, the ranging information of each of the first and second ranging operations being useful for the processor to determine the distance between the first body-worn node and the second body-worn node.
12. The apparatus of claim 7, wherein the processor is further configured to schedule the ranging operation of the first or second pair based on at least one of: a time elapsed since a last ranging operation; a magnitude of an estimated output error associated with one of the pairs; a magnitude of an estimated sensor error associated with one of the pairs; a current pose of the body of the BAN; a previous pose of the body; a predicted future pose of the body; a previous range measurement for one of the pairs; a probability of occlusion between one of the pairs; power consumption associated with one of the pairs; or one or more values of inertial sensor data associated with the BAN.
13. The apparatus of claim 7, wherein each ranging information generated by the ranging operations of the first and second pairs of body-worn nodes includes a timestamp, and the receiver of the device is further configured to collect the respective ranging information from the first and second pairs of body-worn nodes based at least in part on the timestamps.
14. The apparatus of claim 13, wherein the transmitter is further configured to transmit data packets to the pairs of body-worn nodes with timing information associated with a global system time sufficient to enable generation of the timestamps of the ranging information.
15. The apparatus of claim 1, wherein the apparatus can be mounted on the body of the BAN or implemented as a stationary ground node.
16. The apparatus of claim 1, wherein the processor is further configured to calibrate, using the first or second ranging information, parameters associated with a model of the body used to track the movement of the body.
17. The apparatus of claim 16, wherein the parameters comprise a position of a body-worn node on the body, an orientation of one of the body-worn nodes on the body, a height of a person using one of the body-worn nodes, or a length of a bone of a person to which one of the body-worn nodes is attached.
18. The apparatus of claim 1, wherein the processor is further configured to determine whether the movement of the body of the BAN corresponds to a recognizable gesture.
19. The apparatus of claim 18, wherein the processor is further configured to use a pattern matching algorithm to determine whether the movement corresponds to a recognizable gesture.
20. The apparatus of claim 18, wherein the processor is further configured to combine the first or second ranging information and data obtained from one or more inertial sensors of the BAN to determine whether the movement corresponds to a recognizable gesture.
21. A method for estimating body movement, the method comprising: transmitting a first radio frequency (RF) signal to a first body-worn node of a body area network (BAN) worn on the body; receiving, from the first body-worn node, a second RF signal transmitted in response to the transmitted first RF signal, and first ranging information indicating a distance between the first body-worn node and a second body-worn node of the BAN; determining second ranging information indicating a distance to the first body-worn node based on round-trip timing of the first and second RF signals; and estimating the movement of the body based at least on the first and second ranging information.
22. The method of claim 21, wherein the first signal, the second signal, and the first ranging information are communicated using ultra-wideband (UWB) technology.
23. The method of claim 21, wherein the method is implemented by a body-worn device of the BAN or by a device in a fixed position not worn on a body of the BAN.
24. The method of claim 21, wherein the second body-worn node is worn on another body of the BAN, the method further comprising estimating a relative position between the body and the other body of the BAN.
25. The method of claim 21, further comprising: receiving data from a sensor associated with the body; and estimating the movement of the body using the data received from the sensor.
26. The method of claim 25, further comprising correcting a drift component of the sensor data based on the first or second ranging information.
27. The method of claim 25, wherein the sensor comprises at least one of an inertial sensor, a magnetic detector, a proximity device, a microphone, or a camera to provide the data.
28. The method of claim 25, wherein the sensor is associated with the first body-worn node or the second body-worn node of the BAN, and the data is received via transmission from the first or second body-worn node of the BAN.
29. The method of claim 21, further comprising: receiving, from a third body-worn node of the BAN, third ranging information indicating a distance between the third body-worn node and a fourth body-worn node of the BAN; and scheduling ranging operations to be performed between a first pair of body-worn nodes comprising the first body-worn node and the second body-worn node and a second pair of body-worn nodes comprising the third body-worn node and the fourth body-worn node.
30. The method of claim 29, further comprising determining, based on a model of the body of the BAN, whether the first pair or the second pair of body-worn nodes is occluded by the body.
31. The method of claim 30, further comprising turning off the ranging operation of a pair of body-worn nodes in response to determining that the body occludes that pair.
32. The method of claim 30, further comprising increasing a rate of ranging operations of a non-occluded pair of body-worn nodes in response to determining that the body occludes one of the pairs.
33. The method of claim 30, further comprising: in response to determining that the body occludes the first pair of body-worn nodes, scheduling a first ranging operation between the first body-worn node and the third body-worn node and a second ranging operation between the second body-worn node and the third body-worn node; and determining, based on ranging information of each of the first and second ranging operations, the distance between the first body-worn node and the second body-worn node to estimate the movement of the body.
34. The method of claim 29, wherein each ranging information generated by the ranging operations of the first and second pairs of body-worn nodes includes a timestamp, the method further comprising collecting the respective ranging information from the first and second pairs of body-worn nodes based at least in part on the timestamps.
35. The method of claim 34, further comprising transmitting data packets to the pairs of body-worn nodes with timing information associated with a global system time sufficient to enable generation of the timestamps of the ranging information.
36. The method of claim 21, further comprising determining whether the movement of the body of the BAN corresponds to a recognizable gesture.
37. The method of claim 36, wherein determining whether the movement of the body corresponds to a recognizable gesture comprises using a pattern matching algorithm to determine whether the movement corresponds to a recognizable gesture.
38. The method of claim 21, further comprising combining the first or second ranging information with data obtained from one or more inertial sensors of the BAN to determine whether the movement corresponds to a recognizable gesture.
39. A computer-readable storage medium storing processor-executable instructions that, in response to execution by a processor, cause the processor to perform operations comprising: transmitting, via a transmitter of a device, a first radio frequency (RF) signal to a first body-worn node of a body area network (BAN) worn on the body; receiving, via a receiver of the device, from the first body-worn node, a second RF signal transmitted in response to the first RF signal transmitted by the transmitter, and first ranging information indicating a distance between the first body-worn node and a second body-worn node of the BAN; determining second ranging information indicating a distance between the device and the first body-worn node based on round-trip timing of the first and second RF signals; and estimating the movement of the body based at least on the first and second ranging information.
40. The computer-readable storage medium of claim 39, wherein the transmitter and receiver of the device are configured to communicate using ultra-wideband (UWB) technology.
41. An apparatus for estimating body movement, the apparatus comprising: means for transmitting a first radio frequency (RF) signal to a first body-worn node of a body area network (BAN) worn on the body; means for receiving, from the first body-worn node, a second RF signal transmitted in response to the first RF signal transmitted by the apparatus, and first ranging information indicating a distance between the first body-worn node and a second body-worn node of the BAN worn on the body; means for determining second ranging information indicating a distance between the apparatus and the first body-worn node based on round-trip timing of the first and second RF signals; and means for estimating the motion of the body based at least on the first and second ranging information.
42. The apparatus of claim 41, wherein the means for transmitting and the means for receiving communicate according to an ultra-wideband (UWB) signaling protocol.
43. A user device for estimating body movement, the user device comprising: a transmitter configured to transmit a first ultra-wideband (UWB) signal to a first body-worn node of a body area network (BAN) worn on the body; a receiver configured to receive, from the first body-worn node, a second UWB signal transmitted in response to the first UWB signal transmitted by the user device, first ranging information indicating a distance between the first body-worn node and a second body-worn node of the BAN worn on the body, and sensor data from a sensor associated with the body; and a processor configured to determine second ranging information indicating a distance between the user device and the first body-worn node based on round-trip timing of the first and second UWB signals, and to estimate the movement of the body based at least on the first ranging information, the second ranging information, and the sensor data.
Ranging using body motion capture

Claim of priority

This application claims the benefit of U.S. Provisional Application Ser. No. 61/447,470, entitled "Ranging with body motion capture," filed on Feb. 28, 2011, which is assigned to the assignee of the present application and expressly incorporated herein by reference.

Certain aspects of the present disclosure generally relate to signal processing and, more particularly, to methods of ranging for body motion capture. Body tracking systems have advanced along two different paths. First, professional-grade "motion capture" systems are available that can capture the motion of actors, athletes, players, etc. with high fidelity, for use in, for example, movie and game studios. These systems are typically expensive and thus not suitable for consumer-grade applications. Second, living room game controllers have evolved in recent years from button-press types to those based on player actions. Because these are consumer products, the cost of this technology is quite low and generally the performance is also quite low. For example, in the NINTENDO® Wii system, a low-cost inertial sensor can detect hand movement used to control game play. Challenges with the accuracy of this type of game control have boosted the growth of camera-based motion capture using camera augmentation systems. For example, the SONY® Move system can use a camera to track spherical features on a handheld game controller. This input can be combined with the inertial sensor data to detect motion. In addition, the MICROSOFT® Kinect system can remove the controller entirely and detect body movement using a combination of a conventional camera and a depth-detection camera. There are two major classes of problems with current technology. First, these systems have performance issues that limit the types of motion that can be detected and limit the types of games and user interactions that are possible.
For example, a camera system may only operate on objects that are in the camera's field of view and not blocked by an object or person. Second, camera augmentation systems are constrained to operate in an environment, usually a living room, in which a stationary camera may be installed.

As such, advances in technology that improve consumer-grade body tracking performance are desirable, allowing these systems to go wherever the user desires. Exemplary applications include mobile gaming between one or more players, and sports performance tracking and training (outdoors or at the gym). Furthermore, even more applications for such tracking may appear once mobile body tracking technology is available at consumer prices.

Certain aspects of the present disclosure provide an apparatus wearable on a body of a body area network (BAN). The apparatus generally includes a first circuit configured to perform ranging with another body-worn device using ultra-wide band (UWB) radio technology, wherein the ranging comprises communicating a signal with the other device and the signal conforms to the UWB radio technology.

Certain aspects of the present disclosure provide a method for communication. The method generally includes performing, by a device wearable on a body of a body area network (BAN), ranging with another body-worn device using ultra-wide band (UWB) radio technology, wherein the ranging comprises communicating a signal with the other device and the signal conforms to the UWB radio technology; transmitting information generated by the ranging to a stationary device of the BAN; and generating, based on the ranging, information used to track movement of the body.

Certain aspects of the present disclosure provide an apparatus wearable on a body of a body area network (BAN).
The apparatus generally includes means for performing ranging with another body-worn device using ultra-wide band (UWB) radio technology, wherein the ranging comprises communicating a signal with the other device and the signal conforms to the UWB radio technology; means for transmitting information generated by the ranging to a stationary device of the BAN; and means for generating, based on the ranging, information used to track movement of the body.

Certain aspects of the present disclosure provide a computer program product for communication performed by a body-wearable device of a body area network (BAN). The computer program product generally includes a computer-readable medium encoded with instructions executable to perform ranging with another body-worn device using ultra-wide band (UWB) wireless technology, wherein the ranging comprises communicating a signal with the other device and the signal conforms to the UWB radio technology.

Certain aspects of the present disclosure provide a user device wearable on a body of a body area network (BAN). The user device generally includes a circuit configured to perform ranging with another body-worn user device using ultra-wide band (UWB) radio technology, wherein the ranging comprises communicating a signal with the other user device and the signal conforms to the UWB radio technology; and an interface configured to display an indication based on the communicated signal.

Certain aspects of the present disclosure provide an apparatus for communication. The apparatus generally includes a first circuit configured to receive ranging information generated by ranging performed between one or more pairs of devices worn on at least one body in a body area network (BAN), and a second circuit configured to estimate movement of the at least one body based on the ranging information.

Certain aspects of the present disclosure provide a method for communication.
The method generally includes receiving ranging information generated by ranging performed between one or more pairs of devices worn on at least one body in a body area network (BAN); estimating movement of the at least one body based on the ranging information; correcting at least one drift component of one or more sensors associated with the at least one body based on the ranging information; receiving one or more signals from one or more sensors associated with at least one of the body-worn devices of the BAN; and utilizing the one or more signals to estimate the body movement.

Certain aspects of the present disclosure provide an apparatus for communication. The apparatus generally includes means for receiving ranging information generated by ranging performed between one or more pairs of devices worn on at least one body in a body area network (BAN), and means for estimating movement of the at least one body based on the ranging information.

Certain aspects of the present disclosure provide a computer program product for communication performed by an apparatus. The computer program product generally includes a computer-readable medium encoded with instructions executable to receive ranging information generated by ranging performed between one or more pairs of devices worn on at least one body in a body area network (BAN), and to estimate movement of the at least one body based on the ranging information.

Certain aspects of the present disclosure provide a user device. The user device generally includes a circuit configured to receive ranging information generated by ranging performed between one or more pairs of devices worn on at least one body in a body area network (BAN); a circuit configured to estimate movement of the at least one body based on the ranging information; and an interface configured to display an indication based on the ranging information.

Certain aspects of the present disclosure provide an apparatus wearable on the body.
The apparatus generally includes a wireless circuit configured to perform data communication in a body area network (BAN) associated with the body and to perform ranging with another device in the BAN.

Certain aspects of the present disclosure provide an apparatus for communication. The apparatus generally includes a first circuit configured to asynchronously collect ranging information generated by ranging performed between pairs of devices in a body area network (BAN) associated with at least one body, and a second circuit configured to utilize the asynchronously collected ranging information to update a motion estimate of the at least one body.

Certain aspects of the present disclosure provide an apparatus for communication. The apparatus generally includes a circuit configured to schedule ranging between pairs of devices worn on the same body or on different bodies of a body area network (BAN), according to a scheduling priority of each pair.

Certain aspects of the present disclosure provide an apparatus for communication. The apparatus generally includes a receiver configured to receive information about ranging between a pair of devices worn on the same body or on different bodies, and a circuit configured to utilize this information to calibrate one or more parameters associated with a model of the body being used to track body movement.

Certain aspects of the present disclosure provide an apparatus for wireless communication incorporated into a body area network (BAN). The apparatus generally includes a first circuit configured to communicate with at least one device worn on a body of the BAN to obtain information associated with the body, and a second circuit configured to utilize the information to estimate movement of the body.

Certain aspects of the present disclosure provide an apparatus for communication. The apparatus generally includes a first circuit configured to collect ranging information generated by ranging performed between one or more pairs of devices in a body area network (BAN) associated with at least one body.
A second circuit of the apparatus may be configured to utilize the ranging information to determine whether a movement of the at least one body corresponds to a recognizable gesture.

So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description of the content briefly summarized above is given with reference to aspects, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical aspects of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit other equally effective aspects.

FIG. 1 illustrates an example of mobile body motion capture with ranging in a body area network (BAN), in accordance with certain aspects of the present disclosure.

FIG. 2 illustrates various components that may be utilized in a wireless device of a BAN, in accordance with certain aspects of the present disclosure.

FIG. 3 illustrates example operations that may be performed at a body-worn node for ranging with another node, in accordance with certain aspects of the present disclosure.

FIG. 3A shows example components that can perform the operations shown in FIG. 3.

FIG. 4 illustrates example operations that may be performed at a stationary node using information generated by ranging between one or more pairs of body-worn nodes, in accordance with certain aspects of the present disclosure.

FIG. 4A shows example components that can perform the operations shown in FIG. 4.

FIG. 5 illustrates example operations that may be performed at a body-worn node comprising a common radio for both ranging and data communication, in accordance with certain aspects of the present disclosure.

FIG. 5A shows example components that can perform the operations shown in FIG. 5.

FIG.
6 illustrates example operations that may be performed at a fixed node to process asynchronous range measurements, in accordance with certain aspects of the present disclosure.

FIG. 6A shows example components that can perform the operations shown in FIG. 6.

FIG. 7 illustrates example operations that may be performed at a fixed node for range scheduling, in accordance with certain aspects of the present disclosure.

FIG. 7A shows example components that can perform the operations shown in FIG. 7.

FIG. 8 illustrates example operations that may be performed at a fixed node for calibration of parameters associated with a BAN, in accordance with certain aspects of the present disclosure.

FIG. 8A shows example components that can perform the operations shown in FIG. 8.

FIG. 9 illustrates example operations that may be performed at a mobile device incorporated into a BAN, in accordance with certain aspects of the present disclosure.

FIG. 9A shows example components that can perform the operations shown in FIG. 9.

FIG. 10 illustrates example operations that may be performed at a fixed node for gesture recognition based on ranging information, in accordance with certain aspects of the present disclosure.

FIG. 10A shows example components that can perform the operations shown in FIG. 10.

Detailed description

Various aspects of the present disclosure are described more fully below with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the content disclosed herein, whether implemented independently of, or combined with, any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the content disclosed herein may be embodied by one or more elements of a claim.

The term "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the drawings and in the following description of the preferred aspects.
The detailed description and drawings are merely illustrative of the disclosure rather than limiting; the scope of the disclosure is defined by the appended claims and equivalents thereof.

Exemplary Wireless Communication Systems

The techniques described herein may be used for various broadband wireless communication systems, including communication systems based on orthogonal multiplexing schemes and single-carrier transmission. Examples of such communication systems include Orthogonal Frequency Division Multiple Access (OFDMA) systems, Single-Carrier Frequency Division Multiple Access (SC-FDMA) systems, Code Division Multiple Access (CDMA) systems, and so forth. An OFDMA system utilizes Orthogonal Frequency Division Multiplexing (OFDM), a modulation technique that partitions the overall system bandwidth into multiple orthogonal subcarriers. These subcarriers may also be called tones, bins, etc. With OFDM, each subcarrier may be independently modulated with data. An SC-FDMA system may utilize interleaved FDMA (IFDMA) to transmit on subcarriers distributed across the system bandwidth, localized FDMA (LFDMA) to transmit on a block of adjacent subcarriers, or enhanced FDMA (EFDMA) to transmit on multiple blocks of adjacent subcarriers. In general, modulation symbols are created in the frequency domain with OFDM and in the time domain with SC-FDMA. A CDMA system may utilize spread-spectrum techniques and a coding scheme in which each transmitter (i.e., user) is assigned a code in order to allow multiple users to be multiplexed over the same physical channel.

The teachings herein may be incorporated into (e.g., embodied in or implemented by) various wired or wireless devices (e.g., nodes). In some aspects, a node comprises a wireless node.
Such wireless nodes may, for example, provide connectivity to or for a network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. In some aspects, a wireless node implemented in accordance with the teachings herein may comprise an access point or an access terminal.

Certain aspects of the present disclosure may support methods implemented in a body area network (BAN). A BAN represents the concept of continuous body monitoring, for motion capture, diagnostic purposes in medicine, and so on.

FIG. 1 shows an example 100 of a mobile game between two players, each wearing nodes. Each node may determine the distance (i.e., range) to other nodes located on the same player or on the other player. An optional stationary ground node 102 is also shown in FIG. 1; it is not worn on a body but is instead placed in a stationary position. In aspects of the present disclosure, the wearable nodes 104 and the stationary node 102 may communicate with each other as part of the BAN.

Each of the wearable nodes 104 may comprise a wireless sensor that senses (acquires) one or more signals associated with the body (e.g., an electrocardiogram (ECG) signal, an electroencephalogram (EEG) signal, a three-dimensional accelerometer (3D-Accl) signal, etc.) and communicates this signal for processing purposes to a stationary node (also referred to herein as an estimator) 102, e.g., via the wireless channel or communication link 106 shown in FIG. 1. In aspects of the present disclosure, pairs of body-worn nodes may also communicate with one another for range-sensing purposes. Range information generated by the wearable nodes 104 may be utilized at the stationary node 102 to estimate the movement of the players of FIG. 1.

Thus, the BAN of FIG.
1 may be considered a wireless communication system in which various wireless nodes communicate using an orthogonal multiplexing scheme, single-carrier transmission, a pulse multiplexing scheme, or another communication method. The estimator 102 may be a monitoring device, a personal digital assistant (PDA), a mobile handset, a personal computer, etc. In one aspect, the wireless nodes of FIG. 1 may also operate in accordance with compressed sensing (CS), in which the acquisition rate is less than the Nyquist rate of the signal being acquired. For example, the wearable nodes of FIG. 1 may acquire signals associated with the body according to CS.

As described in detail below, in some aspects the communication link 106 comprises a pulse-based physical layer. For example, the physical layer may utilize ultra-wide band pulses that have a relatively short length (e.g., on the order of a few nanoseconds) and a relatively wide bandwidth. In some aspects, ultra-wide band may be defined as having a fractional bandwidth on the order of about 20% or more, and/or having a bandwidth on the order of about 500 MHz or more. The fractional bandwidth is the bandwidth associated with the device divided by its center frequency. For example, a device according to the present disclosure may have a bandwidth of 1.75 GHz and a center frequency of 8.125 GHz, so its fractional bandwidth is 1.75/8.125, or 21.5%.

FIG. 2 illustrates various components that may be utilized in a wireless device (wireless node) 202 that may be employed within the system of FIG. 1. The wireless device 202 is an example of a device that may be configured to implement the various methods described herein. The wireless device 202 may correspond to either the estimator 102 of FIG. 1 or a wearable node 104.

The wireless device 202 may include a processor 204 that controls the operation of the wireless device 202. The processor 204 may also be referred to as a central processing unit (CPU).
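As a brief aside, the fractional-bandwidth arithmetic in the UWB definition above can be checked with a short sketch. The helper functions are hypothetical, not part of the disclosure; the 1.75 GHz / 8.125 GHz figures come from the example in the text.

```python
def fractional_bandwidth(bandwidth_hz: float, center_freq_hz: float) -> float:
    """Fractional bandwidth = bandwidth divided by center frequency."""
    return bandwidth_hz / center_freq_hz

def is_uwb(bandwidth_hz: float, center_freq_hz: float) -> bool:
    """UWB per the definition above: fractional bandwidth of about 20% or
    more, and/or a bandwidth of about 500 MHz or more."""
    return (fractional_bandwidth(bandwidth_hz, center_freq_hz) >= 0.20
            or bandwidth_hz >= 500e6)

# The example device from the text: 1.75 GHz bandwidth, 8.125 GHz center.
fb = fractional_bandwidth(1.75e9, 8.125e9)
print(round(fb * 100, 1))        # 21.5 (percent)
print(is_uwb(1.75e9, 8.125e9))   # True
```

Note that the two criteria are alternatives: a 600 MHz-wide signal qualifies as UWB even at a center frequency where its fractional bandwidth is well under 20%.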
The memory 206, which may include both read-only memory (ROM) and random access memory (RAM), provides instructions and data to the processor 204. A portion of the memory 206 may also include non-volatile random access memory (NVRAM). The processor 204 typically performs logical and arithmetic operations based on program instructions stored within the memory 206. The instructions in the memory 206 may be executable to implement the methods described herein.

The wireless device 202 may also include a housing 208 containing a transmitter 210 and a receiver 212 that allow transmission and reception of data between the wireless device 202 and another wireless node (e.g., a wireless node at a remote location). The transmitter 210 and the receiver 212 may be combined into a transceiver 214. The wireless device 202 may also include one or more antennas 216 electrically coupled to the transceiver 214. The wireless device 202 may also include multiple transmitters, multiple receivers, and/or multiple transceivers (not shown).

The wireless device 202 may also include a signal detector 218 that may quantify the level of signals received by the transceiver 214. The signal detector 218 may quantify the detection of such signals using total energy, energy per subcarrier per symbol, power spectral density, and/or other quantification metrics. The wireless device 202 may also include a digital signal processor (DSP) 220 for use in processing signals.

The various components of the wireless device 202 may be coupled by a bus system 222, which may include a power bus, a control signal bus, and a status signal bus in addition to a data bus.

Mobile Body Tracking

According to certain aspects, a mobile body tracking system may use body-worn inertial sensors associated with a BAN. Such systems have limited dynamic range and can be limited by the estimator drift common to inertial sensors.
Furthermore, because each articulated part of the body may require its own orientation estimate, acceptable body motion estimation may require a large number (e.g., a minimum of 15) of sensor nodes. In addition, existing systems may require the performance of industrial-grade inertial sensors, which increases cost.

For consumers, ease of use and cost are the usual concerns. It is therefore desirable to develop new methods that reduce the number of nodes required for mobile body tracking while maintaining the desired accuracy.

Note that although the term "body" is used herein, the description also applies to capturing the pose of a machine such as a robot. The techniques presented may likewise be applied to capturing the pose of props used in activities, such as swords/shields, skateboards, rackets/clubs/bats, etc.

Use of Ranging for Motion Capture

Ranging is a sensing method that determines the distance between two nodes. A body motion estimator may combine inertial sensor measurements with ranges to correct errors, providing the ability to estimate the drift components in the inertial sensors. According to certain aspects, a set of wearable nodes may emit transmissions that can be detected using one or more stationary ground reference nodes. The reference nodes have known locations and can be time-synchronized with each other and with the wearable nodes to within a fraction of a nanosecond. However, such a system may not be practical for consumer-grade products because of its complex setup requirements.
Therefore, further innovation is desired. Certain aspects of the present disclosure support mechanisms that allow a system to overcome the limitations of previous approaches, enabling products with the characteristics required of consumer-grade products.

Ranging Mechanisms

In one aspect of the present disclosure, one node may generate range information associated with another node based on the round-trip time of a signal rather than its arrival time. This removes the clock difference between the two nodes from the range estimate and removes the requirement to synchronize the nodes, which can dramatically simplify setup. Furthermore, this method makes all nodes essentially identical with regard to synchronization, as there is no longer a notion of "synchronized" versus "unsynchronized" nodes.

The method may determine the range between any two nodes, including between different wearable nodes. A stationary node (e.g., the estimator 102 of FIG. 1) may combine these ranges with inertial sensor data (i.e., measurements obtained by inertial sensors that may be worn on a body associated with the BAN) and with the kinematic constraints provided by a model of the human body to estimate the pose and/or movement of the body on which the wearable nodes are mounted. Whereas previous systems only performed ranging from body nodes to fixed nodes, removing the time synchronization requirement allows ranging between any two nodes. With this additional range data available, and with direct sensing of the relative positions of body parts, these additional ranges can be very valuable in a motion tracking estimator. Ranges between nodes on different bodies are also useful in determining the relative position and pose between those bodies.

With high-precision round-trip-time ranging, and with ranges between both on-body and off-body nodes, the number and quality of the inertial sensors may be reduced.
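The round-trip-time idea above can be pictured with a minimal sketch. This is an illustrative model, not the disclosed implementation: the initiating node timestamps transmission and reception on its own clock, the responder replies after a known turnaround delay, and the range falls out of the round trip without either clock being synchronized. All names and values are hypothetical.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_round_trip(t_tx: float, t_rx: float, turnaround: float) -> float:
    """Estimate the distance between two nodes from round-trip timing.

    t_tx and t_rx are timestamps taken on the *same* (initiator) clock, so
    any clock offset between the two nodes cancels; only the responder's
    known turnaround delay must be subtracted before halving.
    """
    time_of_flight = (t_rx - t_tx - turnaround) / 2.0
    return time_of_flight * SPEED_OF_LIGHT

# Example: two nodes 3 m apart; one-way flight time is about 10 ns.
tof = 3.0 / SPEED_OF_LIGHT
turnaround = 200e-9                     # responder's fixed reply delay
t_tx = 1.0                              # initiator clock, arbitrary origin
t_rx = t_tx + 2 * tof + turnaround      # echo arrives back at the initiator
print(range_from_round_trip(t_tx, t_rx, turnaround))  # ~3.0 (meters)
```

Because only differences of the initiator's own timestamps appear, the arbitrary clock origin (here 1.0) has no effect, which is exactly why the synchronization requirement disappears.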
Reducing the number of nodes can significantly simplify usage, and relaxing the required accuracy of the inertial sensors can reduce cost. All of these improvements are desirable in producing a system suitable for consumer products.

Returning to FIG. 1, two players 108, 110 may participate in a mobile game. Each player may wear nodes that can range to nodes on the same player or on the other player. The stationary ground node 102 may be configured as an estimator to capture player motion based at least in part on information generated by the ranging.

FIG. 3 shows example operations 300 that may be performed at a body-worn node of a body area network (BAN) (e.g., at any of the wearable nodes 104 of FIG. 1) for ranging with another body-worn node of the BAN, in accordance with certain aspects of the present disclosure. At 302, the wearable node may perform ranging with the other wearable node using ultra-wide band (UWB) wireless technology, wherein the ranging comprises communicating a signal with the other wearable node and the signal may conform to the UWB radio technology.

In one aspect, the node and the other node may be worn on the same body of the BAN. In another aspect, the other node may be worn on another body of the BAN. In general, a BAN may comprise one or more body-worn devices (nodes), and each node of the BAN may communicate with one or more other nodes of the BAN.

In aspects of the present disclosure, the other wearable node may comprise a wearable personal computer (PC). Additionally, the wearable node may be configured to transmit the information generated by the ranging to a stationary device of the BAN. In one aspect, the information may be transmitted at a throughput of about 5.5 Mbps in accordance with UWB technology. Further, at least one pulse associated with at least one of the signals may have at least one of a fractional bandwidth of at least approximately 20%, or a bandwidth of at least approximately 500 MHz.

FIG. 3A illustrates a wearable node of a BAN (e.g., a wearable node of FIG.
1) for ranging with another wearable node (e.g., node 104) of the BAN, in accordance with certain aspects of the present disclosure, and shows example operations 300A that may be performed at any of the nodes 104. At 302A, a first circuit (e.g., the processor 204) of a wearable node 104 may perform ranging with another wearable node 104 using UWB wireless technology, wherein the ranging may comprise communicating a signal with the other device and the signal may conform to the UWB radio technology. In one aspect, the processor 204 may perform the ranging with the other body-worn node 104 based on the round-trip times of signals exchanged between the node and the other node.

FIG. 4 shows example operations 400 that may be performed at a stationary node (e.g., at the stationary node 102 of FIG. 1) that utilizes information generated by ranging between one or more pairs of body-worn nodes, in accordance with certain aspects of the present disclosure. At 402, the stationary node (i.e., the estimator) may receive ranging information generated by ranging performed between one or more pairs of nodes of a BAN worn on at least one body. At 404, the estimator may estimate movement of the at least one body based on the ranging information.

In one aspect, the estimator may comprise a mobile device, which may comprise a mobile phone. In one aspect of the present disclosure, the estimator may combine the ranging information with at least one of data from one or more sensors associated with the at least one body, or constraints from at least one model of the body, to estimate the movement of the at least one body. The estimator may use this information to correct drift components of the sensors. Here, the one or more sensors may comprise at least one of one or more inertial sensors, one or more magnetic sensors, or one or more optical sensors, or a combination thereof.

FIG.
4A illustrates example operations 400A that may be performed at a stationary node (e.g., at the stationary node 102 of FIG. 1) that utilizes information generated by ranging between one or more pairs of body-worn nodes of a BAN (e.g., any of the body-worn nodes 104 of FIG. 1), in accordance with certain aspects of the present disclosure. At 402A, a receiver (e.g., the receiver 212) of the stationary node 102 may receive ranging information generated by ranging performed between one or more pairs of nodes 104 worn on at least one body of the BAN. At 404A, a first circuit (e.g., the processor 204) of the stationary node 102 may be configured to estimate movement of the at least one body based on the ranging information. In one aspect, the processor 204 may correct at least one drift component of one or more sensors associated with the at least one body based on the ranging information. Additionally, in some aspects, a second circuit (e.g., the processor 204) of the stationary node 102 may determine the relative position between two bodies of the BAN based on the estimated motion. Further, a third circuit (e.g., the processor 204) of the stationary node 102 may determine a pose of the at least one body based on the estimated motion.

Common Radio for Ranging and Data Communication

Any system with wearable nodes may need a communication network to carry control commands to the nodes and to carry measurements from the nodes; as mentioned above, the BAN may serve this part of system operation. In one aspect of the present disclosure, the same radio in a wearable node may be configured for both range sensing and data communication in the BAN. This integrated approach can reduce cost and reduce the complexity of the final product.

FIG. 5 shows example operations 500 that may be performed at a wearable node (e.g., at any of the wearable nodes 104 of FIG.
1) comprising a common radio for both ranging and data communication, in accordance with certain aspects of the present disclosure. At 502, the radio circuit of the wearable node may be configured to perform data communication in the BAN associated with the body and to perform ranging with another node in the BAN.

In one aspect of the present disclosure, the wireless circuitry may also be configured to transmit the information generated by the ranging. For example, the information may be transmitted at a throughput of about 5.5 Mbps in accordance with UWB technology. In one aspect, the BAN may comprise at least one of, or a combination of, one or more toy weapons, one or more skateboards, one or more rackets, or one or more baseball bats.

FIG. 5A shows example operations 500A that may be performed at a wearable node (e.g., at any of the wearable nodes 104 of FIG. 1) comprising a common radio for both ranging and data communication, in accordance with certain aspects of the present disclosure. At 502A, the wireless circuitry (e.g., the transceiver 214) of the wearable node 104 may be configured to perform data communication in the BAN associated with the body and to perform ranging with another node in the BAN (e.g., any other wearable node 104). In one aspect, the transceiver 214 may transmit the information generated by the ranging, for example at a throughput of about 5.5 Mbps. Additionally, in some aspects, a second circuit (e.g., the processor 204) of the wearable node 104 may generate information based on the ranging, where the information may be used to track the movement of the body.

Processing of Asynchronous Range Measurements

According to certain aspects, a system may attempt to create range measurements that are very close to each other in time so that the ranges can be processed simultaneously by the estimator.
However, for the system setup described above, where ranges may be generated on a best-effort basis and without exact synchronization of the measurement timestamps, the estimator may need to be able to incorporate range measurements at any time. In the present disclosure, an approach is proposed that enables the use of asynchronously collected range information in a body motion estimator. The estimator may do this by weighting each estimate update according to the geometry of the collected range and the body motion estimate prior to the update. Thus, although a single range may not be sufficient to determine every estimated dimension, ranges collected from different node pairs over time can provide sufficient observability of all dimensions.

Although asynchronous ranges may be processed by the system, a range with a given timestamp may need that timestamp to be accurate on a global system time base. This may be necessary so that the final body motion estimator can accurately incorporate time-stamped measurements. The system proposed in the present disclosure may use control mechanisms that allow each node to synchronize to the global system time. This may be achieved by sending a data packet with the time information embedded in the packet. Note that the time accuracy requirement may be loose enough to be achieved simply by data transmission, as opposed to the time accuracy required for time-of-arrival (TOA) or time-difference-of-arrival (TDOA) ranging. It should also be noted that the global time base described in the present disclosure may have other applications, for example in the scheduling of range measurements as described in the detailed description below.

FIG. 6 illustrates example operations 600 that may be performed at a fixed node (e.g., at the stationary node 102 of FIG. 1) for processing asynchronous range measurements, in accordance with certain aspects of the present disclosure.
At 602, the fixed node may asynchronously collect ranging information generated by ranging performed between multiple pairs of devices in the BAN associated with the at least one body. At 604, the fixed node may update the at least one body motion estimate utilizing the asynchronously collected ranging information.

FIG. 6A illustrates example operations 600A that may be implemented at a fixed node (eg, at stationary node 102 of FIG. 1) for processing asynchronous range measurements, in accordance with certain aspects of the present disclosure. At 602A, a first circuit (eg, signal detector 218) of fixed node 102 may asynchronously collect ranging information generated by ranging performed between multiple pairs of devices (eg, nodes 104) in the BAN associated with at least one body. At 604A, a second circuit (eg, processor 204) of fixed node 102 may update the at least one body motion estimate utilizing the asynchronously collected ranging information. In one aspect, processor 204 may also be configured to update the motion estimate according to the timestamps of the ranging information. Additionally, in some aspects, a transmitter (eg, transmitter 210) of fixed node 102 may be configured to send one or more packets to each of the nodes 104, with information related to a global system time embedded in the packets, in order to synchronize the nodes 104 to the global system time. Here, each item of ranging information generated by ranging between a pair of nodes 104 may comprise a timestamp associated with the global system time. Additionally, each item of ranging information may comprise a timestamp indicating the time at which the ranging information was generated. In one aspect, signal detector 218 may collect ranging information asynchronously based at least in part on the timestamps.
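To make the idea concrete, the following toy sketch (the class name, the scalar state, and the exponential staleness weighting are all illustrative assumptions, not the disclosed estimator) shows how timestamped ranges arriving asynchronously might be folded into a running estimate, with older measurements down-weighted:

```python
import math

class AsyncRangeEstimator:
    """Toy 1-D estimator that folds in timestamped range measurements
    as they arrive (hypothetical sketch; not the disclosed algorithm)."""

    def __init__(self, initial_estimate=0.0, time_constant=1.0):
        self.estimate = initial_estimate
        self.time_constant = time_constant  # controls down-weighting of stale data

    def update(self, measured_range, measurement_time, current_time):
        # Down-weight a measurement according to how stale it is,
        # using the embedded global-system timestamp.
        age = max(0.0, current_time - measurement_time)
        weight = math.exp(-age / self.time_constant)
        self.estimate += weight * (measured_range - self.estimate)
        return self.estimate
```

A fresh measurement is applied at full weight, while a measurement whose timestamp is far in the past barely moves the estimate; a real implementation would additionally weight by the geometry of the node pair, as described above.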
In addition, a third circuit (eg, processor 204) of fixed node 102 may weight the update of the motion estimate according to the prior estimate of the at least one body motion and the geometry of the asynchronously collected ranging information.

Range Scheduling

As mentioned above, the proposed system may enable ranging between any two nodes in the system. However, due to practical constraints, any one device (node) may be able to participate in ranging with only one other device (node) at a time (this is one example of a constraint on ranging; other types of constraints may exist). As such, some scheduling of ranging attempts may be required. In addition, some intelligence may be needed in selecting the ranges, as the pairs of nodes from which a range is available can change over time due to line-of-sight occlusions from parts of the body. According to a particular aspect of the present disclosure, the system may maintain a prioritized list of node pairs based on various factors, such as at least one of: the estimated magnitude of the output error, the time elapsed since the last range (eg, using motion detection to trigger range measurement(s)), estimates of sensor error magnitude, the previous body pose, the current pose and predicted future body pose/motion, previous range measurement values, the probability of occlusion, control or minimization of power consumption, or inertial sensor measurement values.

In one aspect of the present disclosure, information about the prioritized list of node pairs may be utilized to form control commands sent to nodes in the system, which may help each node determine when to perform ranging and with which other nodes.

FIG. 7 illustrates example operations 700 that may be implemented at a fixed node (eg, at stationary node 102 of FIG. 1) for range scheduling, in accordance with certain aspects of the present disclosure.
At 702, the fixed node may schedule ranging between pairs of nodes mounted on the same or different bodies of the BAN according to the scheduling priority of each pair. In one aspect of the disclosure, the scheduling priority for a particular pair of devices (ie, for a pair of nodes) may be based on at least one of: the time elapsed since the last ranging, the magnitude of the estimated output error associated with the pair, the magnitude of the estimated sensor error associated with the pair, the current pose of the at least one body fitted with the nodes, the previous pose of the at least one body, the predicted future pose of the at least one body, one or more values of previous range measurements for the pair, the probability of occlusion between the nodes of the pair, the power consumption associated with the pair of nodes, or one or more values of inertial sensor measurements associated with the BAN.

FIG. 7A illustrates example operations 700A that may be implemented at a fixed node (eg, at stationary node 102 of FIG. 1) for range scheduling, in accordance with certain aspects of the present disclosure. At 702A, a first circuit (eg, processor 204) of fixed node 102 may schedule ranging performed between pairs of devices (eg, nodes 104) mounted on the same or different bodies of the BAN, according to the scheduling priority of each pair. In one aspect, processor 204 may dynamically reschedule ranging between pairs of nodes 104 based on at least one of an estimated position of a body or an estimated relative position between bodies. In another aspect, processor 204 may reschedule ranging based on one or more estimated motions of nodes 104. In yet another aspect, if one or more pairs of nodes 104 are utilized to estimate a particular state of the body, processor 204 may change the rate of ranging between one or more of those pairs of nodes 104.
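As a minimal sketch of how such a per-pair priority might be computed and used to order ranging attempts (the linear scoring form, the factor names, and the weights are assumptions for illustration only, not the disclosed scheduler):

```python
def pair_priority(elapsed_since_last_range, output_error, sensor_error,
                  occlusion_probability, power_cost,
                  weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Combine several of the factors listed above into one scheduling
    score; a higher score means the pair should be ranged sooner."""
    w_t, w_o, w_s, w_p, w_e = weights
    return (w_t * elapsed_since_last_range
            + w_o * output_error
            + w_s * sensor_error
            - w_p * occlusion_probability  # likely-occluded pairs score lower
            - w_e * power_cost)            # expensive pairs score lower

def schedule(scored_pairs):
    """Order (score, pair) tuples by descending priority."""
    return [pair for _, pair in
            sorted(scored_pairs, key=lambda sp: sp[0], reverse=True)]
```

The prioritized list produced by `schedule` would then be translated into the control commands described below.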
Additionally, in some aspects, a transmitter (eg, transmitter 210) of fixed node 102 may transmit control commands to a particular pair of nodes 104, along with information about the scheduling priorities. Furthermore, if one or more estimated measurement errors associated with the BAN are greater than or equal to one or more threshold values, a second circuit (eg, processor 204) of fixed node 102 may initiate ranging.

Method of Calibration

According to certain aspects of the present disclosure, calibration techniques may be required for mobile and consumer body-movement tracking products. As such, in order to maintain ease of use, the calibration requirements may need to be simple (ie, invisible to the user). The system proposed in the present disclosure may allow for concise and accurate calibration due to the availability of ranges between multiple nodes. The types of parameters to be calibrated may comprise the position and orientation of nodes on the body, and body parameters such as bone length, human height, and inertial sensor offset and bias.

Some of these parameters may also be estimated during active motion tracking, making the determination of these parameters almost invisible to the user. For example, the position of a node on the user's arm may be considered static. Then, during estimation of the body pose, the estimator may determine the node location that most closely matches the measured data. This on-line calibration may be possible because the relative range measurements may not drift with time.

FIG. 8 illustrates example operations 800 that may be implemented at a fixed node (eg, at stationary node 102 of FIG. 1) for calibration of parameters, in accordance with certain aspects of the present disclosure. At 802, the fixed node may receive information about ranging between a pair of devices mounted on the same body or on different bodies.
At 804, the fixed node may use this information to calibrate one or more parameters associated with the body model used to track body movement.

FIG. 8A illustrates example operations 800A that may be implemented at a fixed node (eg, at stationary node 102 of FIG. 1) for calibration of parameters, in accordance with certain aspects of the present disclosure. At 802A, a receiver (eg, receiver 212) of fixed node 102 may receive information about ranging between a pair of devices (eg, nodes 104) mounted on the same body or on different bodies. At 804A, circuitry of fixed node 102 (eg, processor 204) may use this information to calibrate one or more parameters associated with the model of the body used to track body movement. In one aspect, processor 204 may estimate at least one of the parameters while tracking body movement according to this model.

Integration with Mobile Devices

Users of mobile body tracking are likely to also have mobile devices. Mobile devices (eg, smart phones) may provide gateways to game content, social networking of activity progress or results, high quality screens for feedback, and even extremely high performance processors. The BAN system of the present disclosure may be integrated with one or more mobile devices to take advantage of the features listed above, in addition to incorporating input from sensors directly into the mobile device. For example, most mobile devices may comprise at least one of one or more inertial sensors, magnetic detectors, proximity devices, microphones, or cameras, and so on.

If the mobile device is in a static position during motion capture, it may provide a stationary node location for ranging. If it is on the body, it can be used as a wearable node. If the mobile device comprises a camera, the camera can be directed at the user and can provide additional input to the body motion estimation algorithm by identifying body features and tracking their movements.
If the mobile device is body worn, a camera may also be used to track features and contribute to body motion estimation. As mentioned above, the system proposed in the present disclosure may also take advantage of the computing capabilities of the processor on the mobile device for some of the most intensive data processing, such as the final fusion of all sensor information in the body pose estimator.

FIG. 9 illustrates example operations 900 that may be implemented on a mobile device incorporated into a BAN, in accordance with certain aspects of the present disclosure. At 902, the mobile device may communicate with at least one device worn on a body of the BAN to obtain information associated with the body. At 904, the mobile device may utilize the information to estimate body movement.

According to certain aspects of the present disclosure, the mobile device may be worn on the body (eg, any of the wearable nodes 104 of FIG. 1 may represent the mobile device). In one aspect of the present disclosure, the mobile device may comprise a mobile phone. In another aspect, the mobile device may comprise a PlayStation Portable (PSP) smartphone. In yet another aspect, the mobile device may comprise a dual-screen (DS) smartphone.

FIG. 9A illustrates example operations 900A that may be performed at a mobile device incorporated into a BAN (eg, at either the estimator 102 or a wearable node 104 of FIG. 1), in accordance with certain aspects of the present disclosure. At 902A, a first circuit of the mobile device (eg, transceiver 214) may communicate with at least one node 104 mounted on a body of the BAN to obtain information associated with the body. At 904A, a second circuit of the mobile device (eg, processor 204) may utilize the information to estimate body movement.
Additionally, in some aspects, a receiver of the mobile device (eg, receiver 212) may receive one or more signals from one or more sensors associated with the at least one node 104, and processor 204 may utilize the one or more signals to estimate body movement. In one aspect, transceiver 214 may provide a stationary location for ranging of the at least one node 104 if the mobile device is stationary while capturing body movement. Additionally, a third circuit of the mobile device (eg, processor 204) may perform a final fusion of information acquired by sensors associated with the body to estimate the pose of the body.

Ranging Augmentation of Gesture Recognition

An area related to the system described above is a system that uses a pattern matching algorithm to determine whether a movement (gesture) belongs to one of a fixed number of predefined classes. This technique is sometimes called gesture recognition. A gesture recognition system may view sensor data as an input to a matching algorithm. These systems may often utilize machine learning algorithms to tune the matching algorithm (or "classifier") based on trials in which many participants perform the motion classes.

The success of these systems may depend, in part, on the sensors available as input during the training and matching phases. The use of relative range information from nodes on the body and nodes not on the body can be useful, mostly for the same reasons as in the full motion capture example described above. For example, relative range sensors may provide drift-free motion information not available with inertial sensors alone.
This allows the gesture recognition system to be more accurate in classification, and also allows more classes to be defined that were previously indistinguishable with prior sensing methods. Furthermore, the system of the present disclosure enables alternative classification strategies, in which the classifier may receive processed sensor data, either in the form of a full motion estimate or of some partial processing such as node orientation. Because this input can be formed from multiple sensors with complementary performance characteristics, the performance of the classifier can be improved.

FIG. 10 illustrates example operations 1000 that may be implemented at a fixed node (eg, at stationary node 102 of FIG. 1) for gesture recognition based on ranging information, in accordance with certain aspects of the present disclosure. At 1002, the fixed node may collect ranging information generated by ranging performed between one or more pairs of devices in the BAN associated with the body (eg, between one or more pairs of body-worn nodes). At 1004, the fixed node may utilize the ranging information to determine whether a body movement corresponds to a recognizable gesture. In one aspect, the recognizable gestures may belong to a predetermined set of gestures.

FIG. 10A illustrates example operations 1000A that may be implemented at a fixed node (eg, at stationary node 102 of FIG. 1) for gesture recognition based on ranging information, in accordance with certain aspects of the present disclosure. At 1002A, a first circuit (eg, transceiver 214) of fixed node 102 may collect ranging information generated by ranging performed between one or more pairs of devices (eg, wearable nodes 104) in the BAN associated with the body. At 1004A, a second circuit (eg, processor 204) of fixed node 102 may utilize the ranging information to determine whether a body movement corresponds to a recognizable gesture.
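As an illustration only of how such a determination could work (nearest-template matching over a fixed-length trace of inter-node ranges; the feature choice, distance measure, and threshold are assumptions, not the disclosed classifier):

```python
def classify_gesture(range_trace, templates, max_distance=1.0):
    """Match a recorded trace of inter-node ranges against per-gesture
    templates; return the best gesture name, or None if nothing is close."""
    best_name, best_dist = None, float("inf")
    for name, template in templates.items():
        # Sum-of-squares distance between trace samples and template samples.
        dist = sum((a - b) ** 2 for a, b in zip(range_trace, template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None
```

A trained system would replace the hand-built templates with a classifier tuned by machine learning on participant trials, as described above.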
In one aspect, processor 204 may use a pattern matching algorithm to determine whether the motion corresponds to a recognizable gesture. In another aspect, processor 204 may combine the ranging information with information obtained by one or more inertial sensors of the BAN to determine whether the motion corresponds to a recognizable gesture.

Optimization of Ranging-Based Motion Capture System Performance and Power Consumption

Motion capture systems based on inertial or optical sensors have many well-documented issues. Examples include drift errors in measurements from sensors that generate cumulative errors in position estimation, and loss of data due to optical sensor occlusion. Augmentation of motion capture using ranging can eliminate many of these issues by providing a position reference that does not rely on dead reckoning. This approach may also enable a user-friendly method of recalibrating estimates from inertial or optical sensors.

Even with ranging-based motion capture or estimation, some issues may remain, such as the need to minimize power consumption to improve the battery life of the nodes performing ranging. Furthermore, occlusions can still occur when the nodes are oriented such that the line of sight between them diminishes or disappears.

The present disclosure proposes several ideas to optimize the power consumption and performance of a ranging-based motion capture system. Such a system can benefit most from an array of mesh-networked nodes in which any node can potentially range with any other node in the network. Methods are described to exploit this network architecture in a collaborative framework, where the network as a whole may be self-aware in order to reduce power consumption and improve performance.

The general setup for all of the ideas below may assume nodes with one or more motion sensors (inertial, optical, magnetic, etc.) and a network of nodes, or scheduling nodes, that make central decisions.
Ranging may be possible between any pair of nodes, and the scheduling node may have access to all range measurements.

Activity-Based Ranging Adaptation

Certain aspects of the present disclosure support performing ranging between nodes only when appropriate. In a typical motion capture scenario, the ultimate goal may be to enable tracking of the motion of the various nodes that are part of the system. However, not all nodes may be moving significantly at the same time.

In the present disclosure, a method is proposed to schedule ranging for a particular node based on a determination of its motion. For example, consider a node N having a non-ranging sensor A that consumes less power than ranging at the same sampling rate. In one example, sensor A can be continuously in the "on" state and the node can locally determine whether it is in motion. Alternatively, measurements from sensor A can be analyzed by the scheduler to determine whether the node is in motion. In one aspect, motion may be defined as one or more components of the measurement from sensor A exceeding a predefined set of thresholds.

The scheduler may initiate the ranging process for node N only when it is determined that node N is in motion, and may stop the ranging process for node N if it is classified as stationary. This can be generalized beyond the simple classification of nodes as stationary or mobile; a mobile node may not have enough motion to warrant ranging measurements. By scheduling ranging for nodes on demand, the system may be more power efficient and may have longer battery life.

Joint Ranging Based on Model-Based Estimation

Certain aspects of the present disclosure support joint ranging to optimize power and performance. Motion capture may rely on estimating relative motion between one or more sets of body-worn nodes being monitored.
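Returning to the activity-based adaptation above, a minimal sketch of the motion-gated scheduling is shown below (the class shape, sample format, and threshold values are illustrative assumptions):

```python
def is_in_motion(sensor_sample, thresholds):
    """Declare motion if any component of sensor A's measurement exceeds
    its predefined threshold, per the definition above."""
    return any(abs(value) > limit
               for value, limit in zip(sensor_sample, thresholds))

class ActivityScheduler:
    """Toy scheduler that starts/stops ranging for a node based on the
    motion classification from its low-power, non-ranging sensor."""

    def __init__(self):
        self.ranging_enabled = {}

    def observe(self, node_id, sensor_sample, thresholds):
        # Range only while the low-power sensor says the node is moving.
        self.ranging_enabled[node_id] = is_in_motion(sensor_sample, thresholds)
        return self.ranging_enabled[node_id]
```

A stationary node thus consumes only sensor A's power budget until motion is detected and ranging is re-enabled.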
These estimates are conditioned on a body model that determines the set of possible body motion states. Thus, based on the current state of the body, it is very likely that certain other states can be excluded as possibilities within a given time window. Nodes whose positions would only serve to determine the set of excluded body states may therefore not need to be located. In such situations, the scheduler may predict, based on the current state, the subset of nodes whose positions it needs to obtain, and stop ranging to the nodes that it does not need. This approach may save power for the nodes that are not needed.

This same technique can be used to enhance estimation accuracy. If the scheduler decides that particular nodes are needed to estimate a particular state, it can increase the rate of measurements for those nodes. This helps obtain a better estimate of the position of the nodes that may be important in determining the body state.

Adaptation of Ranging Based on Drift or Other Errors

In one aspect of the present disclosure, the scheduler may also control the ranging rate based on a determination or estimation of drift or other types of measurement errors. In certain scenarios, using non-ranging methods may be sufficient to determine the body state until errors creep in. The scheduler may keep track of such errors and may initiate ranging, or increase the rate of ranging, if it determines that the errors are greater than or equal to one or more threshold values. On the other hand, the scheduler may reduce the rate of ranging, or terminate ranging, if one or more estimated measurement errors associated with the BAN fall below one or more threshold values.

Improving Ranging Measurement During Occlusion

Even with ranging, it is extremely likely that occlusion and non-line-of-sight conditions will occur.
In order to solve this problem, two possible methods are proposed in the present disclosure: prediction of occlusion using a body model, and use of surrogate measurements from other nodes.

In the first method, a body model can be used to predict when nodes will be occluded. The scheduler may then perform several actions. In one aspect of the present disclosure, the scheduler may proactively turn off ranging to the affected nodes to conserve power. In another aspect, it may update its estimation algorithm more quickly to take into account the fact that measurements from the affected nodes are not available. In yet another aspect, the scheduler may turn on ranging, or increase the rate of ranging, for one or more other nodes that are not occluded according to the prediction, in order to compensate for the occluded nodes.

In the second method, proxy measurements from other nodes may be utilized. In situations where node A (a first device) is occluded from the scheduler S (a second device), but another node B (a third device) may be positioned so that ranging between A-B and B-S is still possible, node A may perform ranging with node B to generate range value R_AB. At the same time, node B may perform ranging with scheduler S to generate range value R_BS. The scheduler may then estimate the value of R_AS using the two range values R_AB and R_BS. Note that, due to the occlusion between node A and scheduler S, R_AS is not directly measurable.

The various acts of the above-described methods may be implemented by any suitable means capable of carrying out the corresponding functions. These means may include various hardware and/or software components and/or modules, including, but not limited to, circuits, application specific integrated circuits (ASICs), or processors. In general, where there are operations shown in the figures, those operations may have corresponding means-plus-function components with similar reference numbers.
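The proxy-measurement step described above (estimating R_AS from R_AB and R_BS) can be sketched as follows; absent knowledge of the A-B-S angle, the triangle inequality only bounds R_AS, and the midpoint of the bounds is used here as a crude point estimate (an assumption for illustration, not the disclosed estimator):

```python
def estimate_proxy_range(r_ab, r_bs):
    """Bound and crudely estimate the unmeasurable range R_AS from the
    two measurable ranges R_AB and R_BS via the triangle inequality."""
    lower = abs(r_ab - r_bs)   # collinear case: A and S on the same side of B
    upper = r_ab + r_bs        # collinear case: B lies between A and S
    return lower, upper, (lower + upper) / 2.0
```

A fuller implementation could narrow the bounds by folding in the body model's estimate of the A-B-S geometry.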
For example, operations 300, 400, 500, 600, 700, 800, 900, and 1000 shown in FIGS. 3, 4, 5, 6, 7, 8, 9, and 10 correspond to components 300A, 400A, 500A, 600A, 700A, 800A, 900A, and 1000A shown in FIGS. 3A, 4A, 5A, 6A, 7A, 8A, 9A, and 10A.

For example, the means for performing ranging may comprise an application specific integrated circuit, such as, for example, the processor 204 of the wireless device 202 of FIG. 2. Means for transmitting may comprise, for example, a transmitter such as transmitter 210 of wireless device 202. Means for performing data communication and ranging may comprise, for example, a transceiver such as transceiver 214 of wireless device 202. Means for receiving may comprise, for example, a receiver such as receiver 212 of wireless device 202. The means for utilizing may comprise, for example, an application specific integrated circuit such as processor 204. The means for combining may comprise, for example, an application specific integrated circuit such as processor 204. The means for determining may comprise, for example, an application specific integrated circuit such as processor 204. The means for generating may comprise, for example, an application specific integrated circuit such as processor 204. The means for asynchronously collecting may comprise, for example, an application specific integrated circuit such as the signal detector 218 of the wireless device 202. The means for scheduling may comprise, for example, an application specific integrated circuit such as processor 204. The means for dynamically rescheduling may comprise, for example, an application specific integrated circuit such as processor 204. The means for modifying may comprise, for example, an application specific integrated circuit such as processor 204. The means for predicting may comprise, for example, an application specific integrated circuit such as processor 204.
The means for turning off may comprise, for example, an application specific integrated circuit such as processor 204. The means for calibrating may comprise, for example, an application specific integrated circuit such as processor 204. The means for estimating may comprise, for example, an application specific integrated circuit such as processor 204. Means for communicating may comprise, for example, a transceiver such as transceiver 214 of wireless device 202. The means for using may comprise, for example, an application specific integrated circuit such as processor 204.

As used herein, the term "determining" encompasses a broad range of operations. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (eg, looking up in a table, a database, or another data structure), ascertaining, and the like. Also, "determining" may include receiving (eg, receiving information), accessing (eg, accessing data in a memory), and the like. Also, "determining" may include resolving, selecting, choosing, establishing, and the like.

As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c.

The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine.
A processor may also be implemented as a combination of computing devices, eg, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, and so forth. A software module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on a computer-readable medium.
Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc®, optical disc, digital versatile disc (DVD), floppy disk®, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects a computer-readable medium may comprise a non-transitory computer-readable medium (eg, tangible media). In addition, in other aspects a computer-readable medium may comprise a transitory computer-readable medium (eg, a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein.
For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material.

Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium.

Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (eg, RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

It should be understood that the claims are not limited to the precise configuration and components illustrated above.
Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims.

A wireless device (wireless node) of the present disclosure may include various components that perform functions based on signals that are transmitted by or received at the wireless device. A wireless device may also refer to a wearable wireless device. In some aspects, the wearable wireless device may comprise a wireless headset or a wireless watch. For example, a wireless headset may include a transducer adapted to provide audio output based on data received via a receiver. A wireless watch may include a user interface adapted to provide an indication based on data received via a receiver. A wireless sensing device may include a sensor adapted to provide data to be transmitted via a transmitter.

A wireless device may communicate via one or more wireless communication links that are based on or otherwise support any suitable wireless communication technology. For example, in some aspects a wireless device may associate with a network. In some aspects, the network may comprise a personal area network (eg, supporting a wireless coverage area on the order of 30 meters) or a body area network (eg, supporting a wireless coverage area on the order of 10 meters) implemented using ultra-wideband technology or some other suitable technology. In some aspects, the network may comprise a local area network or a wide area network. A wireless device may support or otherwise use one or more of a variety of wireless communication technologies, protocols, or standards such as, for example, CDMA, TDMA, OFDM, OFDMA, WiMAX, and Wi-Fi. Similarly, a wireless device may support or otherwise use one or more of a variety of corresponding modulation or multiplexing schemes.
A wireless device may thus include appropriate components (e.g., air interfaces) to establish and communicate via one or more wireless communication links using the above or other wireless communication technologies. For example, a device may comprise a wireless transceiver with associated transmitter and receiver components (e.g., transmitter 210 and receiver 212) that may include various components (e.g., signal generators and signal processors) that facilitate communication over a wireless medium.

The teachings herein may be incorporated into (e.g., implemented within or performed by) a variety of apparatuses (e.g., devices). For example, one or more aspects taught herein may be incorporated into a phone (e.g., a cellular phone), a personal digital assistant ("PDA"), a so-called smart phone, an entertainment device (e.g., a portable media device, including music and video players), a headset (e.g., headphones, an earpiece, etc.), a microphone, a medical sensing device (e.g., a biometric sensor, a heart rate monitor, a pedometer, an EKG device, a smart bandage, etc.), a user I/O device (e.g., a watch, a remote control, a light switch, a keyboard, a mouse, etc.), an environment sensing device (e.g., a tire pressure monitor), a monitoring device that may receive data from the medical or environment sensing device (e.g., a desktop computer, a mobile computer, etc.), a point-of-care device, a hearing aid, a set-top box, or any other suitable device. A monitoring device may also have access to data from different sensing devices via a connection with a network.

These devices may have different power and data requirements. In some aspects, the teachings herein may be adapted for use in low-power applications (e.g., through the use of an impulse-based signaling scheme and low duty cycle modes) and may support a variety of data rates, including relatively high data rates (e.g., through the use of high-bandwidth pulses).

In some aspects, a wireless device may comprise an access device (e.g., an access point) for a communication system. Such an access device may provide, for example, connectivity to another network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Accordingly, the access device may enable another device (e.g., a wireless station) to access the other network or some other functionality. In addition, it should be appreciated that one or both of the devices may be portable or, in some cases, relatively non-portable. Also, it should be appreciated that a wireless device may be capable of transmitting and/or receiving information in a non-wireless manner (e.g., via a wired connection) via an appropriate communication interface.

While the foregoing is directed to aspects of the present disclosure, other and further aspects of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Semiconductor devices employing field-effect transistors (FETs) with multiple channel structures without shallow trench isolation (STI) void-induced electrical shorts are disclosed. In one aspect, a semiconductor device is provided that includes a substrate. The semiconductor device includes channel structures disposed over the substrate, the channel structures corresponding to a FET. An STI trench is formed between each corresponding pair of channel structures. Each STI trench includes a bottom region filled with a lower quality oxide and a top region filled with a higher quality oxide. The lower quality oxide is susceptible to void formation in the bottom region during particular fabrication steps of the semiconductor device. However, the higher quality oxide is not susceptible to void formation. Thus, the higher quality oxide does not include voids through which a gate may electrically couple to other active components, thereby preventing STI void-induced electrical shorts in the semiconductor device.
1. A semiconductor device, comprising:
a substrate;
a plurality of channel structures disposed over the substrate and corresponding to a field-effect transistor (FET); and
one or more shallow trench isolation (STI) trenches, each STI trench formed between a corresponding pair of the plurality of channel structures and comprising:
a bottom region filled with a lower quality oxide; and
a top region filled with a higher quality oxide.

2. The semiconductor device of claim 1, wherein the top region of each of the one or more STI trenches is filled with the higher quality oxide such that no voids are formed in the top region.

3. The semiconductor device of claim 2, wherein the lower quality oxide comprises a high aspect ratio oxide configured to fill regions having a ratio of height to width greater than ten-to-one (10:1).

4. The semiconductor device of claim 3, wherein the higher quality oxide comprises a low aspect ratio oxide configured to fill regions having a ratio of height to width less than ten-to-one (10:1).

5. The semiconductor device of claim 4, wherein the lower quality oxide is selected from the group consisting of: a spin-on dielectric (SOD) oxide; and a flowable chemical vapor deposition (FCVD) oxide.

6. The semiconductor device of claim 5, wherein the higher quality oxide comprises silicon oxide.

7. The semiconductor device of claim 2, further comprising:
a gate disposed over the plurality of channel structures and the top region of each of the one or more STI trenches;
a source disposed on a first side of the plurality of channel structures and the one or more STI trenches; and
a drain disposed on a second side of the plurality of channel structures and the one or more STI trenches opposite the first side;
wherein the top region of each of the one or more STI trenches electrically isolates the gate from the source and the drain.

8. The semiconductor device of claim 7, wherein:
each of the plurality of channel structures comprises a fin; and
the FET comprises a FinFET.

9. The semiconductor device of claim 7, wherein the FET comprises a nanowire FET.

10. The semiconductor device of claim 1, further comprising:
a plurality of channel structures disposed over the substrate and corresponding to a second FET;
one or more STI trenches, each STI trench formed between a corresponding pair of the plurality of channel structures corresponding to the second FET and comprising:
a bottom region filled with the lower quality oxide; and
a top region filled with the higher quality oxide; and
a deep STI trench formed between the FET and the second FET and configured to electrically isolate the FET from the second FET.

11. The semiconductor device of claim 10, wherein the deep STI trench is filled with the higher quality oxide.

12. The semiconductor device of claim 10, wherein the deep STI trench comprises:
a bottom region filled with the lower quality oxide; and
a top region filled with the higher quality oxide.

13. The semiconductor device of claim 1 integrated into an integrated circuit (IC).

14. The semiconductor device of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a cellular phone; a smart phone; a tablet computer; a phablet; a server; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; and a portable digital video player.

15. A semiconductor device, comprising:
a means for providing a substrate;
a means for providing a plurality of channel structures disposed over the substrate and corresponding to a field-effect transistor (FET); and
a means for providing one or more shallow trench isolation (STI) trenches, each STI trench formed between a corresponding pair of the plurality of channel structures and comprising:
a bottom region filled with a lower quality oxide; and
a top region filled with a higher quality oxide.

16. A method for fabricating a semiconductor device employing a field-effect transistor (FET) having a plurality of channel structures without shallow trench isolation (STI) void-induced electrical shorts, comprising:
providing a substrate comprising a plurality of channel structures disposed over the substrate and one or more STI trenches, each STI trench formed between a corresponding pair of the plurality of channel structures;
disposing a lower quality oxide in each STI trench;
etching the lower quality oxide in each STI trench to a top layer of a bottom region of each STI trench; and
disposing a higher quality oxide over the lower quality oxide in a top region of each STI trench, wherein the higher quality oxide fills voids formed in the lower quality oxide adjacent to the top layer of the bottom region.

17. The method of claim 16, further comprising forming a gate over the plurality of channel structures to form the FET corresponding to the plurality of channel structures.

18. The method of claim 16, further comprising:
disposing a hard mask over the plurality of channel structures such that an opening is formed over a first subset of the channel structures, the hard mask covering a second subset and a third subset of the channel structures on either side of the first subset; and
etching the first subset of the channel structures and the substrate to form a deep STI trench between the second subset and the third subset of the channel structures, wherein the second subset corresponds to a first FET and the third subset corresponds to a second FET.

19. The method of claim 18, further comprising disposing the higher quality oxide in the deep STI trench.

20. The method of claim 18, further comprising:
disposing the lower quality oxide in the deep STI trench;
etching the lower quality oxide in the deep STI trench to a top layer of a bottom region of the deep STI trench; and
disposing the higher quality oxide over the lower quality oxide in a top region of the deep STI trench.

21. The method of claim 16, further comprising annealing the lower quality oxide.

22. The method of claim 16, wherein disposing the lower quality oxide comprises disposing a high aspect ratio oxide configured to fill regions having a ratio of height to width greater than ten-to-one (10:1).

23. The method of claim 22, wherein disposing the higher quality oxide comprises disposing a low aspect ratio oxide configured to fill regions having a ratio of height to width less than ten-to-one (10:1).
SEMICONDUCTOR DEVICES EMPLOYING FIELD EFFECT TRANSISTORS (FETs) WITH MULTIPLE CHANNEL STRUCTURES WITHOUT SHALLOW TRENCH ISOLATION (STI) VOID-INDUCED ELECTRICAL SHORTS

Priority Application

This application claims priority to U.S. Patent Application Serial No. 15/266,214, filed on September 15, 2016 and entitled "SEMICONDUCTOR DEVICES EMPLOYING FIELD EFFECT TRANSISTORS (FETs) WITH MULTIPLE CHANNEL STRUCTURES WITHOUT SHALLOW TRENCH ISOLATION (STI) VOID-INDUCED ELECTRICAL SHORTS," the contents of which are incorporated herein by reference in their entirety.

Technical Field

The technology of the disclosure relates generally to semiconductor devices employing shallow trench isolation (STI), and particularly to avoiding STI void-induced electrical shorts in semiconductor devices.

Background

As the functionality of electronic devices becomes more complex, it is desirable to include a greater number of transistors in such devices. However, because electronic devices must be provided in increasingly smaller packages, such as in mobile devices, there is a need to provide a greater number of transistors in a smaller integrated circuit (IC) chip. This increase in the number of transistors is achieved in part through continuing efforts to miniaturize the transistors in ICs (i.e., placing increasingly more transistors into the same amount of space). In particular, node sizes in ICs are being scaled down by reducing the minimum metal line width in an IC (e.g., 65 nanometers (nm), 45 nm, 28 nm, 20 nm, etc.). As a result, the gate lengths of planar transistors are also scaled down, thereby reducing the channel lengths and interconnect lines of the planar transistors.
The reduced channel length in planar transistors has the benefit of increased drive strength (i.e., increased drain current) and smaller parasitic capacitance, resulting in reduced circuit delay. However, as the channel length of a planar transistor is reduced to the point that the channel length approaches the magnitude of the depletion layer width, short channel effects (SCEs) that degrade performance may occur. More specifically, SCEs in planar transistors cause increased current leakage, reduced threshold voltage, and/or threshold voltage roll-off (i.e., threshold voltage that decreases at shorter gate lengths).

In this regard, to address the need to reduce the channel length in planar transistors while avoiding or mitigating SCEs, alternative transistor designs have been developed. One such alternative transistor design includes a fin field-effect transistor (FET) (FinFET) that provides a conduction channel via a "fin" formed from a substrate, wherein a gate material wraps around the fin. For example, FIG. 1 illustrates an exemplary FinFET 100. The FinFET 100 includes a semiconductor substrate 102 and a fin 104 formed from the semiconductor substrate 102. An oxide layer 106 is included on either side of the fin 104. The FinFET 100 includes a source 108 and a drain 110 interconnected by the fin 104 such that an interior portion of the fin 104 serves as a channel 112 between the source 108 and the drain 110. The fin 104 is surrounded by a "wrap-around" gate 114. The wrap-around structure of the gate 114 provides better electrostatic control over the channel 112, and thus helps reduce leakage current and overcome other SCEs.

To achieve even greater electrostatic control of the channel, a FinFET can be designed to include multiple fins corresponding to a single gate. Each fin of such a FinFET is electrically isolated from adjacent fins using shallow trench isolation (STI) trenches filled with a non-conductive material, such as an oxide.
However, as fin pitch is reduced to reduce the area of a FinFET, the distance between each fin is also reduced. The reduced distance between each fin reduces the width of each STI trench, which increases the height-to-width ratio (i.e., aspect ratio) of each STI trench. Due to the nature of the oxide employed to fill the STI trenches, conventional fabrication steps, such as annealing the oxide, can result in voids forming within an STI trench. A void may form close enough to a gate employed by the FinFET that the conductive material used to form the gate fills the void, causing an electrical short between the source and the drain of the FinFET. Electrically shorting the drain and the source of the FinFET in this manner can cause the FinFET to generate erroneous output.

Summary

Aspects disclosed herein include semiconductor devices employing field-effect transistors (FETs) with multiple channel structures without shallow trench isolation (STI) void-induced electrical shorts. In one aspect, a semiconductor device is provided that includes a substrate. The semiconductor device further includes channel structures disposed over the substrate, the channel structures corresponding to a FET. Additionally, the semiconductor device includes an STI trench formed between each corresponding pair of channel structures. Each STI trench includes a bottom region filled with a lower quality oxide and a top region filled with a higher quality oxide. Although the lower quality oxide fills the bottom region of each STI trench, the lower quality oxide is prone to forming voids in the bottom region during particular fabrication steps (e.g., annealing) of the semiconductor device. In contrast, the higher quality oxide filling the top region of each STI trench is not prone to forming voids. In this regard, a gate disposed over the channel structures is also disposed over the top region, rather than the bottom region, of each STI trench.
However, because the higher quality oxide is less prone to forming voids, the higher quality oxide does not include voids through which the gate could electrically couple to other active components of the FET, such as the source and the drain. In this manner, filling the top region of each STI trench with the higher quality oxide prevents STI void-induced electrical shorts in the semiconductor device.

In this regard, in one aspect, a semiconductor device is provided. The semiconductor device includes a substrate. The semiconductor device also includes a plurality of channel structures disposed over the substrate and corresponding to a FET. The semiconductor device also includes one or more STI trenches. Each STI trench is formed between a corresponding pair of the plurality of channel structures and includes a bottom region filled with a lower quality oxide and a top region filled with a higher quality oxide.

In another aspect, a semiconductor device is provided. The semiconductor device includes a means for providing a substrate. The semiconductor device also includes a means for providing a plurality of channel structures disposed over the substrate and corresponding to a FET. The semiconductor device also includes a means for providing one or more STI trenches. Each STI trench is formed between a corresponding pair of the plurality of channel structures and includes a bottom region filled with a lower quality oxide and a top region filled with a higher quality oxide.

In another aspect, a method for fabricating a semiconductor device employing a FET having a plurality of channel structures without STI void-induced electrical shorts is provided. The method includes providing a substrate. The substrate includes one or more STI trenches and a plurality of channel structures disposed over the substrate. Each STI trench is formed between a corresponding pair of the plurality of channel structures. The method also includes disposing a lower quality oxide in each STI trench.
The method also includes etching the lower quality oxide in each STI trench to a top layer of a bottom region of each STI trench. The method also includes disposing a higher quality oxide over the lower quality oxide in a top region of each STI trench, wherein the higher quality oxide fills voids formed in the lower quality oxide adjacent to the top layer of the bottom region.

Brief Description of the Drawings

FIG. 1 is a perspective view of a conventional fin field-effect transistor (FET) (FinFET);

FIG. 2 is a cross-sectional view of an exemplary semiconductor device employing FinFETs with a shallow trench isolation (STI) void-induced electrical short;

FIG. 3 is a cross-sectional view of an exemplary semiconductor device employing FinFETs without STI void-induced electrical shorts;

FIG. 4 is a flowchart illustrating an exemplary process for fabricating the semiconductor device of FIG. 3 without STI void-induced electrical shorts;

FIGS. 5A-5F are cross-sectional views of the semiconductor device of FIG. 3 at each step of the fabrication process of FIG. 4;

FIG. 6 is a flowchart illustrating an exemplary process for fabricating a semiconductor device without STI void-induced electrical shorts, wherein deep STI trenches that electrically isolate multiple FinFETs are filled with lower and higher quality oxides;

FIGS. 7A-7C are cross-sectional views of the semiconductor device at each step of the fabrication process of FIG. 6;

FIG. 8 is a cross-sectional view of an exemplary semiconductor device employing nanowire FETs without STI void-induced electrical shorts; and

FIG. 9 is a block diagram of an exemplary processor-based system that can include the semiconductor devices of FIGS. 3, 7C, and 8.

Detailed Description

With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration."
Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

Aspects disclosed in the detailed description include semiconductor devices employing field-effect transistors (FETs) with multiple channel structures without shallow trench isolation (STI) void-induced electrical shorts. In one aspect, a semiconductor device is provided that includes a substrate. The semiconductor device further includes channel structures disposed over the substrate, the channel structures corresponding to a FET. Additionally, the semiconductor device includes an STI trench formed between each corresponding pair of channel structures. Each STI trench includes a bottom region filled with a lower quality oxide and a top region filled with a higher quality oxide. Although the lower quality oxide fills the bottom region of each STI trench, the lower quality oxide is prone to forming voids in the bottom region during particular fabrication steps (e.g., annealing) of the semiconductor device. In contrast, the higher quality oxide filling the top region of each STI trench is not prone to forming voids. In this regard, a gate disposed over the channel structures is also disposed over the top region, rather than the bottom region, of each STI trench. However, because the higher quality oxide is less prone to forming voids, the higher quality oxide does not include voids through which the gate could electrically couple to other active components of the FET, such as the source and the drain. In this manner, filling the top region of each STI trench with the higher quality oxide prevents STI void-induced electrical shorts in the semiconductor device.

Before discussing semiconductor devices employing FETs having multiple channel structures without STI void-induced electrical shorts beginning with FIG. 3, an exemplary conventional semiconductor device having an STI void-induced electrical short is first described.
In this regard, FIG. 2 illustrates a semiconductor device 200 that includes first and second FinFETs 202(1), 202(2). The first FinFET 202(1) employs three (3) fins 204(1)-204(3), and the second FinFET 202(2) employs three (3) fins 204(4)-204(6). The first FinFET 202(1) includes STI trenches 206(1), 206(2) that electrically isolate the fins 204(1), 204(2) from the fins 204(2), 204(3), respectively. The second FinFET 202(2) includes STI trenches 206(3), 206(4) that electrically isolate the fins 204(4), 204(5) from the fins 204(5), 204(6), respectively. However, as illustrated in FIG. 2, the STI trenches 206(1), 206(3) have respective voids 208(1), 208(2) formed in the oxide used to fill the STI trenches 206(1), 206(3). In particular, the void 208(1) is close enough to the gate 210(1) corresponding to the first FinFET 202(1) that the conductive material used to form the gate 210(1) fills the void 208(1), causing an electrical short between the source and the drain (not shown) of the first FinFET 202(1). Electrically shorting the drain and the source of the first FinFET 202(1) in this manner can cause the first FinFET 202(1) to generate erroneous output.

To prevent such STI void-induced electrical shorts, FIG. 3 illustrates a cross-sectional view of an exemplary semiconductor device 300 employing first and second FETs 302(1), 302(2) without STI void-induced electrical shorts. The semiconductor device 300 includes a substrate 304 on which the first and second FETs 302(1), 302(2) are formed. The first FET 302(1) employs corresponding channel structures 306(1)-306(3) disposed over the substrate 304. Additionally, the second FET 302(2) employs corresponding channel structures 306(4)-306(6) disposed over the substrate 304. In this example, the first and second FETs 302(1), 302(2) are employed as FinFETs, and are thus also referred to herein as first and second FinFETs 302(1), 302(2). In this manner, the channel structures 306(1)-306(6) are also referred to as fins 306(1)-306(6).
However, as discussed in greater detail below, various aspects may employ other types of FETs, such as nanowire FETs, that include alternative channel structures (e.g., lateral nanowires). Further, as discussed in greater detail below, a deep STI trench 308(1) is formed between the first and second FinFETs 302(1), 302(2) and is configured to electrically isolate the first and second FinFETs 302(1), 302(2). A deep STI trench 308(2) is also formed to electrically isolate the second FinFET 302(2) from other components in the semiconductor device 300.

With continuing reference to FIG. 3, the semiconductor device 300 further includes STI trenches 310(1)-310(4) formed between each corresponding pair of the channel structures 306(1)-306(6). In particular, with reference to the first FinFET 302(1), the STI trench 310(1) is formed between the channel structures 306(1), 306(2), and the STI trench 310(2) is formed between the channel structures 306(2), 306(3). Additionally, with reference to the second FinFET 302(2), the STI trench 310(3) is formed between the channel structures 306(4), 306(5), and the STI trench 310(4) is formed between the channel structures 306(5), 306(6). Each STI trench 310(1)-310(4) includes a bottom region 312(1)-312(4) filled with a lower quality oxide 314, and a top region 316(1)-316(4) filled with a higher quality oxide 318.

With continuing reference to FIG. 3, to fill the bottom regions 312(1)-312(4) of the corresponding STI trenches 310(1)-310(4), the lower quality oxide 314 includes a high aspect ratio oxide configured to fill regions having a height-to-width ratio (i.e., aspect ratio) greater than ten-to-one (10:1). As non-limiting examples, such a high aspect ratio oxide can include a spin-on dielectric (SOD) oxide or a flowable chemical vapor deposition (FCVD) oxide.
Thus, as the fin pitch P of the semiconductor device 300 is reduced and the aspect ratio of the STI trenches 310(1)-310(4) correspondingly increases, the lower quality oxide 314 can fill the bottom regions 312(1)-312(4) more easily than a low aspect ratio oxide could. As a non-limiting example, assuming that the semiconductor device 300 is fabricated in a ten (10) nanometer (nm) technology, each STI trench 310(1)-310(4) may be approximately twenty-five (25) nm wide, while each fin 306(1)-306(6) may be approximately ten (10) nm wide, such that the fin pitch P is approximately thirty-five (35) nm. Additionally, if each fin 306(1)-306(6) has a height of approximately 150 nm, the corresponding aspect ratio of each STI trench 310(1)-310(4) is approximately six-to-one (6:1) (i.e., 150 nm to 25 nm). In this manner, because the lower quality oxide 314 is configured to fill regions having an aspect ratio greater than ten-to-one (10:1), the lower quality oxide 314 can fill the STI trenches 310(1)-310(4) having the six-to-one (6:1) aspect ratio.

However, with continuing reference to FIG. 3, additives employed in the lower quality oxide 314, such as hydrogen or nitrogen, make the lower quality oxide 314 prone to forming voids during particular fabrication steps of the semiconductor device 300. For example, voids 320(1), 320(2) may form in the bottom regions 312(1), 312(3) in response to shrinkage of the lower quality oxide 314 attributable to annealing. As used herein, the voids 320(1), 320(2) are areas formed within the lower quality oxide 314 that are either a vacuum or filled with a gas. As a non-limiting example, the voids 320(1), 320(2) may have a diameter as small as two (2) nm, or as large as the width of the corresponding STI trench 310(1)-310(4).

In contrast, with continuing reference to FIG. 3, the higher quality oxide 318 filling the top regions 316(1)-316(4) of the corresponding STI trenches 310(1)-310(4) does not include such additives, and thus is not prone to forming voids.
For example, the higher quality oxide 318 can include silicon oxide without any additives, such that voids do not form in response to annealing. Absent such additives, the higher quality oxide 318 is a low aspect ratio oxide, wherein the low aspect ratio oxide is configured to fill regions having a height-to-width ratio (i.e., aspect ratio) less than ten-to-one (10:1). Thus, the higher quality oxide 318 is designed to fill each top region 316(1)-316(4) without forming voids, while also filling voids in the bottom regions 312(1)-312(4) that are adjacent to the corresponding top regions 316(1)-316(4). For example, the higher quality oxide 318 fills the top region 316(1), and also fills the void 320(1) of the bottom region 312(1).

With continuing reference to FIG. 3, the first FinFET 302(1) employs a gate 322(1) formed of a conductive material disposed over the channel structures 306(1)-306(3) and the STI trenches 310(1), 310(2). The first FinFET 302(1) also employs a source (not shown) disposed on a first side of the channel structures 306(1)-306(3) and the STI trenches 310(1), 310(2), and a drain (not shown) disposed on a second side of the channel structures 306(1)-306(3) and the STI trenches 310(1), 310(2) opposite the first side. In this manner, as previously described, because the void 320(1) is adjacent to the top region 316(1) in the STI trench 310(1), the higher quality oxide 318 of the top region 316(1) fills the void 320(1). Thus, the top region 316(1) prevents the conductive material of the gate 322(1) from filling the void 320(1). The second FinFET 302(2) similarly employs a gate 322(2), a source (not shown), and a drain (not shown) formed of a conductive material. However, the void 320(2) is not adjacent to the top region 316(3) of the STI trench 310(3). In this manner, the void 320(2) is not filled with the higher quality oxide 318, but neither is it easily reached by the conductive material of the gate 322(2).
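The fin-pitch and aspect-ratio arithmetic in the 10 nm-node example above can be checked with a short script (a sketch only; the dimensions and the ten-to-one (10:1) cutoff between high and low aspect ratio oxides are taken from the text, while the variable and function names are ours):

```python
# Worked check of the aspect-ratio example (dimensions from the text).
FIN_WIDTH_NM = 10      # each fin is approximately 10 nm wide
TRENCH_WIDTH_NM = 25   # each STI trench is approximately 25 nm wide
FIN_HEIGHT_NM = 150    # each fin is approximately 150 nm tall
HIGH_AR_CUTOFF = 10.0  # high aspect ratio oxides fill regions above 10:1

# Fin pitch is one fin width plus one trench width.
fin_pitch_nm = FIN_WIDTH_NM + TRENCH_WIDTH_NM

# Aspect ratio of an STI trench is its height divided by its width.
aspect_ratio = FIN_HEIGHT_NM / TRENCH_WIDTH_NM

print(fin_pitch_nm)   # 35 (nm)
print(aspect_ratio)   # 6.0, i.e., a 6:1 trench

# A high aspect ratio oxide handles regions above 10:1, so it can
# also fill this 6:1 trench; a low aspect ratio oxide (below 10:1)
# suffices only for the shallower top region.
print(aspect_ratio < HIGH_AR_CUTOFF)  # True
```

Note that as the fin pitch shrinks at smaller nodes, the trench width term decreases while the height stays roughly fixed, which is why the aspect ratio climbs toward the 10:1 regime where only the lower quality, high aspect ratio oxide can fill the bottom region.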
The top region 316(1) thus electrically isolates the gate 322(1) from the source and the drain of the first FinFET 302(1) by preventing the conductive material of the gate 322(1) from filling the void 320(1). Additionally, because the higher quality oxide 318 is not prone to forming voids, the higher quality oxide 318 of the top regions 316(1)-316(4) does not include voids through which the gates 322(1), 322(2) could electrically couple to the corresponding sources and drains. In this manner, filling the top regions 316(1)-316(4) of each STI trench 310(1)-310(4) with the higher quality oxide 318 prevents STI void-induced electrical shorts in the semiconductor device 300.

FIG. 4 illustrates an exemplary fabrication process 400 for fabricating the semiconductor device 300 of FIG. 3 without STI void-induced electrical shorts. Further, FIGS. 5A-5F provide cross-sectional views of the semiconductor device 300 during various steps of the fabrication process 400. The cross-sectional views of the semiconductor device 300 in FIGS. 5A-5F are discussed in conjunction with the exemplary fabrication steps of the fabrication process 400 of FIG. 4.

In this regard, the fabrication process 400 includes providing the substrate 304 and the STI trenches 310(1)-310(7), with the channel structures 306(1)-306(8) disposed over the substrate 304 (block 402, FIG. 5A). In this regard, each STI trench 310(1)-310(7) is formed between a corresponding pair of the channel structures 306(1)-306(8). Additionally, in this aspect, pad oxides 500(1)-500(8) are disposed over each channel structure 306(1)-306(8), and nitride hard masks 502(1)-502(8) are disposed over each pad oxide 500(1)-500(8). In this manner, each pad oxide 500(1)-500(8) and each nitride hard mask 502(1)-502(8) protects the corresponding channel structure 306(1)-306(8) from damage during the fabrication process 400.
The fabrication process 400 also includes providing the lower quality oxide 314 in each of the STI trenches 310(1)-310(7) (block 404, FIG. 5A). The fabrication process 400 may also include annealing the lower quality oxide 314 (block 406, FIG. 5A). For example, a first anneal at a temperature between approximately 450 degrees Celsius (°C) and 700 °C can be performed, followed by a second anneal at a temperature between approximately 850 °C and 1100 °C. As previously described, annealing the lower quality oxide 314 in block 406 may cause shrinkage of the lower quality oxide 314, resulting in the void 320(1).

With continued reference to FIG. 4, to form the first and second FinFETs 302(1), 302(2) in the semiconductor device 300, the fabrication process 400 can also include disposing the hard mask 504 above the channel structures 306(1)-306(8) such that an opening 506(1) is formed over a first subset 508(1) of the channel structures 306(7), 306(8) (block 408, FIG. 5B). In this manner, the hard mask 504 covers a second subset 508(2) of the channel structures 306(1)-306(3) and a third subset 508(3) of the channel structures 306(4)-306(6), wherein the second subset 508(2) and the third subset 508(3) are disposed on either side of the first subset 508(1) of the channel structures 306(7), 306(8). The fabrication process 400 may also include etching the first subset 508(1) of the channel structures 306(7), 306(8) and the substrate 304 such that a deep STI trench 308(1) is formed between the second subset 508(2) of the channel structures 306(1)-306(3) and the third subset 508(3) of the channel structures 306(4)-306(6) (block 410, FIG. 5C). In this manner, the second subset 508(2) of the channel structures 306(1)-306(3) corresponds to the first FinFET 302(1), and the third subset 508(3) of the channel structures 306(4)-306(6) corresponds to the second FinFET 302(2).
Additionally, in this aspect, the hard mask 504 further includes an opening 506(2) such that a deep STI trench 308(2) is formed to electrically isolate the second FinFET 302(2) from other components in the semiconductor device 300.

With continued reference to FIG. 4, the fabrication process 400 further includes etching the lower quality oxide 314 in each of the STI trenches 310(1)-310(4) down to a top layer 510 of the bottom regions 312(1)-312(4) of each of the STI trenches 310(1)-310(4) (block 412, FIG. 5D). Further, the fabrication process 400 includes disposing the higher quality oxide 318 over the lower quality oxide 314 in the top regions 316(1)-316(4) of each of the STI trenches 310(1)-310(4) (block 414, FIG. 5E). In addition to filling the STI trenches 310(1)-310(4), the higher quality oxide 318 also fills the void 320(1) formed in the lower quality oxide 314, which is adjacent to the top layer 510 of the bottom region 312(1). In this aspect, the fabrication process 400 can also include disposing the higher quality oxide 318 in the deep STI trenches 308(1), 308(2) (block 416, FIG. 5E). The higher quality oxide 318 can be disposed in block 416 using a conventional high aspect ratio process (HARP). To complete the first and second FinFETs 302(1), 302(2), the fabrication process 400 can include forming gates 322(1), 322(2) over the channel structures 306(1)-306(3), 306(4)-306(6), respectively (block 418, FIG. 5F). To form the gates 322(1), 322(2) in this aspect, the pad oxides 500(1)-500(3), 500(4)-500(6) and the nitride hard masks 502(1)-502(3), 502(4)-502(6) are first removed. Additionally, the gates 322(1), 322(2) may be formed using conventional fabrication techniques, such as a high-k metal gate (HKMG) process. An interlayer dielectric (ILD) may also be disposed to fill the gaps in the semiconductor device 300.
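The block numbers of fabrication process 400 form an ordered flow. A minimal sketch of that flow as data is shown below; the block numbers come from the text, but the one-line step descriptions are paraphrases, not the patent's exact language.

```python
# Hypothetical encoding of fabrication process 400; step wording paraphrased.
PROCESS_400 = [
    (402, "provide substrate 304, channel structures 306(1)-306(8), STI trenches 310(1)-310(7)"),
    (404, "provide lower quality oxide 314 in the STI trenches"),
    (406, "anneal the lower quality oxide 314 (may form void 320(1))"),
    (408, "dispose hard mask 504 with opening 506(1) over first subset 508(1)"),
    (410, "etch first subset 508(1) and substrate 304 to form deep STI trench 308(1)"),
    (412, "etch lower quality oxide 314 down to top layer 510"),
    (414, "dispose higher quality oxide 318 in top regions 316(1)-316(4)"),
    (416, "dispose higher quality oxide 318 in deep STI trenches 308(1), 308(2)"),
    (418, "form gates 322(1), 322(2)"),
]


def block_order_ok(steps) -> bool:
    """The process blocks should appear in ascending numeric order."""
    nums = [block for block, _ in steps]
    return nums == sorted(nums)
```

Note that FIG. 6's process 600 substitutes for blocks 416 and 418 in the alternative aspect described later.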
As previously described, the void 320(1) is filled with the higher quality oxide 318 such that the conductive material of the gate 322(1) cannot fill the void 320(1), preventing electrical shorts caused by STI voids in the semiconductor device 300.

In addition to providing the higher quality oxide 318 in the deep STI trenches 308(1), 308(2) of the semiconductor device 300 as in FIG. 3, other aspects may employ both the lower quality and higher quality oxides 314, 318 in the deep STI trenches 308(1), 308(2). In this regard, FIG. 6 illustrates an exemplary fabrication process 600 that can be substituted for blocks 416, 418 of FIG. 4 such that the lower quality and higher quality oxides 314, 318 are disposed in the deep STI trenches 308(1), 308(2). Further, FIGS. 7A-7C provide cross-sectional views showing a semiconductor device 700 during various steps of the fabrication process 600. The cross-sectional views of the semiconductor device 700 in FIGS. 7A-7C will be discussed in conjunction with the discussion of the exemplary fabrication steps in the fabrication process 600 of FIG. 6.

In this regard, the fabrication process 600 includes providing the lower quality oxide 314 in the deep STI trenches 308(1), 308(2) (block 602, FIG. 7A). The fabrication process 600 also includes etching the lower quality oxide 314 in the deep STI trenches 308(1), 308(2) down to a top layer 702 of bottom regions 704(1), 704(2) of each deep STI trench 308(1), 308(2) (block 604, FIG. 7A). Additionally, the fabrication process 600 includes disposing the higher quality oxide 318 over the lower quality oxide 314 in top regions 706(1), 706(2) of each deep STI trench 308(1), 308(2) (block 606, FIG. 7B). Similar to the fabrication process 400 of FIG. 4, the fabrication process 600 can also include forming the gates 322(1), 322(2) over the channel structures 306(1)-306(3), 306(4)-306(6), respectively (block 608, FIG. 7C).
Forming the semiconductor device 700 with the lower quality and higher quality oxides 314, 318 in the deep STI trenches 308(1), 308(2) in this manner can reduce manufacturing costs, because less of the higher quality oxide 318 is used compared to the semiconductor device 300 of FIG. 3.

In addition to the semiconductor device 300 of FIG. 3, which employs the first and second FinFETs 302(1), 302(2) and the STI trenches 310(1)-310(4) filled with the lower quality and higher quality oxides 314, 318, alternative FET types can be used in other aspects while still preventing electrical shorts caused by STI voids. In this regard, FIG. 8 illustrates a cross-sectional view of an exemplary semiconductor device 800 employing first and second nanowire FETs 802(1), 802(2) and having no electrical shorts caused by STI voids. The semiconductor device 800 includes certain components in common with the semiconductor device 300 of FIG. 3, as shown by like component numbers between FIGS. 3 and 8, and these components will not be re-described herein. In this manner, the first nanowire FET 802(1) employs corresponding channel structures 804(1)-804(3) disposed over the substrate 304. Additionally, the second nanowire FET 802(2) employs corresponding channel structures 804(4)-804(6) disposed over the substrate 304.

With continued reference to FIG. 8, the semiconductor device 800 further includes the STI trenches 310(1)-310(4) formed between each pair of corresponding channel structures 804(1)-804(6). Specifically, referring to the first nanowire FET 802(1), the STI trench 310(1) is formed between the channel structures 804(1), 804(2), and the STI trench 310(2) is formed between the channel structures 804(2), 804(3). Additionally, referring to the second nanowire FET 802(2), the STI trench 310(3) is formed between the channel structures 804(4), 804(5), and the STI trench 310(4) is formed between the channel structures 804(5), 804(6).
Further, the first nanowire FET 802(1) employs a gate 806(1) disposed over the channel structures 804(1)-804(3) and the STI trenches 310(1), 310(2). The gate 806(1) is formed using nanowires 808(1)-808(9) surrounded by a conductive material. The second nanowire FET 802(2) similarly employs a gate 806(2) disposed over the channel structures 804(4)-804(6) and the STI trenches 310(3), 310(4). The gate 806(2) is formed using nanowires 808(10)-808(18) surrounded by a conductive material.

With continued reference to FIG. 8, as described with reference to FIG. 3, each STI trench 310(1)-310(4) in the semiconductor device 800 includes a corresponding bottom region 312(1)-312(4) filled with the lower quality oxide 314, and a corresponding top region 316(1)-316(4) filled with the higher quality oxide 318. In this manner, the higher quality oxide 318 in the top region 316(1) fills the void 320(1) such that the gate 806(1) does not create an electrical short caused by an STI void. Thus, similar to the semiconductor device 300 of FIG. 3, the top regions 316(1)-316(4) of each STI trench 310(1)-310(4) are filled with the higher quality oxide 318 to prevent electrical shorts caused by STI voids in the semiconductor device 800.

The elements described herein are sometimes referred to as means for achieving particular functions. In this regard, the substrate 304 is sometimes referred to herein as "a means for providing a substrate." The channel structures 306(1)-306(6) and 804(1)-804(6) are sometimes referred to herein as "a means for providing a plurality of channel structures disposed over a substrate and corresponding to a FET." Further, the STI trenches 310(1)-310(4) are sometimes referred to herein as "a means for providing one or more STI trenches."

Additionally, while aspects provided herein include semiconductor devices having multiple FETs, such as the first and second FinFETs 302(1), 302(2), other aspects may include semiconductor devices having a single FET.
As a non-limiting example, a semiconductor device can employ a single FET having a plurality of channel structures and, as described above, an STI trench with the lower and higher quality oxides between each channel structure.

Semiconductor devices employing FETs having multiple channel structures and having no electrical shorts caused by STI voids in accordance with aspects disclosed herein may be provided in or integrated into any processor-based device. Examples include, without limitation: a set-top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet computer, a phablet, a server, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and a vehicle.

In this regard, FIG. 9 illustrates an example of a processor-based system 900 that can employ the semiconductor devices 300, 700, and 800 illustrated in FIGS. 3, 7C, and 8, respectively. In this example, the processor-based system 900 includes one or more central processing units (CPUs) 902, each of which includes one or more processors 904. The CPU 902 can have a cache 906 coupled to the processors 904 for rapid access to temporarily stored data. The CPU 902 is coupled to a system bus 908 and can couple the master and slave devices included in the processor-based system 900 to one another. As is well known, the CPU 902 communicates with these other devices by exchanging address information, control information, and data information over the system bus 908. For example, the CPU 902 can communicate bus transaction requests to a memory controller 910, which is an example of a slave device. Although not illustrated in FIG. 9, multiple system buses 908 may be provided, with each system bus 908 constituting a different fabric.

Other master and slave devices can be connected to the system bus 908. By way of example, as illustrated in FIG. 9, these devices may include a memory system 912, one or more input devices 914, one or more output devices 916, one or more network interface devices 918, and one or more display controllers 920. The input devices 914 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output devices 916 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface devices 918 can be any device configured to allow exchange of data to and from a network 922. The network 922 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface devices 918 can be configured to support any type of communications protocol desired. The memory system 912 can include one or more memory units 924(0)-924(N).

The CPU 902 may also be configured to access the display controllers 920 over the system bus 908 to control information sent to one or more displays 926. The display controllers 920 send information to the displays 926 to be displayed via one or more video processors 928, which process the information to be displayed into a format suitable for the displays 926.
The displays 926 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc.

Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, as instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or as combinations of both. By way of example, the master and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip. The memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. The processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
The processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The aspects disclosed herein may be embodied in hardware and in instructions stored in hardware, and may reside, for example, in random access memory (RAM), flash memory, read only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

It should also be noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different orders other than the illustrated order. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more of the operational steps discussed in the exemplary aspects may be combined. It will be readily apparent to those of skill in the art that many different modifications can be made to the operational steps illustrated in the flowcharts. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The invention relates to low contact resistance graphene device integration. An electronic device (100) has a graphene layer (106) comprising one or more atomic layers of graphene, with low resistance contacts that include a carbon-doped metal layer (110) directly on the graphene layer (106). The electronic device (100) is formed by forming a carbon-doped metal layer (110) on a substrate layer (102) of the electronic device (100). The carbon-doped metal layer (110) is subsequently heated to a temperature above which carbon (114) in the carbon-doped metal layer (110) becomes mobile, and is subsequently cooled. The carbon (114) in the carbon-doped metal layer (110) forms the graphene layer (106) under the carbon-doped metal layer (110) and over the substrate layer (102). The carbon-doped metal layer (110) is removed from an area outside of a contact area, leaving the carbon-doped metal layer (110) in the contact area to provide a contact layer to the graphene layer (106).
1. An electronic device comprising:
a substrate layer having a top surface;
a graphene layer over the substrate layer comprising at least one graphene atomic layer; and
a contact layer directly on the graphene layer, the contact layer comprising a carbon-doped metal, wherein the contact layer exposes one or more regions of the graphene layer.
2. The electronic device of claim 1, wherein the carbon-doped metal comprises a metal selected from the group consisting of cobalt, nickel, copper, ruthenium, rhodium, palladium, silver, osmium, iridium, platinum, and gold.
3. The electronic device of claim 1, wherein a concentration of carbon atoms in the carbon-doped metal is approximately equal to a saturation concentration of carbon in the carbon-doped metal between about 400 °C and about 1100 °C.
4. The electronic device of claim 1, wherein a top surface of the carbon-doped metal is substantially free of graphite material.
5. The electronic device of claim 1, further comprising a vertical contact on the contact layer.
6. The electronic device of claim 1, further comprising a lateral interconnect on the contact layer.
7. The electronic device of claim 1, wherein the graphene layer provides a channel layer of a field effect transistor, and wherein the contact layer provides a drain terminal of the field effect transistor.
8. A method comprising:
providing a substrate layer having a top surface;
forming a carbon-doped metal layer over the top surface;
heating the carbon-doped metal layer to form a graphene layer including at least one graphene atomic layer over the top surface of the substrate layer and under the carbon-doped metal layer; and
removing the carbon-doped metal layer in a region other than a contact region to form a contact layer of the carbon-doped metal layer directly on the graphene layer.
9. The method of claim 8, wherein heating the carbon-doped metal layer heats the carbon-doped metal layer to between about 400 °C and about 1100 °C.
10. The method of claim 8, wherein heating the carbon-doped metal layer forms a layer of graphite material on an upper surface of the carbon-doped metal layer, located opposite from the graphene layer.
11. The method of claim 10, comprising removing the layer of graphite material prior to removing the carbon-doped metal layer in the region other than the contact region.
12. The method of claim 8, wherein forming the carbon-doped metal layer comprises:
forming a metal layer over the substrate layer;
heating the metal layer to between about 400 °C and about 1100 °C; and
flowing a carbon-containing reagent gas over the metal layer while heating the metal layer.
13. The method of claim 8, wherein forming the carbon-doped metal layer comprises:
forming a metal layer over the substrate layer; and
implanting carbon into the metal layer.
14. The method of claim 8, wherein forming the carbon-doped metal layer comprises sputtering metal and carbon from a carbon-doped metal target by a physical vapor deposition (PVD) process.
15. The method of claim 8, wherein the carbon-doped metal comprises a metal selected from the group consisting of cobalt, nickel, copper, ruthenium, rhodium, palladium, silver, osmium, iridium, platinum, and gold.
16. The method of claim 8, further comprising:
forming an etch mask over the carbon-doped metal layer after forming the graphene layer, the etch mask covering an area of the graphene layer for a component of an electronic device; and
removing the carbon-doped metal layer and the graphene layer where exposed by the etch mask, before removing the carbon-doped metal layer in the region other than the contact region.
17. The method of claim 8, wherein removing the carbon-doped metal layer in the region other than the contact region comprises:
forming a contact etch mask over the carbon-doped metal layer, the contact etch mask covering the carbon-doped metal layer in the contact region;
removing the carbon-doped metal layer where exposed by the contact etch mask, leaving the graphene layer substantially intact; and
subsequently removing the contact etch mask.
18. The method of claim 17, wherein removing the carbon-doped metal layer where exposed by the contact etch mask comprises wet etching.
19. The method of claim 18, wherein the wet etching uses an etch solution selected from the group consisting of nitric acid in an organic solvent, an aqueous solution containing nitric acid, an aqueous solution of ferric chloride (FeCl3), an aqueous solution of potassium permanganate (KMnO4), and a dilute aqueous solution of hydrofluoric acid.
20. The method of claim 8, further comprising forming a vertical contact directly on the carbon-doped metal layer of the contact layer.
Low contact resistance graphene device integration

Technical field
The present invention relates to the field of electronic devices. More specifically, the present invention relates to a graphene layer in an electronic device.

Background
Graphene has desirable properties for components of electronic devices, such as high electron mobility, high current carrying capacity, high thermal conductivity, and bipolar behavior. Successful integration of graphene requires a relatively defect-free graphene layer and a low resistance contact to the graphene layer, at a fabrication cost that is competitive with alternative structures using conventional materials and processes. Much effort has been expended in pursuit of these goals, but integrating graphene into electronic devices in a commercially viable manner remains problematic.

Summary of the invention
The following summary is presented to provide a basic understanding of one or more aspects of the invention. This summary is not an extensive overview of the invention, and is not intended to identify key or critical elements of the invention. Rather, the summary is intended to illustrate, in simplified form, certain embodiments of the invention.

A method of forming an electronic device that includes a graphene layer having a low resistance contact comprises forming a carbon-doped metal layer on a substrate layer of the electronic device. The carbon-doped metal layer is then heated to a temperature at which the carbon in the carbon-doped metal layer is at a saturated concentration. The carbon-doped metal layer is then cooled, forming a graphene layer comprising one or more graphene atomic layers at a bottom surface of the carbon-doped metal layer, directly adjacent to the substrate layer. The carbon-doped metal layer is removed from a region other than a contact region, thereby leaving the carbon-doped metal layer in the contact region to provide a low resistance contact layer to the graphene layer.
Also disclosed is an electronic device comprising a graphene layer having a contact layer of carbon-doped metal.

Brief description of the drawings
FIG. 1 is a cross section of an exemplary electronic device having a graphene layer and a carbon-doped metal contact layer on the graphene layer.
FIGS. 2A through 2I are cross sections of an electronic device that includes a graphene layer having a carbon-doped metal contact layer, depicted in successive stages of an exemplary method of formation.
FIGS. 3A through 3G are cross sections of an electronic device that includes a graphene layer having a carbon-doped metal contact layer, depicted in successive stages of another exemplary method of formation.
FIG. 4 depicts another method of forming a carbon-doped metal layer in a process of forming an electronic device having a graphene layer with a carbon-doped contact layer.
FIG. 5 is a cross section of another exemplary electronic device having a graphene layer and a carbon-doped metal contact layer on the graphene layer.

Detailed description
The invention is described with reference to the drawings. The figures are not drawn to scale and are provided merely to illustrate the invention. Several aspects of the invention are described below with reference to exemplary applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide an understanding of the invention. The present invention is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a method in accordance with the present invention.

Note that terms such as top, bottom, over, and under may be used in the present invention.
These terms are not to be interpreted as limiting the position or orientation of a structure or element, but rather are used to provide a spatial relationship between structures or elements.

For the purposes of the present invention, it will be understood that if an element is referred to as being "on" or "over" another element, it may be directly on the other element, or intervening elements may be present. If an element is referred to as being "directly on" or "directly over" another element, it is understood that no other elements are intentionally interposed between them.

FIG. 1 is a cross section of an exemplary electronic device having a graphene layer and a carbon-doped metal contact layer on the graphene layer. For example, the electronic device 100 can be an integrated circuit, a discrete electronic component (e.g., a resistor, capacitor, or antenna), an electronic display component (e.g., a light-emitting diode screen), an electronic transducer (e.g., a speaker or actuator), or an electronic sensor. The electronic device 100 has a substrate layer 102 that can include a dielectric material extending to a top surface 104 of the substrate layer 102. For example, the dielectric material can comprise silicon dioxide, a silica-based material, silicon nitride, aluminum oxide, ceramic, silicone polymer, or an organic polymer. The substrate layer 102 can be disposed over an electronic material such as silicon, silicon carbide, gallium arsenide, gallium nitride, cadmium telluride, perovskite, or graphene. The electronic device 100 includes a graphene layer 106 disposed over the top surface 104. The graphene layer 106, comprising one or more graphene atomic layers, provides a conductive element of a component 108 of the electronic device 100. For the purposes of the present invention, the term "graphene atomic layer" is understood to mean a graphene layer one atom thick.
In the current example, the component 108 is implemented as a field effect transistor 108, and the graphene layer 106 provides a channel layer of the field effect transistor 108. Other embodiments of the component 108 using the graphene layer 106 (e.g., resistors, capacitors, or sensors) are within the scope of the current example. The carbon-doped metal contact layer 110 is disposed directly on the graphene layer 106. The contact layer 110 does not extend over the entire graphene layer 106. For example, the contact layer 110 can comprise cobalt, nickel, copper, ruthenium, rhodium, palladium, silver, osmium, iridium, platinum, gold, or any combination thereof. These metals are not exhaustive and are provided by way of example. Other metals suitable for incorporating carbon to form a contact layer on the graphene layer are within the scope of the current example. The contact layer 110 can comprise a homogeneous alloy or mixture of two or more different metals, such as a nickel-copper alloy. The contact layer 110 can comprise a layered structure of two or more layers having different metals. Carbon atoms 114 are schematically depicted in the contact layer 110 by circles in FIG. 1. The concentration of the carbon atoms 114 in the carbon-doped metal may range from a few parts per million to a few atomic percent, and may be approximately equal to the saturation concentration of carbon in the metal at the temperature at which the graphene layer 106 is formed. For example, cobalt, nickel, palladium, and rhodium have higher carbon saturation concentrations than silver, gold, and copper. The temperature at which the graphene layer 106 is formed may range, for example, from about 400 °C to about 1100 °C. At temperatures below about 400 °C, the carbon atoms 114 may not have sufficient mobility to form the graphene layer 106.
At temperatures above about 1100 °C, degradation of the carbon-doped metal (e.g., loss of adhesion to the substrate layer 102) can occur, or degradation of materials and components of the electronic device 100 can occur. The average thickness 112 of the contact layer 110 may also depend on the saturation concentration of carbon in the metal at the temperature at which the graphene layer 106 is formed. A metal having a carbon saturation concentration in the range of 1 atomic percent to 3 atomic percent may have an average thickness 112 in the range of 50 nanometers to 500 nanometers. The average thickness 112 of a metal having a lower carbon saturation concentration may have a correspondingly larger value. The contact resistivity between the contact layer 110 and the graphene layer 106 may be less than 10⁻⁷ ohm·cm². In the present example, the contact layers 110 provide source and drain terminals of the field effect transistor 108, and the field effect transistor 108 includes the contact layers 110 and a gate structure 118.

An upper dielectric layer 116 is disposed over the substrate layer 102 and the field effect transistor 108. For example, the upper dielectric layer 116 can comprise a silicon nitride liner, a main layer of a silica-based material (e.g., borophosphosilicate glass (BPSG)), and a silicon nitride or silicon oxynitride cap layer. The gate structure 118 of the field effect transistor 108 is disposed in the upper dielectric layer 116, between the contact layers 110 on the graphene layer 106. The gate structure 118 includes a gate dielectric layer 120 that comprises a high-k dielectric material such as hafnium oxide, zirconium oxide, or the like. A gate 122 is disposed on the gate dielectric layer 120. The gate 122 can include a liner 124, a work function layer 126, and a fill layer 128, as depicted in FIG. 1. Alternatively, the gate 122 may have a homogeneous structure of polysilicon, metal silicide, titanium nitride, or a metallic element such as nickel, cobalt, or rhodium.
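The stated contact resistivity bound (less than 10⁻⁷ ohm·cm²) translates directly into a per-contact resistance for a given contact area, via R = ρc / A. The sketch below illustrates this; the contact dimensions in the example are hypothetical and not taken from the patent.

```python
# Back-of-envelope contact resistance from a specific contact resistivity.
RHO_C_MAX_OHM_CM2 = 1e-7  # upper bound on contact resistivity from the text


def contact_resistance_ohms(rho_c_ohm_cm2: float, area_cm2: float) -> float:
    """Resistance of one contact: specific contact resistivity / contact area."""
    return rho_c_ohm_cm2 / area_cm2


# Hypothetical 0.1 um x 1 um contact footprint:
# 1e-5 cm * 1e-4 cm = 1e-9 cm^2, giving a 100 ohm upper bound per contact.
area_cm2 = 1e-5 * 1e-4
r_max = contact_resistance_ohms(RHO_C_MAX_OHM_CM2, area_cm2)
```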
Other structures for the gate 122 are within the scope of the current example. In the present example, vertical contacts 130 are disposed through the upper dielectric layer 116 to be directly connected to the contact layer 110. For the purposes of the present disclosure, the term "vertical" is understood to mean the direction perpendicular to the plane of the top surface of electronic device 100. In one version of the current example, the vertical contacts 130 may comprise a liner 132 of titanium and titanium nitride or tantalum nitride, and a tungsten fill metal 134. In another version, the vertical contacts 130 can comprise a tantalum or tantalum nitride liner 132 and a copper fill metal 134. In yet another version, the vertical contacts 130 can comprise carbon nanotubes, graphene, or graphite materials. In one version of the current example, the vertical contacts 130 can have the same structure and composition as the contacts to the active components of the electronic device 100. In another version, the vertical contacts 130 can have the same structure and composition as the vias to the lateral interconnects of the electronic device 100. The low value of the contact resistivity between the contact layer 110 and the graphene layer 106 can provide a desired value for the drive current of the field effect transistor 108. FIGS. 2A through 2I are cross sections of an electronic device including a graphene layer having a carbon-doped metal contact layer, depicted in successive stages of an exemplary formation method. Referring to FIG. 2A, the electronic device 200 has a substrate layer 202. Substrate layer 202 can include a dielectric material that extends to top surface 204 of substrate layer 202. A metal layer 236 is formed over the top surface 204.
In the current example, the metal layer 236 can be substantially free of carbon, i.e., the concentration of carbon in the metal layer 236 immediately after formation can be less than a few parts per million. Metal layer 236 comprises a metal suitable for subsequent formation of graphene, such as cobalt, nickel, copper, ruthenium, rhodium, palladium, silver, iridium, platinum, gold, or any combination thereof. As explained with reference to FIG. 1, these metals are not exhaustive and are provided by way of example. Other metals now known or later recognized will be within the scope of the current examples. Metal layer 236 can comprise a homogeneous alloy or mixture of two or more different metals. The metal layer 236 can comprise a layered structure of two or more layers having different metals, such as a copper/nickel/copper stack. The metal layer 236 can be formed, for example, by a sputtering process, an evaporation process, a chemical vapor deposition (CVD) process, or an atomic layer deposition (ALD) process. The thickness 238 of the metal layer 236 is affected by the saturation concentration of carbon in the metal layer 236 at the temperature at which the graphene layer is subsequently formed. In the current example, carbon will be introduced into the metal layer 236 at elevated temperatures. Sufficient carbon must be introduced into the metal layer 236 to provide carbon atoms for the subsequently formed graphene layer 206 of FIG. 2C. For examples in which the metal layer 236 comprises primarily a metal having a high saturation concentration of carbon (e.g., cobalt, nickel, ruthenium, and/or palladium), the thickness 238 can be, for example, from 50 nanometers to 500 nanometers. For examples in which the metal layer 236 comprises primarily a metal having a lower saturation concentration of carbon (e.g., silver, copper, gold, or platinum), the thickness 238 can be greater than 500 nanometers. Referring to FIG.
2B, carbon atoms 214, schematically represented by circles in FIG. 2B, are introduced into the metal layer 236 of FIG. 2A to form a carbon-doped metal layer 240 over the top surface 204 of the substrate layer 202. In the present example, carbon atoms 214 can be introduced into metal layer 236 by heating metal layer 236 while flowing a carbon-containing reagent gas (denoted "carbon containing reagent gas" in FIG. 2B) over metal layer 236. Metal layer 236 can be heated to a temperature of, for example, from about 400 °C to about 1100 °C. Metal layer 236 is heated by heating process 242. In some versions of the current example, the electronic device 200 can be placed in a wafer furnace or other apparatus capable of batch operation, thereby providing low process cost. In other versions of the current example, electronic device 200 can be placed in a chemical vapor deposition (CVD) chamber having a heated substrate chuck to provide a high level of process control. In still other versions of the current example, the electronic device 200 can be placed in a rapid thermal processor (RTP) chamber with a radiant heat source to provide high throughput. The carbon-containing reagent gas may comprise methane, ethane, propane, or another alkane; ethanol, propanol, or another alcohol; or an aromatic reagent such as camphor. Other carbon-containing reagent gases are within the scope of the current examples. Heating process 242 continues and the carbon-containing reagent gas flows over metal layer 236 until the desired concentration of carbon atoms 214 is obtained in carbon-doped metal layer 240 at the desired temperature. A temperature of at least about 400 °C provides sufficient mobility of carbon atoms 214 in metal layer 236. The desired concentration of carbon atoms 214 may be close to the carbon saturation concentration of the carbon-doped metal layer 240 at the desired temperature for introduction of carbon atoms 214.
The carbon saturation concentration depends on the metal composition of the metal layer 236 and on the temperature of the carbon-doped metal layer 240. Generally, the carbon saturation concentration increases as the temperature of the carbon-doped metal layer 240 increases. Process control of the concentration of carbon atoms 214 is therefore promoted by increasing the temperature of the carbon-doped metal layer 240. However, the temperature is limited by the need to avoid degradation of existing components and materials of the electronic device 200. In versions of the current example where the existing components and materials are free of plastics, transistors, metal interconnects, and the like, the temperature can be extended to approximately 1100 °C. The presence of temperature-sensitive materials or components may require a reduced temperature. After the desired concentration of carbon atoms 214 is obtained, the flow of the carbon-containing reagent gas is stopped. Referring to FIG. 2C, the carbon-doped metal layer 240 is cooled by reducing the thermal power provided by the heating process 242. The carbon-doped metal layer 240 can be cooled by abruptly turning off the thermal power or by ramping down the thermal power (as schematically indicated in FIG. 2C) at a cooling rate of, for example, 1 °C/min to 150 °C/sec. A lower cooling rate in the range of 1 °C/min to 10 °C/min is applicable to versions of the current example performed in a furnace apparatus capable of batch operation. A moderate cooling rate in the range of 10 °C/min to 1 °C/sec is applicable to versions of the current example performed in a CVD chamber with a heated substrate chuck. A higher cooling rate in the range of 1 °C/sec to 150 °C/sec is applicable to versions of the current example performed in an RTP chamber with a radiant heat source.
As the carbon-doped metal layer 240 cools, the carbon saturation concentration of the carbon-doped metal layer 240 decreases below the actual carbon concentration of the carbon-doped metal layer 240 while the carbon atoms 214 are still mobile at the current temperature of the carbon-doped metal layer 240. A first portion of carbon atoms 214 migrates to a lower surface of carbon-doped metal layer 240 adjacent top surface 204 of substrate layer 202, and a graphene layer 206 is formed over top surface 204 and under carbon-doped metal layer 240. A second portion of the carbon atoms 214 migrates to the upper surface of the carbon-doped metal layer 240 and forms a disposable graphite layer 244 positioned opposite the graphene layer 206. The disposable graphite layer 244 may comprise graphene or other graphite material and may comprise other forms of carbon. Referring to FIG. 2D, in the current example, the disposable graphite layer 244 can optionally be removed such that at least a portion of the carbon-doped metal layer 240 on the graphene layer 206 remains intact. The disposable graphite layer 244 can be removed, for example, by using an ashing process 246 of oxygen radicals (as schematically depicted in FIG. 2D). Other processes for removing the disposable graphite layer 244 (e.g., dry etching processes using ozone) are within the scope of the current examples. After removing the disposable graphite layer 244, the top surface of the carbon-doped metal layer 240 is substantially free of graphite material, i.e., the amount of graphite material on the top surface of the carbon-doped metal layer 240 is sufficiently low so as not to interfere with electrical connections to the contact layer subsequently formed from the carbon-doped metal layer 240. Referring to FIG. 2E, a graphene etch mask 248 is formed over the carbon-doped metal layer 240 to cover regions for subsequently formed components using the graphene layer 206.
For example, the graphene etch mask 248 can comprise a photoresist formed by a photolithography process, and can also include an anti-reflective layer, such as a bottom anti-reflective coating (BARC). In another example, the graphene etch mask 248 can comprise a hard mask material, such as silicon nitride. The carbon-doped metal layer 240 and the graphene layer 206 are removed where exposed by the graphene etch mask 248. The carbon-doped metal layer 240 and the graphene layer 206 may be removed by a dry etching process 250, which may include a reactive ion etching (RIE) process using halogen radicals such as chlorine (Cl) radicals (schematically depicted in FIG. 2E). Alternatively, the carbon-doped metal layer 240 and the graphene layer 206 can be removed by a wet etching process. The graphene etch mask 248 is then removed, for example by an ashing process. After removal of the graphene etch mask 248, the carbon-doped metal layer 240 remains in place over the graphene layer 206. Referring to FIG. 2F, a contact etch mask 252 is formed over the carbon-doped metal layer 240 to cover regions of the carbon-doped metal layer 240 for the subsequently formed contact layer. The contact etch mask 252 may comprise a photoresist formed by a photolithography process. The photoresist can undergo a baking process to improve adhesion to the carbon-doped metal layer 240 in order to reduce undercut during subsequent wet etching. Referring to FIG. 2G, the carbon-doped metal layer 240 of FIG. 2F is removed where exposed by the contact etch mask 252, leaving the remaining carbon-doped metal layer 240 under the contact etch mask 252 to provide contact layer 210 on the graphene layer 206. The process of removing the carbon-doped metal layer 240 is performed so as to maintain the graphene layer 206 substantially intact, i.e., the remaining graphene layer 206 provides functionality for a subsequently formed component having the graphene layer 206.
The carbon-doped metal layer 240 can be removed using a wet etch process 254, as indicated in FIG. 2G. For example, the wet etch process 254 may comprise nitric acid in an organic solvent, an aqueous solution containing nitric acid, an aqueous solution of ferric chloride (FeCl3), an aqueous solution of potassium permanganate (KMnO4), or a dilute aqueous solution of hydrofluoric acid. The wet etch process 254 can include using an aqueous solution of nitric acid or potassium permanganate to remove a portion of the carbon-doped metal layer 240, followed by a timed etch in a dilute aqueous solution of hydrofluoric acid to remove residual carbon-doped metal layer 240 while minimizing erosion of the underlying graphene layer 206. After the carbon-doped metal layer 240 exposed by the contact etch mask 252 is removed, the contact etch mask 252 is removed, such as by a solvent-based wet process, to avoid oxidation of the graphene layer 206. Referring to FIG. 2H, a gate structure 218 of component 208 (denoted as field effect transistor 208 in FIG. 2H) is formed over graphene layer 206. The gate structure 218 of the current example can include a gate dielectric layer 220 over the graphene layer 206 and a gate 222 over the gate dielectric layer 220. Other embodiments of the gate structure 218 are within the scope of the current examples. An upper dielectric layer 216 is formed over the substrate layer 202, the graphene layer 206, the gate structure 218, and the contact layer 210. The upper dielectric layer 216 can include one or more layers of dielectric material, such as disclosed with respect to FIG. 1. The upper dielectric layer 216 is sometimes referred to as a pre-metal dielectric (PMD) layer 216. A contact hole 256 is formed through the upper dielectric layer 216 to expose the contact layer 210. Contact hole 256 can be formed, for example, by RIE process 258 using fluorine radicals, as schematically indicated in FIG. 2H.
The process for forming contact holes 256 can remove a portion of contact layer 210, as depicted in FIG. 2H. The contact layer 210 advantageously protects the underlying graphene layer 206 from degradation by the process used to form the contact holes 256. Referring to FIG. 2I, vertical contacts 230 are formed in contact holes 256 for electrical connection to contact layer 210. The contact layer 210 provides a low-resistance electrical connection between the vertical contacts 230 and the graphene layer 206. Vertical contact 230 can have a liner and a fill metal structure similar to vertical contact 130 of FIG. 1. Other structures of the vertical contacts 230 (e.g., solid filled metal structures) are within the scope of the current examples. FIGS. 3A through 3G are cross sections of an electronic device including a graphene layer having a carbon-doped metal contact layer, depicted in successive stages of another exemplary formation method. Referring to FIG. 3A, electronic device 300 has a substrate layer 302 having a top surface 304. A metal layer 336 that is substantially free of carbon is formed over the top surface 304. Metal layer 336 comprises a metal suitable for subsequent formation of graphene, such as cobalt, nickel, copper, ruthenium, rhodium, palladium, silver, iridium, platinum, gold, or any combination thereof. Metal layer 336 can be formed, for example, by the processes disclosed with reference to FIG. 2A. For example, metal layer 336 may have a thickness 338 of from 150 nanometers to 250 nanometers. Referring to FIG. 3B, carbon atoms 314 are implanted into metal layer 336 of FIG. 3A to form a carbon-doped metal layer 340. In the present example, carbon is introduced into the metal layer 336 at a relatively low temperature under conditions far from thermal equilibrium, thereby allowing the desired carbon content to be obtained. The subsequently formed graphene layer will be formed under conditions close to thermal equilibrium.
Therefore, sufficient carbon must be introduced to provide carbon for the subsequently formed graphene layer and the subsequently formed disposable graphite layer. A single graphene atomic layer has a carbon atom density of about 4 × 10¹⁵ cm⁻². Therefore, the dose of the implanted carbon atoms 314 can be as low as 1 × 10¹⁶ cm⁻² to form a graphene layer having one graphene atomic layer. Alternatively, the dose of the implanted carbon atoms 314 may exceed 6 × 10¹⁶ cm⁻² to form a graphene layer having five graphene atomic layers. Moreover, the dose of implanted carbon atoms 314 is sufficient to provide the carbon atoms 314 remaining in the carbon-doped metal layer 340 after formation of the graphene layer. Referring to FIG. 3C, a graphene etch mask 348 is formed over the carbon-doped metal layer 340 to cover regions for the subsequently formed graphene layer. For example, the graphene etch mask 348 can comprise a photoresist and an anti-reflective layer, or can comprise a hard mask material. The carbon-doped metal layer 340 is removed, for example by a dry etch process 350, where it is exposed by the graphene etch mask 348. Alternatively, the carbon-doped metal layer 340 can be removed by a wet etch process. Subsequently, the graphene etch mask 348 is removed, such as by an ashing process. After removal of the graphene etch mask 348, the carbon-doped metal layer 340 remains in place. Referring to FIG. 3D, the carbon-doped metal layer 340 is heated by a heating process 342 to a temperature of, for example, about 400 °C to about 1100 °C. Heating process 342 can be a radiant heat process, as schematically depicted in FIG. 3D. Alternatively, the heating process 342 can be a wafer furnace batch operation or a single-substrate heated chuck operation.
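The implant-dose arithmetic above (about 4 × 10¹⁵ carbon atoms per cm² per graphene atomic layer, with extra dose left as residual carbon in the metal) can be sketched as a small helper. The surplus factor below is a hypothetical knob, chosen only so that one layer lands near the quoted 1 × 10¹⁶ cm⁻² figure; it is not specified in the source.

```python
# Hypothetical dose estimator (surplus_factor is an assumption, not from the
# source): dose = layers x monolayer areal density x surplus margin, where
# the margin leaves residual carbon behind in the metal contact layer.

MONOLAYER = 4.0e15  # carbon atoms per cm^2 in one graphene atomic layer

def implant_dose(n_layers: int, surplus_factor: float = 2.5) -> float:
    """Estimated carbon implant dose, in atoms/cm^2, for n graphene layers."""
    return n_layers * MONOLAYER * surplus_factor

print(implant_dose(1))  # about 1e16, near the quoted single-layer dose
print(implant_dose(5))  # about 5e16; the text quotes a dose exceeding 6e16
```

A larger surplus factor for the five-layer case would reproduce the quoted 6 × 10¹⁶ cm⁻² figure; the point is only that the dose scales with the number of layers plus a residual-carbon margin.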
As the temperature of the carbon-doped metal layer 340 rises, the carbon atoms 314 become sufficiently mobile to diffuse within the carbon-doped metal layer 340, such that the carbon atoms 314 are dissolved in the carbon-doped metal layer 340. The heating process 342 is then reduced to cause the temperature of the carbon-doped metal layer 340 to drop. During this process of heating and subsequently cooling the carbon-doped metal layer 340, a first portion of the carbon atoms 314 migrates to a lower surface of the carbon-doped metal layer 340 adjacent the top surface 304 of the substrate layer 302, and a graphene layer 306 is formed over the top surface 304 and under the carbon-doped metal layer 340. A second portion of the carbon atoms 314 migrates to the upper surface of the carbon-doped metal layer 340 and forms a disposable graphite layer 344 positioned opposite the graphene layer 306. Referring to FIG. 3E, a contact etch mask 352 is formed over the disposable graphite layer 344 to cover regions of the carbon-doped metal layer 340 for the subsequently formed contact layer. Contact etch mask 352 can comprise a photoresist formed by a photolithography process. In the current example, the disposable graphite layer 344 can be removed using a plasma process 346 where exposed by the contact etch mask 352. The disposable graphite layer 344 is much thinner than the contact etch mask 352, allowing the disposable graphite layer 344 to be removed without significantly degrading the contact etch mask 352. Referring to FIG. 3F, the carbon-doped metal layer 340 of FIG. 3E is removed where exposed by the contact etch mask 352, leaving the remaining carbon-doped metal layer 340 under the contact etch mask 352 to provide contact layer 310 on the graphene layer 306. The removal of the carbon-doped metal layer 340 can be performed using a wet etch process 354, as disclosed with reference to FIG. 2G.
After the contact layer 310 is formed, the contact etch mask 352 is removed. Referring to FIG. 3G, graphene layer 306 and contact layer 310 provide elements of component 308 of electronic device 300. In the present example, component 308 is represented as resistor 308, with graphene layer 306 providing the body of resistor 308. An upper dielectric layer 316 is formed over the substrate layer 302 and over the component 308. Vertical contacts 330 are formed through upper dielectric layer 316 and through disposable graphite layer 344 for electrical connection to contact layer 310. The contact layer 310 provides a low-resistance electrical connection between the vertical contacts 330 and the graphene layer 306. Vertical contact 330 can have liner 332 and fill metal structure 334 as depicted in FIG. 3G, or can have any of the structures disclosed with reference to FIG. 1. FIG. 4 depicts another method of forming a carbon-doped metal layer for a process of forming an electronic device having a graphene layer with a carbon-doped contact layer. The electronic device 400 has a substrate layer 402 having a top surface 404. The carbon-doped metal layer 440 is formed on the top surface 404 by a physical vapor deposition (PVD) process using a carbon-doped metal target 460. The carbon-doped metal target 460 contains a sufficient density of carbon atoms 414 to provide a desired density of carbon atoms 414 in the carbon-doped metal layer 440. The PVD process can use an inert gas ambient, represented by argon ions 462 in FIG. 4, to sputter metal and carbon from the carbon-doped metal target 460, which are subsequently deposited in the carbon-doped metal layer 440 on the substrate layer 402.
Forming the carbon-doped metal layer 440 by a PVD process can reduce manufacturing cost and complexity of the electronic device 400. After the carbon-doped metal layer 440 is formed by the PVD process, a graphene layer is formed on the top surface 404 of the substrate layer 402 from the carbon atoms 414 in the carbon-doped metal layer 440. The graphene layer can be formed, for example, as disclosed with reference to FIG. 3D. The thickness 438 of the carbon-doped metal layer 440 is such that sufficient carbon atoms 414 are present in the carbon-doped metal layer 440 to form the graphene layer. The carbon-doped contact layer is formed from the carbon-doped metal layer 440, such as disclosed with reference to FIGS. 2F and 2G or with reference to FIGS. 3E and 3F. FIG. 5 is a cross section of another exemplary electronic device having a graphene layer and a carbon-doped metal contact layer on the graphene layer. The electronic device 500 has a substrate layer 502 having a top surface 504. The electronic device 500 includes a graphene layer 506 disposed over the top surface 504. Graphene layer 506 provides a conductive feature for component 508 of electronic device 500. In the current example, component 508 is represented as antenna 508. Other embodiments of components 508 using graphene layer 506 (e.g., transistors, resistors, capacitors, or sensors) are within the scope of the current examples. The carbon-doped metal contact layer 510 is disposed directly on the graphene layer 506. Contact layer 510 does not extend over the entire graphene layer 506. Carbon atoms 514 are schematically depicted by circles in FIG. 5. The contact resistivity between the contact layer 510 and the graphene layer 506 may be less than 10⁻⁷ ohm·cm². Upper dielectric layer 516 is disposed over substrate layer 502 and graphene layer 506. The upper dielectric layer 516 can comprise a plurality of sub-layers of dielectric material.
In the current example, lateral interconnects 566 are disposed directly on contact layer 510. For the purposes of the present disclosure, the term "lateral" is understood to mean a direction parallel to the plane of the top surface of electronic device 500. For example, the lateral interconnects 566 can be aluminum interconnects, damascene copper interconnects, or plated copper interconnects. The aluminum interconnects may comprise an aluminum layer having a few percent of silicon, titanium, and/or copper, which may be disposed on an adhesion layer comprising titanium, and may have a titanium nitride anti-reflective layer on the aluminum layer. The damascene interconnects may comprise copper disposed on a barrier layer of tantalum and/or tantalum nitride in a trench. The plated copper interconnects may comprise a primary layer of plated copper, an adhesion layer at the bottom of the primary layer of plated copper, and possibly a barrier layer disposed on the sides of the primary layer of plated copper. Other structures and materials for the lateral interconnects 566 (e.g., carbon nanotube bundles, graphene layers, etc.) are within the scope of the current examples. Having a lateral interconnect 566 directly on the contact layer 510 can achieve an advantageously compact structure of the electronic device 500. While various embodiments of the invention have been described above, they are presented by way of example only, and not limitation. Numerous variations of the disclosed embodiments can be made in accordance with the disclosure herein without departing from the spirit and scope of the invention. Therefore, the breadth and scope of the present invention should not be limited by any of the embodiments described above. Instead, the scope of the invention should be defined in accordance with the appended claims and their equivalents.
A method for supporting a download of an application from a server to a wireless device is disclosed. The method includes the steps of: (a) downloading the application and an associated reference from the server, (b) determining, prior to execution of the application and based upon the associated reference, whether execution of the application requires a virtual machine to be loaded on the wireless device, and (c) if the determining step determines that the virtual machine is required for execution of the application, performing a second step of determining whether the virtual machine is loaded on the wireless device. If the second determining step determines that the virtual machine is not loaded on the wireless device, a second download request is sent to the server for the virtual machine and the virtual machine is downloaded, thereby making it available to the application. The steps of first determining, second determining, sending a second download request, and making the virtual machine available are performed automatically on the wireless device without interaction with a user of the wireless device.
CLAIMS

1. A method for supporting a download of an application from a server to a wireless device, comprising: downloading from the server the application and an associated reference; first determining, prior to execution of the application, based upon the associated reference, whether execution of the application requires a virtual machine to be loaded on the wireless device; if said first determining step determines that the virtual machine is required for the execution of the application, performing the step of second determining whether the virtual machine is loaded on the wireless device; if said second determining step determines that the virtual machine is not loaded on the wireless device, sending a second download request to the server for the virtual machine, and making the virtual machine available to the application after the virtual machine has been downloaded from the server in response to the second download request; wherein said steps of first determining, second determining, sending a second download request, and making the virtual machine available are performed automatically on the wireless device without interaction with a user of the wireless device.

2. The method of claim 1, wherein the application is written in Java code and the virtual machine is a Java virtual machine.

3. A wireless device readable medium including instructions stored thereon that, when executed by a processor of a wireless device, cause the wireless device to perform operations, the instructions comprising:
instructions to download from a server an application and an associated reference; instructions to first determine, prior to execution of the application, based upon the associated reference, whether execution of the application requires a virtual machine to be loaded on the wireless device; instructions to second determine, if said first determining determines that the virtual machine is required for execution of the application, whether the virtual machine is loaded on the wireless device; and instructions to automatically download from the server the virtual machine, and make the virtual machine available to the application, when it is determined that the application requires the virtual machine.

4. A wireless device, comprising: means for downloading an application and an associated reference; first means for automatically determining, prior to execution of the application and without interaction with a user of the wireless device, based upon the associated reference, whether execution of the application requires a virtual machine to be loaded on the wireless device; second means for automatically determining, without interaction with the user of the wireless device, whether the virtual machine is loaded on the wireless device when said first means for determining has determined that the application requires a virtual machine; and means for automatically sending, without interaction with the user of the wireless device, a second download request for the virtual machine when said second means for determining has determined that the virtual machine is not loaded on the wireless device.
WO 2002/075527 PCT/US2002/008394

DYNAMICALLY DOWNLOADING AND EXECUTING SYSTEM SERVICES ON A WIRELESS DEVICE

BACKGROUND

[0001] Typically, system services, including virtual machines, viewers and plug-ins, need to be installed on a device in order to be used by other objects or applications that require them. For example, Java applets run on a device that has a Java virtual machine loaded. Consequently, a device that intends to execute a Java applet will install the Java virtual machine on the device. Typically, because the virtual machine, or other system service, needs to be integrated into the device it is executing on, the installation is performed in advance of trying to download or execute the applet or other application wanting to take advantage of the system service.

[0002] Some devices, particularly wireless devices, however, have a constrained environment. Memory, including secondary storage and primary memory for active programs and data, and processing power are scarcer than on other, larger computer systems. Consequently, it is advantageous to download or install some system services, such as virtual machines, only on an as-needed basis. Unfortunately, on these constrained devices, system services are required to be installed or loaded into memory and take up valuable resources even when not used.
Furthermore, users who want to use applications on devices that do not already have the applications' supporting system services are hampered or prevented from doing so because the system services were not already installed on the device.

SUMMARY OF THE INVENTION

[0003] The present invention satisfies the shortcomings in the art by providing a system and method for dynamically downloading and installing system services, such as virtual machines, viewers, plug-ins, flash players, or other executable content or data, in a device based on the needs of the application running on the device.

[0004] In one embodiment, the present invention involves downloading a system service onto a wireless device to be used with an application also being downloaded. The system service may be downloaded automatically when the application is downloaded, without user intervention.

[0005] In yet another embodiment, the present invention also detects whether a virtual machine is present when an application is loaded into the device's memory for execution, and loads and executes the virtual machine, if necessary, so the application may run in the virtual machine's environment.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] Reference will now be made in detail to the presently exemplary and preferred embodiments of the invention as illustrated in the accompanying drawings, in which like reference characters designate like or corresponding parts throughout the several drawings.
The nature, objectives and advantages of the present invention will become more apparent to those skilled in the art after considering the following detailed description in connection with the accompanying drawings.

[0007] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the invention and, together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain the principles of the invention. In the drawings:

[0008] Figure 1 depicts one exemplary embodiment of the environment in which the present invention may be practiced;

[0009] Figure 2 depicts the process involved with downloading an application which uses a system service consistent with an exemplary embodiment of the present invention; and

[0010] Figure 3 depicts the process of loading and executing a virtual machine installed on the wireless device in response to loading an application which uses the virtual machine.

DESCRIPTION OF AN EXEMPLARY EMBODIMENT

[0011] Figure 1 depicts one exemplary embodiment of the environment in which the present invention may be practiced. In this embodiment, the device 100 communicates with the server 105 using a wireless network 125. The device 100 may be a wireless device that may transmit and/or receive data and/or voice. The wireless device 100 may request various pieces of information from the server, including applications 110 and system services 115, such as a virtual machine 120, used by the wireless device.

[0012] The wireless device 100 may contain a processor and memory, primary and secondary (not shown), used to store, load and execute the applications 110 and system services 115 downloaded from the server.
These applications 110 and system services 115 may also interact with a software platform located on the wireless device used to simplify interfacing with the wireless device, such as by providing generalized calls for device-specific resources. One such software platform is the Binary Runtime Environment for Wireless™ (BREW™) software developed by QUALCOMM, Inc., San Diego, California.

[0013] It will be recognized by those skilled in the art that the wireless device's 100 architecture may also contain an assortment of other components (not shown or not specifically indicated). Such components include, but are not limited to: a display, speaker, microphone, and buttons allowing alphanumeric and symbol inputs. The wireless device may also contain a battery, multiple storage mechanisms, such as ROMs, RAMs, and flash memory, an operating system, and a compilation component to aid in the execution of applications, system services, and other executable code, and the manipulation of data, on the device.

[0014] While the system services and applications are depicted as being located on the server 105, it will be recognized by those skilled in the art that the applications and software may not be physically located at the server. In this case, the server may make a request for the applications and system services for the wireless device from other systems and then download the requested files, or may transfer the request to another system for direct transfer of the requested files to the wireless device.

[0015] As is recognized by those skilled in the art, Fig. 1 is one exemplary environment for the present invention. The device may include other types of systems, including wireless and non-wireless devices.
In addition, the device may communicate with the server and other systems through multiple network types and communication architectures, including wireless and non-wireless, private and public, switched and non-switched, direct links, and any combination thereof.

[0016] Figure 2 depicts the process involved with downloading an application which uses a system service, consistent with an exemplary embodiment of the present invention. A device, such as the wireless device 100, requests an application from the server 105 (Step 200). This request may have been initiated because a user requests a specific application, such as a game, from the wireless device. The device may be configured such that the applications available to the user are not all resident on the device itself; instead, the device presents a representation of those applications that the user may access.

[0017] This request, however, may be non-user initiated and may include tasks for system maintenance and tasks not involving direct user interaction. In addition, the request may be for types of files other than applications, including data, system services, or other types of information.

[0018] The device then receives the application from the server and stores it (Step 205). In one embodiment, the server sends the requested application to the device. As stated above, however, the application may not be physically stored at the server; yet the server may receive the request and initiate the application download to the device.

[0019] The device then checks the application to determine whether the application uses a system service (Step 210). The application may include a reference, or some identifier, indicating that a system service is used with the application.
This reference may be included with the application or associated with the application in some other manner.

[0020] In one embodiment, the application is a Java applet and requires the use of a Java virtual machine (a system service) to execute. The device may contain a software platform, such as BREW™, described above. The device, using the software platform, determines that a Java virtual machine is used by the applet by checking whether there are any references by the applet to an object class indicating a Java virtual machine. In one embodiment, each object class is represented by a unique 32-bit identifier, and this identifier can be used to determine which object classes are referenced.

[0021] The device then determines if the system service is already installed on the device (Step 215). In one embodiment, this is performed by checking the internal tables listing the object classes installed on the device. Using the 32-bit identifier referenced by the downloaded application, the internal tables are checked to determine if the referenced object class is installed, or, using the above example, whether the Java virtual machine is already installed on the device.

[0022] If the system service is not installed, then the "no" branch is followed and the system proceeds to download the system service (Step 220). Following the above example, if the Java virtual machine used by the downloaded application is not installed in the device, then the device makes a request to the server to download the Java virtual machine. Additionally, if other system services are needed, they may also be downloaded to the device.

[0023] It will be recognized by those skilled in the art that the downloading of system service(s) used by the downloaded application can be performed without any action by the user. Other than some possible transmission delays or indications, the user may be completely unaware that these downloads are occurring.
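By way of illustration only, the check of Steps 210 through 220 might be sketched as follows. The class identifier value, table layout, and function names here are hypothetical and are not part of the specification; they merely demonstrate comparing the object classes referenced by a downloaded application against the device's internal table of installed object classes.

```python
# Hypothetical 32-bit object-class identifier for a Java virtual machine;
# the actual value would be assigned by the platform vendor.
JAVA_VM_CLASS_ID = 0x4A564D01

def services_to_download(app_class_refs, installed_classes):
    """Steps 210-215: compare the object classes referenced by the
    downloaded application against the device's internal table of
    installed object classes; anything missing must be fetched (Step 220)."""
    return [cid for cid in app_class_refs if cid not in installed_classes]

# A downloaded applet referencing the Java VM, on a device whose
# internal table lists only unrelated base classes.
applet_refs = [JAVA_VM_CLASS_ID]
installed = {0x00000001, 0x00000002}

missing = services_to_download(applet_refs, installed)
assert missing == [JAVA_VM_CLASS_ID]  # device requests this service from the server
```

If the device's table already listed the identifier, the list would be empty and the "yes" branch of Step 215 would be followed with no download.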
It may, however, be desirable to inform the user that additional downloads are taking place. This is an implementation preference left to those practicing the present invention.

[0024] It will be further recognized by those skilled in the art that the downloading of a system service may be independent of whether the application was downloaded or not (i.e., the downloading of the system service may be initiated because of applications installed at the factory or otherwise transferred onto the device).

[0025] If the system service is already installed, as determined in Step 215, or after it is downloaded in Step 220, then the system service is available for when the application is executed. It will be recognized by those skilled in the art that the system service may be downloaded onto the device but not loaded in the device for execution.

[0026] While the above description incorporates the use of the device making the determination of whether the system services are needed and downloading the system service in response to a request from the device, the invention also embodies the process where the server, or other system, performs the determination as to whether system services are needed and downloads the system service based on whether the device already has the system service installed or not.

[0027] Figure 3 depicts the process of loading and executing a virtual machine installed on the wireless device in response to loading an application which uses the virtual machine. The process begins by having an application selected from those applications available to the wireless device (Step 300). This selection may be performed by a user wishing to execute the application. The selection, however, may be performed without user intervention by the device or in some other automated manner.

[0028] After the application is selected, the device loads the application into memory (Step 305) for execution.
(Depending on the environment in which the application is executing, the loading of the application may be considered part of the application's execution.) During this loading phase, the loader (the component loading the application for execution) requests the virtual machine services (Step 310). In one embodiment, the loader may perform this using an Application Programming Interface (API) mechanism built into the software platform, described above, identifying the virtual machine using a unique class identifier.

[0029] For example, the application may be a Java applet requiring the use of a Java virtual machine to execute on the wireless device. A loader loading the Java applet on the device for execution may request Java virtual machine services by using a BREW™ API mechanism identifying the Java virtual machine by a unique identifier. In one embodiment, this identifier is a 32-bit class identifier.

[0030] The device then determines whether the virtual machine system service is loaded into memory (Step 315). The device may do this by checking the object classes loaded into memory. It is preferable that an identifier associated with each object class be used in order to track those system services, applications, executable files, data, other data types, or object classes that are loaded.

[0031] The device, or specifically in one embodiment the BREW software platform, makes the determination as to whether the virtual machine requested by the loader is already in memory.

[0032] If the virtual machine is not already loaded into memory as determined in Step 315, the "no" branch is followed and the device loads the virtual machine (Step 320).
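The loading phase of Steps 305 through 320 might be sketched as follows, again for illustration only: the application name, class identifier value, and device structure are hypothetical, and a real platform such as BREW would track loaded object classes internally rather than in a plain set.

```python
def load_application(app, device):
    """Steps 305-320: load the application into memory, have the loader
    request the virtual machine by its unique class identifier, and load
    the VM only if it is not already in memory."""
    device["memory"].append(app["name"])        # Step 305: load the application
    vm_id = app["vm_class_id"]                  # Step 310: loader requests VM services
    if vm_id not in device["loaded_classes"]:   # Step 315: VM already in memory?
        device["loaded_classes"].add(vm_id)     # Step 320: load the virtual machine
    return device

applet = {"name": "chess-applet", "vm_class_id": 0x4A564D01}
device = {"memory": [], "loaded_classes": set()}

load_application(applet, device)
assert 0x4A564D01 in device["loaded_classes"]  # VM now available for Step 325
```

Loading a second application that references the same class identifier would follow the "yes" branch of Step 315 and skip Step 320, since the identifier is already tracked as loaded.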
If the virtual machine is not already installed on the wireless device, the virtual machine may be downloaded onto the wireless device from an external source, such as a server or other computer system which has access to the virtual machine software.

[0033] Depending on the device and operating platform on which the present invention is implemented, as well as the system service, the system service may require an additional execution or start step following its loading. The system service, or virtual machine in this embodiment, should be in a state in the device that is accessible to the downloaded application when executed, or possibly to other processes.

[0034] Following Step 320, or if the determination is made after Step 315 that the virtual machine is already loaded, the application then runs in the virtual machine environment (Step 325). If the system service is not a virtual machine, the application can now use the system service downloaded for the application. In the Java virtual machine example, the Java applet executes in the Java virtual machine environment.

[0035] If the device contains a software platform, such as the BREW™ software, to simplify the interface from the application to the wireless device, the virtual machine passes system service requests made by the application or the virtual machine to the software platform (Step 330).

[0036] It will be recognized that while Figure 3 discusses an application using a virtual machine during execution, this is for exemplary purposes, and the dynamic loading of system services other than the virtual machine, other executable content, and data used with applications is considered within the scope of the present invention.

CONCLUSION

[0037] The present invention allows for the dynamic download and execution of system services on a device. In one embodiment, an applet requiring a virtual machine is downloaded to a wireless device.
A software platform on the device determines that a virtual machine is used by the applet during execution. Without additional user interaction, the virtual machine is downloaded to the wireless device. This allows those applets requiring a virtual machine to be used with devices that do not have the virtual machine already installed.

[0038] Another embodiment of the present invention allows the dynamic loading of a system service, such as a virtual machine, when an application that uses the system service is being loaded. In this embodiment, the loading is performed using a unique identifier associated with the system service that allows the device to determine if the system service is loaded.

[0039] The foregoing description of an implementation of the invention has been presented for purposes of illustration and description. It is not exhaustive and does not limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, the described implementation includes software, but one embodiment of the present invention may be implemented as a combination of hardware and software or in hardware alone. The invention may be implemented with both object-oriented and non-object-oriented programming systems. Additionally, although aspects of the present invention are described as being stored in memory, those skilled in the art will appreciate that these aspects can also be stored on other types of computer-readable media, such as secondary storage devices, like hard disks, floppy disks, or CD-ROM; a carrier wave from the Internet or other propagation medium; or other forms of RAM or ROM. The scope of the invention is defined by the claims and their equivalents.

We claim: